Help us develop big data applications using Spark, Kafka, Storm, Hive, Hadoop, and Elasticsearch. Write modules to analyze big data. Analyze big data elements to determine which areas require coding in Scala, Python, or Java. Support and monitor users' code implementations to ensure there are no bugs, security concerns, or inconsistencies. Integrate external data sources and APIs.
Responsibilities
Design the overall architecture of the data pipelines.
Maintain quality and ensure performance of applications.
Collaborate with the rest of the engineering team to design and launch new features.
Maintain code integrity and organization.
Nice to Have
Experience with cloud platforms (AWS, GCP, or Azure)
Experience with traditional BI tools (Cognos, Qlik, ...)
Experience with databases like Cassandra, Snowflake, Oracle, or MS SQL
Experience with distributed file systems like HDFS (Hadoop)
Experience building UIs (HTML/CSS, JavaScript, and frameworks like Angular, Vue, React, ...)
Must Have
At least 2 years of experience with Scala, Python, or core Java
Knowledge of design patterns and data structures
Positive can-do mentality
Benefits
Pension fund, health insurance and guaranteed income, profit-share clause, and bonus
Paid time off: 32 days of paid holidays
Save on your commute: projects close to your home
Eat & drink: monthly food and drink allowance (meal vouchers)