Responsibilities
- Be part of a small, skilled team working on Data Lake platforms
- Implement reliable data pipelines; optimize and monitor performance
- Build, deploy & maintain data applications
- Collaborate with data analysts & data scientists on complex projects
Requirements
- Hands-on development experience with at least one of the following programming languages: Scala, Java, or Python
- Good SQL knowledge and experience with relational databases (experience with big data engines like Hive, Impala, or Kudu is a plus)
- Experience building data pipelines, preferably in Spark
- Proficiency with Linux and bash scripting
- Advanced English
Nice to have
- Experience with stream-processing frameworks such as Spark Streaming, Flink, or Beam
- Hands-on experience with Hadoop-based systems (Cloudera, HPE Data Fabric), Kafka, Elasticsearch, HBase, Cassandra, or MongoDB
- Experience with orchestration frameworks (e.g., Apache Airflow)
- Knowledge of CI/CD and unit testing
- Experience with development in Azure or another cloud platform
Do you want to work on projects with a significant impact on our daily lives and make the world a better place through technology?
We deliver end-to-end data applications for large companies & start-ups all around the world. We analyze extensive data sets and build stream-processing, machine-learning, and IoT solutions, applying our skills & know-how to get the most out of the data we work with and improve the business of our customers in the logistics, energy, finance, and genetics industries.
We are looking for engineers at every stage, whether you are at the very start of your career or bring years of experience. We offer competitive salaries, work flexibility, and comprehensive benefits. Want to join us on our journey? Reach out to us at email@example.com!