As a Data Engineer, you will be responsible for designing and implementing robust data pipelines and storage solutions that meet business requirements.
Main Responsibilities:
* Data Pipeline Development: Design and optimize data pipelines using Apache Spark to process large-scale batch and streaming datasets.
* External Data Integration: Work with REST APIs to retrieve and integrate external data into our systems.
* Agile Team Collaboration: Collaborate with data scientists and engineers in Agile teams to ensure seamless communication and efficient project execution.
* Quality and Testing: Implement and maintain data quality checks, testing, and monitoring to meet business requirements.
* Automation and CI/CD: Contribute to the implementation of automation best practices and CI/CD pipelines to improve efficiency and reduce errors.
Requirements:
* Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* Experience: 2-5 years of experience as a Data Engineer in Big Data environments.
* Skills: Strong skills in Apache Spark, SQL, and data integration. Comfortable with Git, Airflow, and CI/CD pipelines.
* Language: Fluent in English (minimum B2 level).
* Soft Skills: Proactive, detail-oriented, and a strong communicator.
Benefits:
We offer a dynamic work environment and opportunities for professional growth.