Job Overview:
* We are looking for an experienced Data Engineer with a strong background in large-scale data processing, particularly with Apache Spark.
Responsibilities:
* Design and implement robust data pipelines and storage systems to meet the needs of business and technical stakeholders.
* Build and optimize data pipelines with Apache Spark (Python and/or Scala).
* Process and analyze large datasets using various tools and techniques.
* Collaborate with data scientists and engineers in Agile teams to achieve project goals.
* Ensure data quality and integrity, and implement monitoring.
Profile:
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
* 2 to 5 years of experience as a Data Engineer in Big Data environments.
* Strong skills in Apache Spark (Python and/or Scala), SQL, and data integration.
* Proficiency in Git, Airflow, and CI/CD pipelines.
* Experience with REST APIs and object storage (S3/MinIO).
* Awareness of data governance topics such as data lineage, metadata, PII, and data contracts.
* Fluent in French and English (minimum B2 level).
* Proactive, detail-oriented, and a strong communicator.
Key Skills:
* Apache Spark (Python and/or Scala)
* SQL
* Data Integration
* Git
* Airflow
* CI/CD Pipelines
* REST APIs
* Object Storage (S3/MinIO)
Benefits:
* Opportunity to work on complex data projects.
* Chance to collaborate with experienced data professionals.
* Professional growth and development opportunities.
Requirements:
1. Ability to work independently and collaboratively in a team environment.
2. Excellent problem-solving and analytical skills.
3. Strong communication and interpersonal skills.
4. Ability to adapt to changing project requirements.
How to Apply:
Please submit your resume and cover letter.