Big Data Engineer Role Overview
We are looking for an experienced Big Data Engineer to join our team. The successful candidate will be responsible for designing and implementing robust data pipelines and storage solutions.
Key Responsibilities
* Design, build, and optimize data pipelines using Apache Spark (Python and/or Scala) to process large-scale batch and streaming datasets.
* Use REST APIs to retrieve and integrate external data securely and efficiently.
* Collaborate with data scientists and engineers to ensure high-quality data processing and to implement CI/CD pipelines.
* Organize and manage data in on-prem object storage, promoting data governance awareness throughout the organization.
Requirements
* Bachelor's or master's degree in Computer Science, Engineering, or a related field is required.
* Two to five years of experience as a Data Engineer in Big Data environments is essential.
* Strong skills in Apache Spark (Python and/or Scala), SQL, and data integration are crucial for success in this position.
* Fluency in French and English (minimum B2 level) is expected.
* A proactive, detail-oriented candidate with strong communication skills is ideal for this role.