Responsibilities
Design and implement data pipelines for ingesting, transforming, and enriching structured and unstructured content at scale, selecting appropriate technologies.
Build and maintain secure, high-performance backend services and APIs that power document analytics and GenAI applications.
Collaborate with data scientists to productionize AI prototypes and integrate them into enterprise-ready solutions, including deployment automation, monitoring, and lifecycle management.
Integrate diverse data sources from internal databases, file systems, and third-party systems into the document analytics platform, ensuring data quality, compliance, and adherence to schema evolution best practices.
Contribute to the architecture of scalable, modular components supporting multi-tenant, cross-business-unit deployment.
Implement monitoring, logging, and alerting mechanisms to ensure the reliability and observability of data and AI services.
Establish engineering best practices and contribute to CI/CD workflows across data pipelines and application services.
Participate in technical discussions and design reviews to align engineering efforts with product and platform goals.
Qualifications
Completed university degree in Computer Science or a comparable qualification.
At least 3 years of professional experience in software development with Python, including hands-on experience with data processing frameworks.
Demonstrated expertise in designing scalable software architectures and applying design patterns, along with strong familiarity with modern software engineering practices.
Hands-on experience with Microsoft Azure cloud services, including Azure DevOps and related tooling.
Experience in building data processing pipelines and integrating Data Science or AI solutions into production environments is a strong advantage.
Strong analytical and problem-solving skills, with the ability to work independently and collaboratively in a cross-functional team.
Proficiency in English and German.