Your mission
We are looking for a Go Platform Engineer who thrives at the intersection of infrastructure, AI systems, and DevOps. In this role, you will architect and scale the backbone of our AI Platform, ensuring high availability, low latency, and seamless integration of machine learning capabilities into production. You will own the microservices that power AI inference, build robust multi-tenant infrastructure, and support our Data & AI team with production-grade DevOps practices.
Your responsibilities:
Design, build, and maintain Go microservices that handle AI model inference, data processing pipelines, and real-time streaming workflows.
Architect scalable APIs (gRPC/REST) that serve as the bridge between AI models and production applications.
Own the Kubernetes infrastructure (EKS), including deployments, autoscaling policies, service mesh, and cluster health monitoring.
Implement service-to-service communication using gRPC and message queues (RabbitMQ/SQS) for asynchronous processing.
Integrate with cloud AI services (AWS Bedrock, OpenAI, Anthropic) and manage model serving infrastructure.
Build multi-tenant capabilities including authentication (JWT/JWKS), rate limiting, usage tracking, and tenant isolation.
Partner with the Data & AI team to productionize machine learning models—wrapping them in production-ready services with proper health checks, circuit breakers, and graceful degradation.
Build comprehensive observability: structured logging, metrics (Prometheus), distributed tracing (Jaeger/Tempo), and alerting.
Implement CI/CD pipelines and infrastructure-as-code (Terraform) for automated deployments and disaster recovery.
Ensure high availability through proper monitoring, incident response, and post-mortem analysis.
Optimize resource utilization for GPU workloads and cost-efficient scaling strategies.
Your profile
Go Expertise: 3+ years of professional Go development experience with strong understanding of concurrency patterns, interfaces, channels, and error handling.
Kubernetes Production Experience: 3+ years managing production Kubernetes clusters, including deployments, services, ingress controllers, resource management, and troubleshooting.
Distributed Systems Knowledge: Deep understanding of CAP theorem, eventual consistency, idempotency, circuit breakers, and fault-tolerant design.
gRPC & Async Messaging: Hands-on experience with gRPC/Protocol Buffers and message queues (RabbitMQ, SQS, Kafka) in production systems.
Cloud Platform Experience: Strong experience with AWS services (EKS, S3, DynamoDB, Lambda) or equivalent cloud providers.
DevOps Mindset: Experience with Docker, CI/CD pipelines, infrastructure-as-code, and GitOps workflows.
Spoken language: You communicate confidently in English (C1 level); German skills are a plus.
Why us?
A responsible role with meaning: we build software to digitise the social care sector, giving our customers more time for care & support and enabling a better life for their clients
A remote working model to keep your everyday life flexible
Exciting, challenging tasks in a dynamic, future-oriented environment
A culture of appreciation and a harmonious working atmosphere in a growing, international company with opportunities to get involved
A creative working environment, flat hierarchies and short decision-making processes
Attractive remuneration models and a permanent employment contract
Contact information
If this sounds like you, we look forward to receiving your application, including your earliest possible start date, through our online application form.
About Us
Welcome to myneva - together, we shape digital care.
myneva is one of the leading European software providers for the social sector. Our solutions focus on shaping the world around our clients and their needs. By digitising processes, we help caregivers gain more time to support their clients, enabling them to enjoy a better quality of life.
As an ambitious team, we are pursuing further internationalisation with a clear mission to become #1 in Europe.