Tasks

- Co-develop the core architecture for distributed AI inference on programmable network nodes (switches, SmartNICs, heterogeneous hardware accelerators in base stations).
- Design and optimize AI models (DNNs, LLMs, etc.) for low-latency, resource-constrained, and energy-efficient execution.
- Lead the hardware/software co-design of custom AI accelerators targeting reconfigurable architectures (FPGAs, etc.).
- Contribute to the synthesis, optimization, and deployment of AI accelerators.
- Design scalable scheduling, orchestration, and dynamic resource allocation algorithms for distributed and edge AI execution.
- Understand and integrate with 5G/6G telecommunication architectures and protocols (RAN, core, MEC, etc.).
- Lead technical validation and prototyping with early adopters.
- Collaborate on grant writing, the product roadmap, and technology strategy.
- Hire and mentor future engineers and researchers as the team grows.

Requirements

- Based in Germany, with a valid Niederlassungserlaubnis (permanent residence permit) or German citizenship.
- A Master's or PhD in Computer Science, Electrical Engineering, or a related field.
- Proven ability to build systems end-to-end: from prototype to deployable demo.
- Interest in energy efficiency, sustainability, and impactful technology.
- Fluent in English; German is a bonus.
- Strong background in at least two of the following domains, and a basic understanding of all three:
  - Machine Learning (software side):
    - Optimizing DNNs, LLMs, or other AI models for embedded/edge devices.
    - Quantization, pruning, knowledge distillation, or other optimization methods.
  - Hardware Design:
    - Designing and synthesizing AI accelerators for FPGAs or custom ASICs.
    - Circuit-level optimization, High-Level Synthesis (HLS), or RTL design experience.
  - Networking / Telecommunication:
    - Understanding of the 5G/6G stack, from radio access to packet processing.
    - Experience with distributed computing frameworks and edge/cloud orchestration.

Benefits

- The position is currently unpaid; an equity share at the time of founding the company is under consideration.