At the DLR Institute for AI Safety and Security, we research and develop AI-related methods, processes, algorithms, technologies, and execution and system environments. Our focus is on safe and standards-compliant AI, cybersecurity in open data and service ecosystems and AI, and automation in mobility and logistics.

## What to expect

The AI Engineering department defines and establishes systematic, goal-oriented development processes, methods, and tools for safe and secure AI-based applications. The focus is on developing and applying safety- and security-by-design methods for AI-based components and on investigating safe human-AI interaction, taking into account the degree of automation and cooperation.

## Your tasks

- Development of resilience criteria and associated methods for their evaluation
- Development of new and innovative methods for attack prevention, detection, mitigation, and recovery
- Development of a simulation and test environment for cyber-resilient robotic systems
- Preparation of scientific contributions for publications and presentations

## Your profile

- Completed scientific university degree (Master's/Diploma, university level) in computer science (e.g. artificial intelligence), engineering (e.g. electrical or mechanical engineering), mathematics, physics, or another degree relevant to the position
- Programming skills in Python
- Knowledge of robotic systems
- Knowledge of cyber/AI security
- Knowledge and practical experience in building and implementing AI components/systems
- Experience with simulation tools
- Confident written and spoken English

We look forward to getting to know you! If you have any questions about this position (Vacancy ID 4531), please contact:

Dr. Sven Hallerbach
Tel.: +49 731 400198 315