A challenging position has arisen in the field of Artificial Intelligence (AI) security. This role requires a deep understanding of AI security principles and best practices.
As an AI Security Expert, you will play a crucial role in ensuring the secure development and deployment of AI solutions. This involves developing, implementing, and maintaining security policies and controls for Generative AI solutions.
Responsibilities include assessing risks related to AI-generated content, model hallucination, prompt injection, data leakage, and adversarial inputs, and defining appropriate mitigation strategies.
Key Responsibilities:
* Developing and implementing security policies and controls for AI solutions.
* Assessing risks related to AI-generated content and defining mitigation strategies.
* Collaborating with DevOps and AI engineers to implement security gates in AI model training, deployment pipelines, and runtime environments.
* Ensuring secure integration of GenAI platforms into enterprise systems.
* Conducting regular threat modelling and security assessments of AI-based architectures.
* Contributing to establishing an internal AI usage governance framework.
* Monitoring evolving regulatory landscapes and advising on necessary compliance actions.
Requirements:
* University degree in IT, Cybersecurity, or a related field.
* At least 2 years of experience working in cybersecurity, preferably with exposure to AI and ML systems or advanced data analytics environments.
* Strong knowledge of AI/ML security concepts, including model threats, data poisoning, and LLM misuse scenarios.
* Experience with security tools and frameworks for cloud-native and GenAI environments.
* Ability to stay updated on emerging AI security threats, open research, and policy developments.
The ideal candidate is proactive and comfortable working closely with developers, architects, data scientists, and legal/compliance teams. Excellent written and verbal communication skills in English are essential.