Job Title: AI Security Expert
We are seeking a seasoned professional to lead our efforts in securing Generative AI solutions.
About the Role:
* Develop and implement security policies and controls for Generative AI solutions.
* Assess risks related to AI-generated content, model hallucination, prompt injection, data leakage, and adversarial inputs.
* Oversee and monitor the secure usage of foundation models, APIs, and locally deployed LLMs.
* Collaborate with DevOps and AI engineers to implement security gates in AI model training, deployment pipelines, and runtime environments.
* Evaluate the secure integration of GenAI platforms into enterprise systems and protect access to sensitive or regulated data.
* Perform regular threat modelling and security assessments of AI-based architectures.
* Establish an internal AI usage governance framework.
* Monitor the evolving regulatory landscape and advise on necessary compliance actions.
* Support detection, response, and forensics for GenAI-related security incidents.
About You:
* You hold a university degree in IT, Cybersecurity, or a related field, or have equivalent professional qualifications.
* You have at least 2 years of experience working in cybersecurity, preferably with exposure to AI and ML systems or advanced data analytics environments.
* You possess strong knowledge of AI/ML security concepts, including model threats, data poisoning, and LLM misuse scenarios.
* You are familiar with AI development and deployment workflows and have experience with security tools and frameworks.
* You are proactive in staying updated on emerging AI security threats, open research, and policy developments.
* You are comfortable working closely with developers, architects, data scientists, and legal/compliance teams.