Generative AI Red Team Engineer

Job title:

Generative AI Red Team Engineer

Company:

Sigma.AI

Job description

We are seeking a highly skilled and innovative Red Team Engineer with expertise in finding generative AI vulnerabilities to join our adversarial testing team. The ideal candidate will have a strong background in red teaming, adversarial attacks, and generative AI, particularly in testing the robustness and security of large-scale generative models. This role focuses on identifying vulnerabilities, ethical risks, and adversarial weaknesses in AI systems used for natural language generation and other AI-driven applications. Deliverables for this role include building a prompt dataset, researching and reporting on the evaluation of several generative AI foundation models, and building a training program around red teaming.

You will collaborate with AI researchers, product managers, and other engineers to proactively test and improve the resilience of our generative AI systems against real-world threats, including prompt injection attacks, data poisoning, and bias exploitation. You will also play a key role in driving red teaming best practices, ethical alignment, and safeguarding the integrity of generative AI models.

Key Responsibilities:

  • Adversarial planning and testing: Design, plan, and execute red teaming assessments focused on generative AI models to simulate adversarial attacks, prompt injections, and other potential misuse scenarios.
  • Threat Emulation: Conduct threat emulation and create real-world attack scenarios for generative AI models, focusing on vulnerabilities such as data poisoning, model drift, and ethical boundary violations.
  • Collaborate with AI Teams: Work closely with machine learning engineers, data scientists, product managers, and AI researchers to evaluate model performance under adversarial conditions and provide actionable recommendations for strengthening AI defenses.
  • Ethical Testing & Bias Audits: Evaluate AI models for potential ethical concerns, including bias detection and unintended harmful behavior, and work to align AI systems with ethical guidelines.
  • Documentation & Reporting: Produce detailed reports outlining identified vulnerabilities, exploit scenarios, and recommendations for improvements, including post-mortems of red teaming exercises.
  • Creation of a training program: In collaboration with project managers and machine learning engineers, develop a training program to train and upskill a team capable of carrying out red teaming assessments.
  • Stay Current: Stay up-to-date on cutting-edge AI security research, adversarial machine learning techniques, and ethical AI frameworks to ensure robust red teaming practices.

Qualifications:

  • Education:
  • Advanced degree (e.g., Master’s or PhD) in Computer Science, Machine Learning, Cybersecurity, or a related field. Equivalent work experience will also be considered.
  • Experience:
  • 2+ years of experience in red teaming with at least one year spent on the evaluation of generative AI models (e.g., natural language processing, image generation) and the security challenges they present.
  • Proven track record of conducting adversarial attacks and identifying vulnerabilities in AI models.
  • Technical Skills:
  • Strong programming skills in languages such as Python and familiarity with machine learning libraries and adversarial prompt datasets.
  • Experience with adversarial machine learning techniques, including prompt injections, model poisoning, and data exfiltration.
  • Experience with AI ethics and bias testing in model outputs.
  • Other Skills:
  • Excellent problem-solving skills with the ability to think like an adversary and design creative attack strategies.
  • Effective communication skills to explain complex AI vulnerabilities to stakeholders and provide clear, actionable recommendations.

Preferred Qualifications:

  • Knowledge of AI Regulatory Standards: Familiarity with emerging AI governance and security standards, including ethical AI frameworks and governance best practices.

Help shape the future of ethical AI.

Expected salary

Location

Comunidad de Madrid

Job date

Sat, 23 Nov 2024 23:12:32 GMT

To help us track our recruitment effort, please indicate in your email/cover letter where (vacanciesineu.com) you saw this job posting.
