Course Outline
Introduction to AI Red Teaming
- Understanding the AI threat landscape
- Roles of red teams in AI security
- Ethical and legal considerations
Adversarial Machine Learning
- Types of attacks: evasion, poisoning, extraction, inference
- Generating adversarial examples (e.g., FGSM, PGD)
- Targeted vs untargeted attacks and success metrics
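The FGSM attack named above can be sketched in a few lines of PyTorch. This is a minimal illustration, not course material: the tiny linear model, the random input, and the epsilon value are all placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 3))  # stand-in for a real classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.1):
    """Fast Gradient Sign Method: perturb x by epsilon in the direction
    of the sign of the loss gradient, increasing the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # Untargeted attack: step *up* the loss surface.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(1, 4)
y = torch.tensor([2])
x_adv = fgsm(x, y)
print((x_adv - x).abs().max())  # each feature moves by at most epsilon
```

A targeted variant would instead step *down* the loss toward a chosen label; PGD iterates this step with a projection back into an epsilon-ball.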
Testing Model Robustness
- Evaluating robustness under perturbations
- Exploring model blind spots and failure modes
- Stress testing classification, vision, and NLP models
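A basic perturbation sweep of the kind described above can be sketched as follows; the model, noise scales, and the use of clean predictions as the reference are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 3))  # stand-in for a real classifier
x = torch.randn(32, 4)
clean_pred = model(x).argmax(dim=1)  # clean predictions as the reference

# Sweep Gaussian noise scales and record how often predictions survive.
agreement = {}
with torch.no_grad():
    for sigma in (0.0, 0.1, 0.5, 1.0):
        x_noisy = x + sigma * torch.randn_like(x)
        match = (model(x_noisy).argmax(dim=1) == clean_pred).float().mean()
        agreement[sigma] = match.item()
        print(f"sigma={sigma:.1f}  agreement={agreement[sigma]:.2f}")
```

Plotting agreement against sigma gives a quick robustness curve; sharp drops at small sigma flag brittle decision boundaries worth deeper adversarial probing.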
Red Teaming AI Pipelines
- Attack surface of AI pipelines: data, model, deployment
- Exploiting insecure model APIs and endpoints
- Reverse engineering model behavior and outputs
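One way to reverse engineer a model behind an API is query-based extraction: fit a local surrogate to the victim's outputs. The sketch below assumes hard-label-only access; the victim here is just a local stand-in for a remote endpoint.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
victim = nn.Sequential(nn.Linear(4, 3))     # black-box "API" we can only query
surrogate = nn.Sequential(nn.Linear(4, 3))  # attacker's local copy
opt = torch.optim.Adam(surrogate.parameters(), lr=0.05)

# Model extraction: query the victim with random inputs and train the
# surrogate to reproduce its predicted labels.
for _ in range(200):
    x = torch.randn(64, 4)
    with torch.no_grad():
        labels = victim(x).argmax(dim=1)  # hard labels, as an API would return
    loss = nn.functional.cross_entropy(surrogate(x), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test = torch.randn(256, 4)
fidelity = (surrogate(x_test).argmax(1) == victim(x_test).argmax(1)).float().mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

A high-fidelity surrogate then enables offline attacks (e.g. crafting adversarial examples that transfer back to the original API).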
Simulation and Tooling
- Using the IBM Adversarial Robustness Toolbox (ART)
- Red teaming NLP models with TextAttack
- Sandboxing, monitoring, and observability tools
AI Red Team Strategy and Defense Collaboration
- Developing red team exercises and goals
- Communicating findings to blue teams
- Integrating red teaming into AI risk management
Summary and Next Steps
Requirements
- An understanding of machine learning and deep learning architectures
- Experience with Python and ML frameworks (e.g., TensorFlow, PyTorch)
- Familiarity with cybersecurity concepts or offensive security techniques
Audience
- Security researchers
- Offensive security teams
- AI assurance and red team professionals
Testimonials (2)
I really enjoyed learning about AI attacks and the tools out there to begin practicing with and actively using for security testing. I took away a lot of knowledge I didn't have at the beginning, and the course met my expectations. My favorite part of the training was Comet Browser; I was amazed at what it could do, and it's definitely something I will be looking into more. Overall it was a great course, and I enjoyed learning the full OWASP GenAI Top 10.
Patrick Collins - Optum
Course - OWASP GenAI Security
The professional knowledge and the way he presented it to us