
Exploiting AI
is an introductory course on the security risks that come with AI and how to mitigate them. After completing this course material you should have a solid grasp of the foundations of AI, how to exploit AI systems, and how to prevent exploitation.
Disclaimer:
Before you continue, your machine must meet the following specifications: 8 GB RAM, a 4-core CPU, and 40 GB of storage. Failure to provision the virtual machine properly will cause failures during installation.
Course Prerequisites
Course Information
Labs and Content
Learning the Basics
📒 01-AIOV - What is AI and LLM
📒 01.2-AILB - Terminology and Attack Surfaces
Attack Surfaces and Remediations
🥼 02.3-AILB - Containment Breach
🧠 02.6-AIOV - Preventing Prompt Injection
📒 03-AIOV - Data Poisoning and Refining
🥼 03.1-AILB - Training a spam classifier
🥼 03.2-AILB - Training a network traffic classification system
🧠 03.3-AIOV - Preventing Data Poisoning
📒 04-AIOV - Model Inversion Attack
🥼 04.1-AILB - Inferring Information Using a Loan Assessment AI
🧠 04.2-AIOV - Preventing Model Inversion Attacks
📒 05-AIOV - Transfer Model Attack Overview
🥼 05.1-AILB - Attacking Two Models with One Prompt
🧠 05.2-AIOV - Preventing Transfer Model Attacks
📒 06-AIOV - RAG AI Attack Overview - UNDER DEV
🥼 06.1-AILB - Attacking RAG - UNDER DEV
🧠 06.2-AIOV - Preventing RAG Attacks - UNDER DEV
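Labs 03.1 and 03.2 above both involve training a classifier that later serves as a data-poisoning target. As a rough sketch of the kind of model lab 03.1 builds (this is not the lab's actual code; the tiny training set and function names below are invented for illustration), a stdlib-only naive Bayes spam classifier might look like:

```python
# Toy multinomial naive Bayes spam classifier over word counts.
# Illustrative sketch only -- the training samples are made up.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(samples):
    """samples: list of (text, label) pairs -> per-label word counts and priors."""
    counts = {"spam": Counter(), "ham": Counter()}
    label_totals = Counter()
    for text, label in samples:
        counts[label].update(tokenize(text))
        label_totals[label] += 1
    return counts, label_totals

def classify(text, counts, label_totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        # log prior + sum of log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_totals[label] / sum(label_totals.values()))
        total_words = sum(counts[label].values())
        for word in tokenize(text):
            score += math.log((counts[label][word] + 1) / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow at noon", "ham"),
]
counts, label_totals = train(training)
print(classify("free prize money", counts, label_totals))       # "spam"
print(classify("notes from the meeting", counts, label_totals)) # "ham"
```

A model this simple also makes the poisoning labs concrete: injecting mislabeled samples into `training` directly skews the per-label word counts the classifier relies on.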
Tooling
🥼 06.6-AILB - Jupyter Notebook
Note: This is the end of the labs, but the material beyond this point is valuable nonetheless. Please take time to look through it.
Playgrounds
Offensive Testing Methodology
🤖 Heretics Methodology - UNDER DEV
Certifications and Training
🤓 Certified AI Penetration Tester—Blue Team (CAIPT-BT)
🤓 Certified AI Penetration Tester—Red Team (CAIPT-RT)
🤓 Certified AI Security Professional – Practical DevSecOps
🤓 Certified AI/ML Pentester (C-AI/MLPen) – The SecOps Group
🤓 Certified Security Professional for Artificial Intelligence (CSPAI) – SISA
Bug Bounty Programs
🤑 The GenAI Bug Bounty Program
🤑 OpenAI
🔧 Resources
- https://www.iso.org/standard/81230.html
- https://www.mitre.org/focus-areas/artificial-intelligence
- https://atlas.mitre.org/
- https://cloud.google.com/learn/what-is-artificial-intelligence
- https://www.ibm.com/think/topics/artificial-intelligence
- Bronwen :)
- https://0din.ai/
- https://genai.owasp.org/resource/genai-redteaming-guide/
- https://www.tonex.com/training-courses/advanced-ai-techniques-retrieval-augmented-generation-rag-essentials/
- https://learnprompting.org/courses/advanced-prompt-hacking
- https://docs.google.com/spreadsheets/d/1h9Y7stPEwza4UNx6uwhI10C3TOVkXQCfDiOW_xbYjKo/edit?gid=619104106&usp=embed_facebook
- https://avidml.org/
- https://caido.io/
- https://niccs.cisa.gov/education-training/catalog/tonex-inc/certified-ai-penetration-tester-blue-team-caipt-bt
- https://niccs.cisa.gov/education-training/catalog/tonex-inc/certified-ai-penetration-tester-red-team-caipt-rt
- https://www.practical-devsecops.com/certified-ai-security-professional/
- https://secops.group/product/certified-ai-ml-pentester/
- https://crucible.dreadnode.io/
- https://www.sisainfosec.com/training/payment-data-security-programs/cspai/
- https://github.com/GreyDGL/PentestGPT
- https://www.udemy.com/course/ethical-hacking-gen-ai-chatbots/
- https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-DS-03+V1
- https://0din.ai/posts/prompt-injecting-your-way-to-shell-openais-containerized-chatgpt-environment
- https://www.academy-attackiq.com/courses/foundations-of-ai-security
- https://gandalf.lakera.ai/
- https://azure.github.io/PyRIT/how_to_guide.html
- https://josephthacker.com/hacking/2025/02/25/how-to-hack-ai-apps.html
- https://www.ibm.com/topics/prompt-injection
- https://learnprompting.org/courses/intro-to-prompt-hacking
- https://github.com/llm-attacks/llm-attacks
- https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/red-teaming
- https://owaspai.org/
- https://www.deeplearning.ai/short-courses/red-teaming-llm-applications/
- https://josephthacker.com/ai/2025/01/04/shift.html
- https://forgepointcap.com/perspectives/tales-from-the-forefront-demystifying-ai-and-llm-pen-testing/
- https://github.com/Trusted-AI/adversarial-robustness-toolbox
- https://portswigger.net/web-security/llm-attacks
- https://incidentdatabase.ai/
- https://maven.com/learn-prompting-company/ai-red-teaming-and-ai-safety-masterclass
Made with ❤️ by NullTrace Security