With our Ethical Hacking and Penetration Testing Artificial Intelligence (AI) and Large Language Models (LLM) Training course you will become professional in AI and LLM Pentesting and Vulnerability Discovery. This course has a both theory and practical lab sections with a focus on finding and exploiting vulnerabilities in AI and LLM systems and applications. The training is aligned with the OWASP Top 10 LLM vulnerability classes.
Duration
2 days
The Instructor
Your instructor is Martin Voelk. He is a Cyber Security veteran with 25 years of experience. Martin holds some of the highest certification incl. CISSP, OSCP, OSWP, Portswigger BSCP, CCIE, PCI ISA and PCIP. He works as a consultant for a big tech company and engages in Bug Bounty programs where he found hundreds of critical and high vulnerabilities.
Prerequisites
Basic IT Skills
Basic understanding of web technology
No Linux, programming or hacking knowledge required
Computer with a minimum of 4GB ram/memory
Operating System: Windows / Apple Mac OS / Linux
Reliable internet connection
Firefox Web Browser
Burp Suite (optional)
Course Content
AI/LLM Introduction
What is AII and LLM?
Learning process
Application and the future
Development cycle
Tokenization
AI/LLM Attacks Overview
Misalignment
Jailbreaks
Prompt Injections
Forgery
Exfiltration
AI/LLM Frameworks / writeups
OSWASP Top 10 LLM
LLM Attacks
PIPE
MITRE
AI LLM01: Prompt Injection
Direct
Indirect
Jailbreaks and Bypasses
Data Exfiltration
Prompt Leaking
Playgrounds
2 x Labs
AI LLM02: Insecure Output Handling
XSS
CSRF
SSRF
1 x Lab
AI LLM03: Training Data Poisoning
LLM internal datasets
Public datasets
Targeted vs. general poisoning
AI LLM04: Model Denial of Service
High CPU/Memory tasks
High Network utilization tasks
Lack of Rate Limiting
Traditional DoS
AI LLM05: Supply Chain Vulnerabilities
Out of date software, libraries
3rd party dependencies
Poisoned packages
AI LLM06: Sensitive Information Disclosure
PII, PHI, financial data
System data
Access Control and Encryption
Sensitive info in training data
AI LLM07: Insecure Plugin Design
Plugin vulnerabilities
Access Control and Authorization
Plugins chaining
Having plugins perform actions
Plugin based injection
AI LLM08: Excessive Agency
Excessive privileges
Lack of oversight
Interaction with backend APIs
Incorrect permissions
2 x Labs
AI LLM09: Overreliance
Too much dependency on AI output
Lack of backup systems
False information output
Misclassification
Hallucinations
AI LLM10: Model Theft
Reverse engineering
Lack of Authentication / Authorization
Code access
Model replication
Model extraction
Next Date
Thursday October 10, 2024 – Friday October 11, 2024
10:00am – 06:00pm Central Time