
Module 3 Lesson 2: STRIDE Adapted for AI
The industry standard for threat modeling, updated for the era of intelligence. Learn how to map Spoofing, Tampering, and Elevation of Privilege to AI systems.
Module 3 Lesson 2: STRIDE Adapted for AI
Microsoft's STRIDE is the most widely used threat modeling framework. To use it for AI, we must "Translate" its categories into AI-specific behaviors.
graph LR
subgraph "STRIDE for AI"
S[Spoofing] --> S1[Identity Injection / RAG Spoofing]
T[Tampering] --> T1[Data Poisoning / Adversarial Noise]
R[Repudiation] --> R1[Agent Lack of Accountability]
I[Info Disclosure] --> I1[Training Data Leakage / PII Extraction]
D[DoS] --> D1[Sponge Examples / Inference Exhaustion]
E[Elev. Privilege] --> E1[Tool Hijacking / Jailbreaking]
end
1. Spoofing (Identity)
- Traditional: Logging in as someone else.
- AI: Data Spoofing/Identity Injection. An attacker injects a document into a RAG system that claims to be from a trusted source (e.g., "Official CEO Policy"). The AI reads it, trusts the "identity" of the text, and starts giving malicious advice to employees.
2. Tampering (Integrity)
- Traditional: Changing a database row.
- AI: Data Poisoning & Adversarial Manipulation. Altering the training data or the "Weights" of the model.
- Example: Subtly changing a few pixels in 10,000 images so the AI "stops seeing" stop signs.
3. Repudiation (Non-deniability)
- Traditional: Denying you sent an email.
- AI: Agent Deniability. If an AI Agent deletes a customer's record, who is responsible? If the logs just say "Agent did it," the user can claim "the AI went rogue," and the company can't prove user intent.
4. Information Disclosure (Privacy)
- Traditional: Leaking a password.
- AI: Training Data Leakage. An attacker asks: "What was the credit card number of user #456 in your training set?". If the model was over-fitted, it might actually answer.
5. Denial of Service (Availability)
- Traditional: Syn flood.
- AI: Model Denial of Service.
- Sending "Sponge Examples": Inputs that look simple but force the GPU to maximize its reasoning loops, causing the system to time out and bill you $1,000 for a single query.
6. Elevation of Privilege (Authorization)
- Traditional: Becoming Root.
- AI: Tool/Agent Injection. Tricking a low-privilege agent into calling a "High-Privilege" tool (like a Billing API) via a prompt injection.
Exercise: The STRIDE Switch
- Think of an AI HR Assistant.
- Give one example of a "Tampering" attack on this specific system.
- Give one example of an "Information Disclosure" attack.
- Why is "Repudiation" a legal nightmare for companies using autonomous agents?
- Research: What is "SSRF" (Server Side Request Forgery) and which STRIDE category does it usually fall into in an AI context?
Summary
STRIDE is still the best tool we have, but we must look past the "Identity" of the user and start looking at the Identity of the Information itself.
Next Lesson: Beyond the basics: AI-specific threat categories.