
Module 3 Lesson 5: AI Risk Prioritization
Not all threats are equal. Learn how to use the 'Likelihood vs. Impact' matrix to prioritize AI security risks and manage your resource allocation effectively.
In a production environment, you cannot fix everything at once. You must decide which AI risks are "Critical" and which are "Informational."
```mermaid
quadrantChart
    title AI Risk Prioritization Matrix
    x-axis Low Likelihood --> High Likelihood
    y-axis Low Impact --> High Impact
    quadrant-1 "CRITICAL: Hardened Guardrails"
    quadrant-2 "Strategic: Design Changes"
    quadrant-3 "Low Priority: Awareness"
    quadrant-4 "High Priority: Monitoring"
    "Prompt Injection": [0.9, 0.7]
    "Data Poisoning": [0.2, 0.9]
    "Model Extraction": [0.5, 0.5]
    "Rate Limiting Bypass": [0.8, 0.4]
    "PII Leakage": [0.4, 0.8]
```
1. The Likelihood x Impact Matrix
We use the standard risk matrix, with one twist: because LLM behavior is probabilistic, "Likelihood" must capture how often an attack succeeds across many attempts, not just whether it is possible.
| Threat Type | Likelihood | Impact | Priority |
|---|---|---|---|
| Prompt Injection | Very High | Medium | CRITICAL |
| Data Poisoning | Low | Very High | High |
| Model Extraction | Medium | Medium | Medium |
| Hallucination | Very High | Low/High | Variable |
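The table above can be sketched as a small lookup. This is a minimal illustration, not a standard formula: the numeric levels, the multiplication rule, and the "Very High impact floors the priority at High" assumption are all choices made here so the output matches the table.

```python
# Illustrative likelihood x impact scoring. Levels and thresholds are
# assumptions chosen to reproduce the table above, not an official standard.
LEVELS = {"Low": 1, "Medium": 2, "High": 3, "Very High": 4}

def priority(likelihood: str, impact: str) -> str:
    """Map qualitative likelihood/impact ratings to a priority band."""
    l, i = LEVELS[likelihood], LEVELS[impact]
    score = l * i
    if score >= 8:
        return "CRITICAL"
    if i == 4 or score >= 6:   # assumption: Very High impact is never below High
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(priority("Very High", "Medium"))  # Prompt Injection -> CRITICAL
print(priority("Low", "Very High"))     # Data Poisoning   -> High
```

Note how the thresholds deliberately over-weight likelihood: a Very High likelihood threat reaches CRITICAL even at Medium impact, which is exactly the Prompt Injection case discussed in the exercises.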
2. Defining "Impact" in AI
In traditional security, impact is "Server Down" or "Data Stolen." In AI, we add:
- Legal Impact: The AI agrees to a binding contract that costs the company $1,000,000.
- Reputational Impact: The AI says something toxic that goes viral on social media.
- Safety Impact: The AI gives a user dangerous advice that leads to physical harm.
3. The "Unstoppable" Risk
Some risks (like Prompt Injection) currently have no complete solution.
- Strategy: Instead of trying to "Prevent" the risk, you prioritize Mitigation and Monitoring.
- Example: You accept that prompt injection will happen, so you prioritize "Rate Limiting" and "Human-in-the-Loop" to reduce the damage.
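The two mitigations above can be sketched in a few lines. This is a hedged, self-contained example: the function names (`check_rate_limit`, `needs_human_review`), the 20-requests-per-minute budget, and the sensitive action names are all illustrative assumptions, not a real library's API.

```python
# Damage-limiting controls for an "unstoppable" risk: assume injection will
# happen, then cap how fast and how far it can act.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 20                                 # assumed per-user budget
SENSITIVE_ACTIONS = {"refund", "delete_record"}   # assumed high-impact tools

_history = defaultdict(deque)

def check_rate_limit(user_id, now=None):
    """Sliding-window rate limit: bounds damage even when injection succeeds."""
    now = time.time() if now is None else now
    window = _history[user_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                          # drop requests outside window
    if len(window) >= MAX_REQUESTS:
        return False                              # over budget: refuse
    window.append(now)
    return True

def needs_human_review(action):
    """Human-in-the-loop: high-impact tool calls require explicit approval."""
    return action in SENSITIVE_ACTIONS
```

An injected prompt can still trigger tool calls, but the rate limit caps the blast radius per minute, and the approval gate ensures the worst actions never execute autonomously.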
4. Prioritizing by "Agent Power"
A rule of thumb for prioritization: The more "Tools" an AI has, the higher the security priority.
- Low Priority: A simple bot that summarizes public news articles.
- High Priority: An AI agent that has access to your production database and your customers' credit card info.
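The "Agent Power" rule of thumb can be made concrete by weighting each tool an agent can call. The tool names, sensitivity weights, and thresholds below are illustrative assumptions for this sketch.

```python
# Illustrative "Agent Power" scoring: more tools, and more sensitive tools,
# mean higher review priority. Weights are assumptions, not a standard.
TOOL_SENSITIVITY = {
    "web_search": 1,   # read-only, public data
    "read_db": 3,      # internal data exposure
    "write_db": 5,     # can corrupt production state
    "payments": 5,     # direct financial impact
}

def agent_priority(tools):
    """Score an agent by the sum of its tools' sensitivity weights."""
    score = sum(TOOL_SENSITIVITY.get(t, 2) for t in tools)  # unknowns default to 2
    if score >= 8:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(agent_priority(["web_search"]))           # news summarizer bot
print(agent_priority(["read_db", "payments"]))  # production agent
```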
5. Scoring with "DREAD"
The DREAD model helps provide a numerical score (1-10) for each threat:
- Damage: How bad would a successful attack be?
- Reproducibility: How easy is it to repeat?
- Exploitability: How easy is it to launch the attack?
- Affected users: Who is impacted?
- Discoverability: How easy is it to find?
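A common way to combine the five DREAD ratings is a simple average, which is what the sketch below does. The example ratings for prompt injection are illustrative assumptions, not canonical scores.

```python
# Minimal DREAD calculator: average of five 1-10 ratings.
from statistics import mean

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Return the mean of the five DREAD ratings (each 1-10)."""
    ratings = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    assert all(1 <= r <= 10 for r in ratings), "ratings must be 1-10"
    return mean(ratings)

# Prompt injection: easy to find and repeat, moderate damage (assumed ratings)
print(dread_score(damage=5, reproducibility=9, exploitability=9,
                  affected_users=6, discoverability=10))  # -> 7.8
```

The averaging makes the trade-off explicit: prompt injection scores highly mostly on Reproducibility and Discoverability, which is why it lands in the CRITICAL band despite its moderate Damage rating.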
Exercise: The Priority Planner
- You have a budget to fix one security issue this month. Which do you pick for an AI Medical Assistant:
- A) Occasional hallucinations about vitamin dosages.
- B) A prompt injection that reveals the model's internal codename.
- C) A potential data leak where the model might "remember" a previous patient's name.
- Justify your answer.
- Why is "Prompt Injection" prioritized as "Critical" even though its "Impact" is often just "Funny text"?
- Look up the CVSS (Common Vulnerability Scoring System). Why is it difficult to apply CVSS scores to AI vulnerabilities?
- Research: What is "Residual Risk" and why is it always high in LLM applications?
Summary
You have completed Module 3: Threat Modeling for AI Systems. You now have the framework to look at any AI architecture, identify the "STRIDE" threats, and prioritize your defense engineering where it matters most.
Next Module: Module 4: Data Security and Data Poisoning, the core risk.