Module 16 Wrap-up: The Security Audit
Hands-on: Design a secure architecture that involves IAM, Secrets Manager, and Guardrails.
Module 16 Wrap-up: The Cyber Guardian
You have reached the peak of AI Security. You know that an agent is not just a "Cool Feature"—it is a potential entry point for attackers. You have learned how to use IAM Isolation, Secrets Management, and Red Teaming to ensure that your "Autonomous Assistant" doesn't become an "Autonomous Hacker."
Hands-on Exercise: The Security Review
1. The Scenario
You have an agent that can "Update Customer Email Addresses" in a SQL database.
2. The Task
Identify 3 security risks in the architecture and suggest a fix for each.
- Risk 1: The SQL password is hardcoded in the Lambda. (Fix: Secrets Manager).
- Risk 2: A user could say "Update email for EVERYONE to attacker@evil.com." (Fix: Constraints in Lambda to only update 1 ID at a time).
- Risk 3: The IAM role allows
DELETE * FROM customers. (Fix: Least Privilege - only allowUPDATE).
Module 16 Summary
- IAM Execution Roles: Limiting the "Hands" of the agent.
- Secrets Manager: Protecting external API keys.
- Prompt Injection: The core threat to LLM-driven logic.
- Structural Separation: Using the API correctly to prevent jailbreaks.
- Red Teaming: Proactive testing to uncover vulnerabilities.
Coming Up Next...
In Module 17, we prepare for Scale. We will learn about Rate Limiting, Retry Strategies, and how to manage multiple Versions of your prompts and agents in a professional production environment.
Module 16 Checklist
- I can describe the "Principle of Least Privilege" for Bedrock.
- I have identified where to store external API keys (Secrets Manager).
- I understand the difference between direct and indirect prompt injection.
- I have practiced "Red Teaming" my own agent.
- I know how to check my agent's execution role in the IAM console.