Module 2 Wrap-up: Getting Ready for Code
Hands-on: Verify your AWS configuration and enable your first foundation models.
Module 2 Wrap-up: Verified and Ready
You have configured the "Pipes" of your AI infrastructure. You understand that access is a two-step dance:
- Console Access (Agreement with the Provider).
- IAM Policy (Technical permission to call the API; a minimal policy sketch follows this list).
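Here is a minimal sketch of what that second step can look like from the AWS CLI. The file name, policy name, and user name are placeholders, and the model ARN assumes the Claude 3.5 Sonnet model you enable in the exercise below; adapt them to your own setup.

# Sketch only: scope bedrock:InvokeModel to a single model instead of "*".
# "bedrock-invoke-policy.json", "BedrockInvokeClaudeOnly", and "my-bedrock-user" are placeholder names.
cat > bedrock-invoke-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
    }
  ]
}
EOF

# Create it as a customer managed ("custom") policy and attach it to a user.
POLICY_ARN=$(aws iam create-policy \
  --policy-name BedrockInvokeClaudeOnly \
  --policy-document file://bedrock-invoke-policy.json \
  --query 'Policy.Arn' --output text)

aws iam attach-user-policy --user-name my-bedrock-user --policy-arn "$POLICY_ARN"

Scoping the Resource to one model ARN is exactly the least-privilege habit the summary below calls out.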
Hands-on Exercise: The Access Check
1. Enable Claude
Go to the Bedrock console (us-east-1) and request access to Claude 3.5 Sonnet. Verify that the status reads "Access granted."
2. AWS CLI Verification
Open your terminal. If you have the AWS CLI configured, run:
aws bedrock list-foundation-models --region us-east-1 --query 'modelSummaries[?modelId==`anthropic.claude-3-5-sonnet-20240620-v1:0`].modelId'
If you see the model ID in the output, your configuration is correct.
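Listing models only proves that your credentials and Region are wired up. If you want to confirm model access and bedrock:InvokeModel in one shot, a tiny test prompt works too. This is an optional sketch that assumes a recent AWS CLI v2 with the bedrock-runtime converse command; Module 3 covers this API properly from Python.

# Sends one short message; succeeds only if model access AND invoke permission are in place.
aws bedrock-runtime converse \
  --region us-east-1 \
  --model-id anthropic.claude-3-5-sonnet-20240620-v1:0 \
  --messages '[{"role": "user", "content": [{"text": "Reply with the single word: ready"}]}]'

A JSON response containing the model's reply means the whole chain (console access, IAM permission, CLI credentials) works; an AccessDeniedException usually points at either the IAM policy or a model you have not enabled yet.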
Module 2 Summary
- IAM Policies: The primary way to restrict which applications and roles can invoke which models.
- Model Access: A manual console step required for legal compliance.
- Regions: Choose based on model availability (us-east-1 typically has the widest selection) or latency (a Region closer to your users); a quick CLI comparison follows this list.
- Least Privilege: Avoid wildcards (*) in your resource policies.
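To make the Regions point concrete, here is a small sketch that compares the size of the foundation-model catalog in two candidate Regions. The second Region (eu-central-1) is only an example; substitute any Region you are evaluating.

# Count how many foundation models each Region currently lists.
for region in us-east-1 eu-central-1; do
  count=$(aws bedrock list-foundation-models --region "$region" \
    --query 'length(modelSummaries)' --output text)
  echo "$region: $count foundation models"
done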
Coming Up Next...
In Module 3, we write our first lines of Python. We will learn how to use boto3 to call the Bedrock Runtime API and look at the difference between the legacy InvokeModel and the modern Converse API.
Module 2 Checklist
- I have requested model access in the AWS Console.
- I can explain why bedrock:InvokeModel is needed.
- I have chosen a primary AWS Region (e.g., us-east-1).
- I have the AWS CLI installed and configured (a quick way to confirm this follows the checklist).
- I understand the difference between a Managed Policy and a Custom Policy.
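Not sure about the CLI item? These two commands are a harmless sanity check. They assume your default credentials profile; add --profile <name> if you use named profiles.

# Show the installed CLI version, then the account and identity your credentials resolve to.
aws --version
aws sts get-caller-identity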