Module 1 Wrap-up: Foundations and Model Selection
Hands-on: Analyzing model suitability and understanding the serverless AI paradigm.
27 articles
Hands-on: Design the mission statement and tool-set for your first Bedrock Agent.
Hands-on: Write a Lambda function and a schema for an agent that can track package deliveries.
Hands-on: Trace the execution of a multi-step agent and identify reasoning bottlenecks.
Hands-on: Design the architecture for a multi-agent system that handles a complex business workflow.
Hands-on: Design a human-in-the-loop workflow for a high-value financial transaction agent.
Hands-on: Create a CloudWatch dashboard that tracks your agent's success rate and token spend.
Hands-on: Design a secure architecture that combines IAM, Secrets Manager, and Bedrock Guardrails.
Hands-on: Design a deployment strategy for a mission-critical AI agent.
Hands-on: Identify a business workflow that requires the control and state management of AgentCore.
Hands-on: Design a multi-node AgentCore graph that includes AI reasoning and data validation.
Hands-on: Verify your AWS configuration and enable your first foundation models.
Hands-on: Design an AgentCore workflow that uses a critic node to verify RAG output.
Hands-on: Write a script that compares the outputs of two different models using the same prompt.
Hands-on: Apply robust prompt-engineering techniques to reduce hallucinations and maximize cost efficiency.
Hands-on: Build a streaming CLI chat application that handles tokens as they arrive.
Hands-on: Build a functional REST API with FastAPI that exposes multiple Bedrock models.
Hands-on: Design your first Knowledge Base and select your chunking strategy.
Hands-on: Build a Python script that answers questions about your S3 documents and prints the citations.
Hands-on: Implement a Bedrock Guardrail and verify your grounding instructions.
Reviewing the AI landscape and testing your ability to distinguish between different AI types.
Reviewing the mechanics of LLMs and conducting a comparative model experiment.
Reviewing professional prompting techniques and completing a structured output project.
Reviewing non-text GenAI and practicing image generation and code refactoring.
Reviewing the AI tech stack and building a simple RAG pipeline and local AI setup.
Finalizing the course with a discussion on ethics and an outlook on the AI co-pilot era.
Reviewing model abstractions, streaming, and batching. Your final check before moving to Prompts.
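Several of the hands-on exercises above (notably the model-comparison script and the Converse-based chat work) follow the same basic pattern. As a hedged sketch of that pattern, the snippet below sends one prompt to two Bedrock models via boto3's `converse` API and collects the replies; the model IDs, region, and inference settings are illustrative assumptions, not course-prescribed values.

```python
"""Sketch: compare two Bedrock models on the same prompt.

Assumptions (not from the course text): model IDs, region, and
inference parameters below are placeholders; substitute models
you have enabled in your AWS account.
"""

# Hypothetical model IDs for illustration only.
MODEL_IDS = [
    "anthropic.claude-3-haiku-20240307-v1:0",
    "amazon.titan-text-express-v1",
]


def build_messages(prompt: str) -> list[dict]:
    """Build a Converse-API message list for a single user turn."""
    return [{"role": "user", "content": [{"text": prompt}]}]


def compare(prompt: str, model_ids=MODEL_IDS, region: str = "us-east-1") -> dict[str, str]:
    """Send the same prompt to each model and return {model_id: reply_text}."""
    import boto3  # imported here so the pure helpers work without boto3 installed

    client = boto3.client("bedrock-runtime", region_name=region)
    replies = {}
    for model_id in model_ids:
        resp = client.converse(
            modelId=model_id,
            messages=build_messages(prompt),
            inferenceConfig={"maxTokens": 256, "temperature": 0.2},
        )
        # The Converse API returns the assistant turn under output.message.content.
        replies[model_id] = resp["output"]["message"]["content"][0]["text"]
    return replies


if __name__ == "__main__":
    for model_id, text in compare("Summarize the CAP theorem in one sentence.").items():
        print(f"--- {model_id} ---\n{text}\n")
```

Using the uniform `converse` API (rather than each model's native `invoke_model` payload) is what makes a side-by-side comparison this short: the request and response shapes stay identical across model providers.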