Module 7 Wrap-up: Designing the Library

Hands-on: Design your first Knowledge Base and select your chunking strategy.

Module 7 Wrap-up: The Knowledge Architect

You have learned that for AI to be useful in a company, it must be grounded in that company's own data. You understand the RAG architecture and how Bedrock Knowledge Bases turn a static S3 bucket into a dynamic, searchable library of "Meaning."


Hands-on Exercise: The Ingestion Plan

1. The Scenario

You are building an AI assistant for an HR department. They have 100 PDFs containing employee handbooks and health insurance details.

2. The Task

  1. Which Vector Store would you choose? (Recommendation: OpenSearch Serverless for ease of use).
  2. What Chunking Strategy? (Recommendation: 400 tokens with 80-token overlap).
  3. What Embedding Model? (Recommendation: Titan Embeddings G1 - Text).
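The three recommendations above can be captured as the configuration you would eventually pass to Bedrock when wiring up the data source. As a sketch, here is what the chunking and embedding choices look like in the dictionary shape that boto3's `bedrock-agent` client expects for `create_data_source` (note that 80 tokens of overlap on 400-token chunks is expressed as a 20% overlap percentage). This only builds the config; actually calling the API requires an existing Knowledge Base and IAM role, so that part is left out.

```python
# Chunking strategy from the recommendations above, in the shape used by
# boto3 bedrock-agent create_data_source (vectorIngestionConfiguration).
vector_ingestion_config = {
    "chunkingConfiguration": {
        "chunkingStrategy": "FIXED_SIZE",
        "fixedSizeChunkingConfiguration": {
            "maxTokens": 400,         # ~400-token chunks
            "overlapPercentage": 20,  # 80 of 400 tokens = 20% overlap
        },
    }
}

# Embedding model choice (region in the ARN is an assumption for the example).
embedding_model_arn = (
    "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v1"
)
```

When you create the Knowledge Base in the console, these same three choices appear as form fields; the API shape above is simply the programmatic equivalent.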

3. Setup Step

Go to the AWS Console, search for S3, and create a bucket named my-company-hr-docs-[your-unique-id]. Upload a sample PDF. This bucket is the foundation of your Knowledge Base.
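If you prefer to script this step, here is a minimal sketch. The `valid_bucket_name` helper is our own (it checks the basic S3 naming rules: 3-63 characters, lowercase letters, digits, and hyphens, starting and ending alphanumeric), and a random suffix stands in for "[your-unique-id]". The boto3 calls that would actually create the bucket are left commented out because they require AWS credentials.

```python
import re
import uuid

def valid_bucket_name(name: str) -> bool:
    """Check the core S3 bucket naming rules (length and characters)."""
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name) is not None

# A random suffix stands in for "[your-unique-id]" from the console step.
bucket = f"my-company-hr-docs-{uuid.uuid4().hex[:8]}"
assert valid_bucket_name(bucket)

# Uncomment to actually create the bucket and upload a sample PDF
# (requires AWS credentials; us-east-1 needs no LocationConstraint):
# import boto3
# s3 = boto3.client("s3", region_name="us-east-1")
# s3.create_bucket(Bucket=bucket)
# s3.upload_file("sample.pdf", bucket, "handbooks/sample.pdf")
```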


Module 7 Summary

  • RAG: The standard architecture for private AI.
  • Knowledge Bases: The managed service that handles RAG on AWS.
  • Chunking: Breaking docs into meaningful pieces.
  • Embeddings: Turning pieces into searchable vectors.
  • Vector DBs: Storing and retrieving context based on "Meaning similarity."
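"Meaning similarity" in the last bullet is usually measured as cosine similarity between embedding vectors. Here is a toy, self-contained illustration: three hand-made 3-dimensional "embeddings" (real Titan vectors have over a thousand dimensions) showing that two phrasings of the same HR concept score closer together than an unrelated one.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors (invented for illustration, not real model output).
vacation_policy = [0.90, 0.10, 0.20]
paid_time_off   = [0.85, 0.15, 0.25]
dental_coverage = [0.10, 0.90, 0.30]

# "vacation policy" should land closer to "paid time off" than to
# "dental coverage" -- that is the retrieval step of RAG in miniature.
assert cosine_similarity(vacation_policy, paid_time_off) > \
       cosine_similarity(vacation_policy, dental_coverage)
```

The vector database's job is exactly this comparison, performed efficiently over millions of stored chunks instead of three.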

Coming Up Next...

In Module 8, we connect our code to the Knowledge Base. We will learn how to use the Retrieve and RetrieveAndGenerate APIs to turn user questions into grounded, factual answers with citations.


Module 7 Checklist

  • I can explain what RAG stands for.
  • I have created an S3 bucket for my AI data.
  • I understand the purpose of overlapping chunks.
  • I know that embeddings are mathematical vectors.
  • I have identified the cost components of a KB (S3 + Vector DB + Embeddings).
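For the last checklist item, a back-of-the-envelope model helps make the three cost components concrete. All prices below are placeholders invented for the sketch, not real AWS rates; check the current Bedrock, S3, and OpenSearch Serverless pricing pages before budgeting anything. The pattern to notice is that for a small corpus the vector store's always-on baseline tends to dominate.

```python
def estimate_monthly_cost(total_tokens, storage_gb,
                          embed_price_per_1k=0.0001,  # hypothetical $/1k tokens
                          s3_price_per_gb=0.023,      # hypothetical $/GB-month
                          vector_db_floor=700.0):     # hypothetical always-on floor
    """Rough KB cost split: one-time embedding ingest + monthly storage."""
    return {
        "embeddings": (total_tokens / 1_000) * embed_price_per_1k,
        "s3": storage_gb * s3_price_per_gb,
        "vector_db": vector_db_floor,
    }

# 100 HR PDFs might hold ~5M tokens in ~2 GB (rough guesses).
costs = estimate_monthly_cost(total_tokens=5_000_000, storage_gb=2)
```

With these placeholder numbers, embedding and S3 come to well under a dollar while the vector store floor is hundreds of dollars, which is why the vector store choice in the exercise matters most for cost.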
