
Capstone Project: Designing 'TrustBank AI'
Apply everything you've learned. You will design a secure, compliant, RAG-powered GenAI banking assistant. We provide the architecture diagram and the defense strategy.
The Challenge
You are the newly appointed Chief AI Officer of "TrustBank," a mid-sized financial institution.
The Mission: Launch a customer-facing chatbot ("TrustBot") that can:
- Answer questions about checking accounts (Public Data).
- Tell a user their current balance (Private Data).
- Recommend investment products based on their history (Advisory).
The Constraints:
- Security: Banking data must never leak.
- Compliance: The bot must never give incorrect financial advice (no hallucinations).
- Cost: Keep token costs low.
Phase 1: Architecture Design
We need a Hybrid RAG + Agent architecture.
```mermaid
graph TD
    User[Customer] -->|Login| App[TrustBank App]
    App -->|Prompt| Orchestrator[Vertex AI Agent]
    subgraph "Public Knowledge (RAG)"
        Orchestrator -->|Question about Fees?| Search[Vertex AI Search]
        Search -->|Retrieve PDF| Docs[Product Manuals]
    end
    subgraph "Private Data (Tools)"
        Orchestrator -->|What is my balance?| Tool[SQL Tool]
        Tool -->|Query| DB[(Secure Core Banking DB)]
    end
    Orchestrator -->|Draft Answer| Safety[Safety Filter / Guardrail]
    Safety -->|Approved| User
    style DB fill:#EA4335,stroke:#fff,stroke-width:2px,color:#fff
    style Docs fill:#34A853,stroke:#fff,stroke-width:2px,color:#fff
    style Orchestrator fill:#4285F4,stroke:#fff,stroke-width:2px,color:#fff
```
Strategic Decisions
- Why Vertex AI Search? For the "Public Data" (PDFs of account types). We don't need to build a vector DB from scratch.
- Why Agents/Tools? For the "Private Data." The model cannot read the database directly (too risky). It must use a defined API tool (get_balance(user_id)) that enforces strict ACLs.
- Why Guardrails? To prevent the bot from giving investment advice like "Buy Crypto now!"
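The tool boundary can be sketched in plain Python. The schema is what the agent sees; the implementation is what the backend actually runs. Everything here (the function, the session dict, the sample balance) is an illustrative assumption, not a TrustBank API:

```python
# OpenAPI-style schema exposed to the agent: the model can only request
# this one narrow operation, never raw SQL.
GET_BALANCE_SCHEMA = {
    "name": "get_balance",
    "description": "Return the current balance for the authenticated user.",
    "parameters": {
        "type": "object",
        "properties": {"user_id": {"type": "string"}},
        "required": ["user_id"],
    },
}

def get_balance(user_id: str, session: dict) -> str:
    """Server-side implementation that enforces the ACL, not the model."""
    # The model may request any user_id; the backend only honors the one
    # bound to the authenticated session.
    if not session.get("authenticated"):
        return "ERROR: not authenticated"
    if session.get("user_id") != user_id:
        return "ERROR: access denied"
    balances = {"u-123": "€2,450.10"}  # stand-in for the Core Banking DB
    return balances.get(user_id, "ERROR: unknown account")
```

The key design choice: the LLM never touches the database. It emits a structured function call, and the orchestrator routes that call through this gate, so a prompt injection can at worst ask for data the session already owns.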
Phase 2: Feature Implementation
1. The "Balance Check" Prompt (Prompt Engineering)
We need a System Instruction to define the persona.
```python
system_instruction = """
You are TrustBot, a helpful banking assistant.
Tone: Professional, Concise, Secure.
Rules:
1. NEVER reveal a user's password.
2. If asked for investment advice, say: 'I cannot provide financial advice. Please speak to an advisor.'
3. Only use the 'get_balance' tool if the user is authenticated.
"""
```
2. The "Hallucination Defense" (Grounding)
We will enable Grounding with Enterprise Data.
- If the user asks "What is your overdraft fee?", the model MUST cite the "2026 Fee Schedule PDF."
- If it can't find the PDF, it says "I don't know," rather than guessing "$5".
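The grounding contract ("cite a source or refuse") can be expressed as plain Python pseudologic. The document store, fee amount, and retrieval function below are stand-ins for Vertex AI Search, purely for illustration:

```python
# Stand-in for the Vertex AI Search datastore: topic -> (answer, source).
# The fee amount is illustrative, not a real TrustBank figure.
FEE_DOCS = {"overdraft fee": ("$34 per item", "2026 Fee Schedule PDF")}

def grounded_answer(question: str) -> str:
    """Answer only when a retrieved document supports the claim."""
    for topic, (answer, source) in FEE_DOCS.items():
        if topic in question.lower():
            return f"{answer} (source: {source})"
    # No supporting document retrieved: refuse instead of guessing.
    return "I don't know. Let me connect you with a representative."
```

The refusal branch is the hallucination defense: an answer with no retrieved source never reaches the customer.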
3. The "Cost Control" (Model Selection)
- Question: Should we use Gemini Ultra or Gemini Flash?
- Decision: Gemini Flash.
- Reasoning: Checking a balance is a simple task. Flash is roughly 10x cheaper and faster; we don't need Ultra-class reasoning power for this.
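A back-of-the-envelope calculation makes the decision concrete. The per-token prices below are illustrative placeholders that preserve the ~10x ratio, not current Google list prices; substitute real rates before using this:

```python
# Illustrative input-token prices in USD per 1M tokens (NOT real rates).
PRICE_PER_1M_TOKENS = {"flash": 0.15, "ultra": 1.50}

def monthly_cost(model: str, requests: int, tokens_per_request: int) -> float:
    """Estimated monthly spend for a given traffic profile."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1_000_000 * PRICE_PER_1M_TOKENS[model]

# 1M balance checks per month at ~500 tokens each:
flash_cost = monthly_cost("flash", 1_000_000, 500)  # ≈ $75/month
ultra_cost = monthly_cost("ultra", 1_000_000, 500)  # ≈ $750/month
```

At chatbot scale the ratio dominates: the same traffic profile costs an order of magnitude more on the larger model with no quality benefit for a lookup task.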
Phase 3: Governance & Risk
EU AI Act Classification
- System Type: Customer-facing banking AI.
- Risk Level: High Risk (Credit/Financial impact).
- Obligations:
- Human Oversight (Logs must be reviewed).
- Accuracy/Robustness tests (Red Teaming).
- Transparency (Bot must say "I am an AI").
Data Residency
TrustBank operates in Germany.
- Action: We configure the Vertex AI client with location="europe-west3".
- Result: Customer financial data never leaves the EU.
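Residency is easy to break with one mistyped region, so it is worth enforcing in code. A minimal sketch, assuming an EU-region allowlist maintained by the team (the allowlist, project id, and helper name are illustrative; vertexai.init() is the real SDK entry point):

```python
# Approved EU regions for TrustBank workloads (illustrative allowlist).
EU_REGIONS = {"europe-west1", "europe-west3", "europe-west4"}

def init_vertex(project: str, location: str) -> str:
    """Fail closed if anyone tries to initialize outside the EU."""
    if location not in EU_REGIONS:
        raise ValueError(f"{location} is outside the approved EU regions")
    # Real SDK call (commented out here to keep the sketch self-contained):
    # import vertexai
    # vertexai.init(project=project, location=location)
    return f"initialized {project} in {location}"
```

With the check in one shared helper, a developer pasting location="us-central1" from a tutorial gets a hard failure instead of a silent compliance breach.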
Phase 4: Assessment (The Final Quiz)
Would this design pass the Board of Directors?
- Did we prevent training on customer data? Yes, by using Vertex AI (Enterprise), not the public chatbot.
- Did we solve the hallucination problem? Yes, by using RAG and Grounding.
- Did we minimize cost? Yes, by using Gemini Flash.
Conclusion
You have successfully designed a compliant, high-value AI system. This Capstone demonstrates that being a Generative AI Leader isn't about writing code—it's about making the right architectural, financial, and ethical choices to build systems that work.
Good luck on your exam!
Knowledge Check
In the TrustBank scenario, why did we choose to use a 'Tool' (API) to access the Core Banking Database instead of feeding the database schema into the LLM's context window?