Deployment and Final Presentation

Launch your masterpiece. Learn how to deploy your Capstone to the cloud, run a final evaluation on your agent's performance, and present your findings like a professional LLM Engineer.

You have built a functioning, multi-agent AI system. Now comes the most important part of any engineering project: proving that it works and putting it in front of users.

In this final lesson, we guide you through the "Go-Live" phase of your Capstone project.


1. The Final Evaluation: Does it Hallucinate?

Before deployment, you must run your Research Assistant against a "Golden Dataset" of 10 complex questions (a minimal evaluation harness is sketched after the criteria below).

The Evaluation Criteria:

  1. Factuality: Are the citations correct? (Check that the URLs or PDFs actually contain the cited facts).
  2. Completeness: Did the agent skip any part of the user's initial request?
  3. Reasoning Trace: In the LangSmith trace, did the agent make logical leaps, or did it show its work?
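
To make these checks repeatable, script them. The sketch below is a minimal harness, assuming your compiled LangGraph app exposes an .invoke() method and that you have authored a golden_dataset.json file listing each question, the points its answer must mention, and the sources it may cite; the file name, state keys, and field names are illustrative, not part of any library API.

```python
# Minimal "Golden Dataset" evaluation harness (illustrative names throughout).
# Assumption: `app` is your compiled LangGraph graph with an .invoke() method,
# and golden_dataset.json holds a list of
# {"question": ..., "must_mention": [...], "allowed_sources": [...]} entries.
import json


def evaluate(app, dataset_path: str = "golden_dataset.json") -> None:
    with open(dataset_path) as f:
        dataset = json.load(f)

    passed = 0
    for case in dataset:
        result = app.invoke({"question": case["question"]})
        answer = result.get("final_report", "")

        # Completeness: every required point should appear in the answer.
        missing = [p for p in case["must_mention"] if p.lower() not in answer.lower()]

        # Factuality (rough proxy): every citation should come from the allowed sources.
        bad_citations = [
            url for url in result.get("citations", [])
            if url not in case.get("allowed_sources", [])
        ]

        ok = not missing and not bad_citations
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case['question'][:60]}")
        if missing:
            print(f"  missing points: {missing}")
        if bad_citations:
            print(f"  unexpected citations: {bad_citations}")

    print(f"\n{passed}/{len(dataset)} cases passed")
```

Run it before every deployment and keep the pass/fail output alongside your LangSmith traces so regressions are easy to spot.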

2. Deployment: Zero to Production

For your Capstone, we recommend a Serverless Container strategy (Module 11).

  1. Dockerize: Wrap your LangGraph app in a Docker image.
  2. Push to Cloud: Use AWS App Runner or Google Cloud Run. These services automatically scale your app to zero when no one is using it, saving you money.
  3. Expose API: Create an OpenAI-compatible endpoint so you can use other tools (like your own UI) to talk to your brand-new agent (a minimal sketch follows this list).
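
For step 3, here is one way it could look: a minimal FastAPI sketch of an OpenAI-compatible wrapper, assuming your compiled graph is importable as graph with an .invoke() method; the module path and state keys are placeholders for your own project, not a prescribed layout.

```python
# Minimal OpenAI-compatible wrapper around the agent (sketch, not production code).
import time
import uuid

from fastapi import FastAPI
from pydantic import BaseModel

from capstone.graph import graph  # hypothetical module path to your compiled LangGraph app

app = FastAPI()


class ChatRequest(BaseModel):
    model: str
    messages: list[dict]


@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    user_message = req.messages[-1]["content"]          # last user turn
    result = graph.invoke({"question": user_message})   # run the agent graph
    answer = result.get("final_report", "")             # assumed output key

    # Return the minimal fields an OpenAI-style client expects.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": answer},
            "finish_reason": "stop",
        }],
    }
```

Because the response mirrors the Chat Completions shape, an OpenAI-style client or chat UI can talk to your agent simply by overriding its base URL to point at your deployed service.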

3. The Professional Portfolio

An LLM Engineer is judged by their Portfolio. Your Capstone documentation should include:

  • A link to the GitHub Repo.
  • A high-level system architecture diagram (Mermaid).
  • A screen recording of the agent "Thinking" and "Acting."
  • A brief "Lessons Learned" section (e.g., "I discovered that small models are better for planning, but large models are needed for the final synthesis").

4. Graduation: You are an LLM Engineer

By completing this capstone, you have demonstrated mastery of:

  • Python and Async Programming.
  • Vector Search and RAG.
  • Agentic Workflows and Graphs.
  • Security, Performance, and Cloud Scaling.

Summary of the Course

You didn't just learn how to talk to a model. You learned how to Engineer a System that uses models as components.

The industry moves fast. Models will change, APIs will be deprecated, and new research will emerge. But the Systems Thinking you have developed in this course is timeless. You are now prepared to build the future of software.

Congratulations! You have finished the "Become an LLM Engineer" Master Course.


Final Task: The Deployment Checklist

Go through this list one last time before you share your project:

  • Is my System Prompt protected from simple injection?
  • Are my API Keys stored as environment variables (not in the code)? (See the sketch after this list.)
  • Have I tested my RAG system on a document it has never seen before?
  • Does my Final Report format look clean in Markdown?
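
For the API-key item, a minimal pattern looks like the sketch below. It assumes a git-ignored .env file for local development (loaded via the python-dotenv package) and real environment variables set on App Runner or Cloud Run in production; the variable names are examples.

```python
# Read secrets from the environment instead of hard-coding them.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads a local .env file in development; harmless no-op if absent

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
LANGSMITH_API_KEY = os.getenv("LANGSMITH_API_KEY")  # optional: tracing

if not OPENAI_API_KEY:
    # Fail fast at startup rather than at the first model call.
    raise RuntimeError("OPENAI_API_KEY is not set; export it or add it to .env")
```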

Good luck, and happy engineering!
