Module 12 Wrap-up: Inspecting the Machine

Hands-on: Build a Cost-Monitoring Callback that calculates and prints the price of every AI request.

Module 12 Wrap-up: The Watchtower

You have moved beyond "Building" and into "Management." By using Callbacks, you have turned your AI from a mysterious black box into a transparent, observable system. You can now track costs, log errors, and provide a much better user experience with real-time feedback.


Hands-on Exercise: The Token Watcher

1. The Goal

Create a Custom Callback Handler that:

  1. Watches for the on_llm_end event.
  2. Extracts the token_usage from the response metadata.
  3. Prints: "This request cost {Input} input tokens and {Output} output tokens."

2. The Implementation Plan

  • Inherit from BaseCallbackHandler.
  • Read the response.llm_output['token_usage'] dictionary (for OpenAI-style models, the keys are prompt_tokens and completion_tokens; other providers may name them differently).
  • Attach the handler to a simple model.invoke() call.

Module 12 Summary

  • Events: The internal heartbeat of LangChain (Start, End, Error).
  • Handlers: Custom classes that listen for specific events.
  • Propagation: How callbacks passed to a chain reach the models inside.
  • Logging: Using StdOutCallbackHandler for debugging and custom handlers for DB logging.
  • UX: Using callbacks to update spinners and progress bars in frontends.

Coming Up Next...

In Module 13, we level up our management skills with LangSmith. We will move from local terminal logs to a professional Cloud Console for debugging, testing, and monitoring AI agents at scale.


Module 12 Checklist

  • I can list at least 3 types of lifecycle events.
  • I have created a custom class that inherits from BaseCallbackHandler.
  • I understand the difference between passing a callback to a model vs. a chain.
  • I have used verbose=True to see the "Green Text" thoughts of an agent.
  • I understand how callbacks allow me to build better UIs.
