Module 13 Lesson 2: LangSmith Overview

The Cloud Control Room. Introduction to LangSmith, the enterprise platform for tracing and optimizing your AI applications.

LangSmith: The Professional Grade Dashboard

While set_debug (Lesson 1) is great for your terminal, it doesn't scale to production: you need a way to see what happened to User X's request yesterday at 3:00 PM. LangSmith is a cloud platform built by the LangChain team to solve exactly this.

1. Traceability

Every time your code runs, LangSmith creates a "Trace." A trace is a visual timeline of every step in your chain (a minimal code sketch follows the list below).

  • You can click on a step and see the Exact Prompt that was sent.
  • You can see how much each step cost in dollars.
  • You can see the latency (how many ms each step took).
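
To make this concrete, below is a minimal sketch of a chain whose single invoke() call produces one trace, with one child run per step (prompt, model, output parser). It assumes the langchain-openai package is installed, an OPENAI_API_KEY is set, and the tracing variables from the next section are in place.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Each of these three steps appears as its own child run in the trace,
# with its exact inputs, outputs, and latency recorded.
prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-4o")
chain = prompt | llm | StrOutputParser()

# One invoke() call = one trace in your LangSmith project.
print(chain.invoke({"text": "LangSmith records every step of a chain."}))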

2. Setting it up

  1. Create an account at smith.langchain.com.
  2. Add your API Key and the following settings to your .env file:
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="your-api-key"
LANGCHAIN_PROJECT="my-cool-agent"

Once these are in your environment, LangChain will automatically start sending traces. You don't need to change a single line of your Python code!
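
If you prefer not to use a .env file, the same four variables can be set in-process before any chain code runs. A minimal sketch with placeholder values:

import os

# Equivalent to the .env entries above; set these before any chain runs.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "your-api-key"   # placeholder
os.environ["LANGCHAIN_PROJECT"] = "my-cool-agent"

# Any LangChain code executed after this point is traced automatically;
# no tracing-specific imports, callbacks, or decorators are needed.

If you stick with the .env file, calling load_dotenv() from the python-dotenv package at startup achieves the same result.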


3. Visualizing a LangSmith Trace

The Mermaid Gantt chart below sketches how a typical trace renders: a parent chain run broken down into its individual steps along a shared timeline.

gantt
    title Agent Execution Timeline
    dateFormat  X
    section Chain
    Total Execution :0, 50
    section Steps
    Prompt Construction :0, 5
    LLM (GPT-4o) Request :5, 30
    Tool (Search) Execution :30, 45
    Final Synthesis :45, 50

4. The "Playground" Feature

If a prompt from yesterday failed, you can open that specific trace in LangSmith and click "Open in Playground." You can edit the prompt and "Re-run" it right there in the browser to find a fix without touching your code.
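
If you would rather locate failing runs programmatically before opening them in the Playground, the LangSmith Python SDK exposes a client. A rough sketch, assuming the langsmith package is installed (the exact filter arguments can vary between SDK versions):

from langsmith import Client

client = Client()  # picks up LANGCHAIN_API_KEY / endpoint from the environment

# List recent runs in this project that ended with an error.
for run in client.list_runs(project_name="my-cool-agent", error=True):
    print(run.id, run.name, run.error)

Each of these runs can then be opened in the LangSmith UI and sent to the Playground.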


5. Engineering Tip: Privacy

When you use LangSmith's cloud offering, you are sending your users' data to a third-party cloud. For highly sensitive enterprise data (legal/health), you may want to use Sovereign Observability (Module 13 of the Agentic course), where logs stay on your own servers.
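
If you stay on the cloud offering, the client also supports masking data before it leaves your process. A hedged sketch using the hide_inputs / hide_outputs hooks (check your SDK version for the exact signatures); the redact helper is purely illustrative:

from langchain_core.tracers.langchain import LangChainTracer
from langsmith import Client

def redact(data: dict) -> dict:
    # Illustrative placeholder: blank out every value before upload.
    return {key: "[REDACTED]" for key in data}

# hide_inputs / hide_outputs run client-side, so the raw user data
# never reaches LangSmith's servers.
client = Client(hide_inputs=redact, hide_outputs=redact)
tracer = LangChainTracer(client=client, project_name="my-cool-agent")

# Pass the tracer explicitly when invoking a chain:
# chain.invoke({"text": "..."}, config={"callbacks": [tracer]})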


Key Takeaways

  • LangSmith provides a visual UI for debugging and monitoring.
  • It tracks Input, Output, Latency, and Cost.
  • Setup is Zero-Code via environment variables.
  • The Playground enables rapid prompt iteration on failing real-world cases.
