Module 1 Lesson 5: First LangChain Run
The Hello World of AI. Initializing your first chat model and making a successful invocation.
Your First LangChain App
You have your environment, your library, and your API key. Now it's time to run the "Hello World" of the agentic age. We will initialize a chat model and ask it a simple question.
1. The Code
Create a file named hello_langchain.py:
```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

# 1. Load your API key
load_dotenv()

# 2. Initialize the model
# We set temperature=0 for consistent, non-creative results
model = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# 3. Invoke the model
response = model.invoke("Hello! What are you capable of?")

# 4. Print the result
print(response.content)
```
2. Breaking Down the Parameters
- `model="gpt-4o-mini"`: We use the 'mini' size because it is faster and cheaper for learning.
- `temperature=0`: Controls randomness. At 0, the model gives the same answer to the same question every time; at 1, it is much more creative.
- `.invoke()`: The standard method for sending input to any LangChain component.
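To build intuition for what temperature actually does, here is a toy sampler (plain Python, not LangChain code) that picks a token index from a list of scores. At temperature 0 it always takes the highest score; at higher temperatures the choice becomes increasingly random:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Toy illustration: temperature 0 always picks the top-scoring option."""
    if temperature == 0:
        # Deterministic: argmax, same result every call
        return max(range(len(logits)), key=lambda i: logits[i])
    # Higher temperature flattens the distribution, making low scores more likely
    weights = [math.exp(score / temperature) for score in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 0.5, 0.1]
# temperature=0 is reproducible: five calls, five identical answers
assert all(sample_with_temperature(logits, 0) == 0 for _ in range(5))
```

Real models sample tokens the same general way, which is why `temperature=0` is the usual choice for tutorials and tests.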
3. The Response Object
In LangChain, a model doesn't just return a string of text. It returns an AIMessage object.
- `response.content`: The actual text.
- `response.response_metadata`: Information about token usage and model flags.
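You can explore this shape without spending API tokens. The stand-in class below (a hypothetical `FakeAIMessage`, not the real LangChain class) mimics the two fields the lesson uses:

```python
from dataclasses import dataclass, field

@dataclass
class FakeAIMessage:
    """Stand-in mirroring the two AIMessage fields used in this lesson."""
    content: str
    response_metadata: dict = field(default_factory=dict)

msg = FakeAIMessage(
    content="I can answer questions, summarize text, and more.",
    response_metadata={"token_usage": {"completion_tokens": 12}},
)

print(msg.content)                    # the text you usually want
print(msg.response_metadata)          # extra details, e.g. token usage
```

The real `AIMessage` carries more fields, but `content` and `response_metadata` are the two you will reach for most often.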
4. Troubleshooting Common Errors
"Authentication Error"
- Cause: Your API key is wrong or your `.env` file isn't being loaded.
- Fix: Check that `.env` is in the same folder as your script.
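A quick way to diagnose this is to check whether the key actually made it into the environment after `load_dotenv()`. The helper below (`check_api_key` is a hypothetical name, not part of any library) returns a human-readable verdict:

```python
import os

def check_api_key(env=os.environ):
    """Return a diagnostic message about the OPENAI_API_KEY variable."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        return "Missing: is .env in the same folder, and did load_dotenv() run first?"
    if not key.startswith("sk-"):
        return "Suspicious: OpenAI API keys normally start with 'sk-'"
    return "OK"

# Call this right after load_dotenv() in your script:
print(check_api_key())
```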
"Model Not Found"
- Cause: You misspelled the model name.
- Fix: Use standard names like `gpt-4o-mini` or `gpt-3.5-turbo`.
Key Takeaways
- Initialization takes a model name; temperature is optional but worth setting explicitly.
- `invoke()` is the primary way to interact with models.
- The model returns a response object (`AIMessage`), not just text.
- Understanding the Temperature setting is the first step in control.