Module 1 Lesson 3: Installing Model Providers

Connecting to the brains: how to install the specialized packages for OpenAI, Anthropic, and local model providers.

Connecting to Model Providers

LangChain is the "Orchestrator," but it needs a "Brain" to function. These brains come from Model Providers like OpenAI, Google, or Anthropic. To keep the core library small, LangChain puts these connections into separate packages.

1. OpenAI (The Industry Standard)

Most tutorials and enterprise projects start with OpenAI.

pip install langchain-openai
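
Once installed, a quick smoke test confirms the connection. A minimal sketch, assuming OPENAI_API_KEY is set in your environment and your account has access to the model named below:

# Minimal sketch: verify the OpenAI integration end to end.
# Assumes OPENAI_API_KEY is set and the model name is available to you.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model your key can access
response = llm.invoke("Say hello in one short sentence.")
print(response.content)  # the text of the model's reply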

2. Anthropic (For Large Contexts)

If you prefer Claude 3, you need the Anthropic package.

pip install langchain-anthropic
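
Usage mirrors the OpenAI integration. A minimal sketch, assuming ANTHROPIC_API_KEY is set and the Claude model named below is available to your account:

# Minimal sketch: same pattern, different provider package.
# Assumes ANTHROPIC_API_KEY is set in your environment.
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-haiku-20240307")  # any Claude 3 model
response = llm.invoke("Say hello in one short sentence.")
print(response.content)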

3. Local Providers (For Privacy)

If you want to run models on your own machine (Sovereign AI), you might use the Ollama or LlamaCpp integration.

pip install langchain-community

(Note: Most local integrations live inside the langchain-community package).
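
A minimal sketch of the Ollama route, assuming the Ollama server is already running locally (it listens on localhost:11434 by default) and you have pulled a model such as llama3:

# Minimal sketch: local inference through Ollama, no API key needed.
# Assumes `ollama serve` is running and `ollama pull llama3` was done.
from langchain_community.chat_models import ChatOllama

llm = ChatOllama(model="llama3")
response = llm.invoke("Say hello in one short sentence.")
print(response.content)

Newer LangChain releases also publish a dedicated langchain-ollama package; it is worth switching to if the community import warns that it is deprecated.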


4. Why Separate Packages?

By splitting the providers, LangChain ensures that if you are only using OpenAI, you don't have to download 500MB of Google Cloud dependencies. This results in faster deployments and smaller Docker images.
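
The split works because every integration implements the same chat-model interface, so your application code stays provider-agnostic. A sketch (the model names are illustrative):

# Sketch: the same .invoke call works across provider packages,
# so swapping brains means changing one import and one constructor.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

def one_liner(llm, topic: str) -> str:
    # Every LangChain chat model returns a message with a .content field
    return llm.invoke(f"Explain {topic} in one sentence.").content

print(one_liner(ChatOpenAI(model="gpt-4o-mini"), "LangChain"))
print(one_liner(ChatAnthropic(model="claude-3-haiku-20240307"), "LangChain"))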


5. Visualizing the Connection

The round trip, in Mermaid notation: your code calls the provider package, which calls the remote API and returns the response.

graph LR
    LC[LangChain Code] --> P[Provider Package: langchain-openai]
    P --> API[Remote API: api.openai.com]
    API --> Result[Text Response]
    Result --> LC

Key Takeaways

  • Model providers are installed via specific integration packages.
  • langchain-openai is the most common starting point.
  • Separation of packages improves deployment efficiency.
  • You only need to install the package for the model you plan to use.
