Module 2 Lesson 5: Provider-Agnostic Setup

Hands-on: implementing a model switcher that lets you change the "brain" of your app through a simple configuration value.

Agnostic Engineering: The Model Swapper

In this lesson, we put "abstraction" into practice. We will build a simple factory function that returns either an OpenAI model or an Anthropic model based on a string name. This is the professional way to build AI apps: never lock yourself into a single vendor.

1. The Multi-Provider Script

Ensure you have both provider packages installed: pip install langchain-openai langchain-anthropic

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Load OPENAI_API_KEY and ANTHROPIC_API_KEY from a .env file
load_dotenv()

def get_model(provider="openai"):
    """Factory function: return a chat model for the given provider name."""
    if provider == "openai":
        return ChatOpenAI(model="gpt-4o-mini")
    elif provider == "anthropic":
        return ChatAnthropic(model="claude-3-haiku-20240307")
    else:
        raise ValueError(f"Unsupported provider: {provider}")

# Test switching: the calling code is identical for both providers
for p in ["openai", "anthropic"]:
    model = get_model(p)
    print(f"--- Testing {p} ---")
    print(model.invoke("Who are you?").content)

2. Why "Agnosticism" reduces Cost

During development, you might use a cheap model like Claude 3 Haiku or GPT-4o Mini. When you move high-value tasks to production, you switch the config to GPT-4o or Claude 3.5 Sonnet. Because the LangChain abstraction keeps your calling code identical, that change takes seconds instead of hours.
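To make "switch the config" concrete, here is a minimal sketch that drives the factory from environment variables. The MODEL_PROVIDER and MODEL_NAME variable names are our own convention for this example, not a LangChain standard:

import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

load_dotenv()

def get_model_from_env():
    # MODEL_PROVIDER / MODEL_NAME are hypothetical config names, e.g.
    # MODEL_PROVIDER=anthropic MODEL_NAME=claude-3-5-sonnet-20240620
    provider = os.getenv("MODEL_PROVIDER", "openai")
    name = os.getenv("MODEL_NAME", "gpt-4o-mini")
    if provider == "openai":
        return ChatOpenAI(model=name)
    elif provider == "anthropic":
        return ChatAnthropic(model=name)
    raise ValueError(f"Unsupported provider: {provider}")

model = get_model_from_env()  # swap providers without touching this code

With this in place, moving from a cheap development model to an expensive production model is a one-line change in your deployment environment, not a code change.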


3. Standardizing Parameters

Even though providers have historically used different names (Anthropic's legacy Completions API used max_tokens_to_sample, while its current Messages API and OpenAI both use max_tokens), LangChain's chat model classes expose a common set of parameters such as temperature and max_tokens and map them to each provider's native names.

# This parameter name will generally work across providers in LangChain
model = ChatOpenAI(model="...", temperature=0.7)
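As a quick sanity check, here is a sketch applying the same keyword arguments to both classes; temperature and max_tokens are accepted by both ChatOpenAI and ChatAnthropic in current LangChain releases:

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# The same kwargs work on both classes; LangChain maps them to each
# provider's native parameter names under the hood.
common = {"temperature": 0.7, "max_tokens": 256}

gpt = ChatOpenAI(model="gpt-4o-mini", **common)
claude = ChatAnthropic(model="claude-3-haiku-20240307", **common)

This is what makes the factory pattern safe: the parameters you tune do not have to change when the provider does.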

4. Visualizing the Switcher

graph TD
    Config[Config: provider='anthropic'] --> Logic[Factory Function]
    Logic -->|Choice| A[Anthropic Driver]
    Logic -->|Choice| O[OpenAI Driver]
    A --> API[External Service]
    O --> API

Key Takeaways

  • Factory functions are a clean, common pattern for loading models.
  • Provider modularity allows for rapid cost optimization.
  • LangChain normalizes parameters like temperature and max tokens.
  • Always test your app's logic with at least two different models to ensure it doesn't break on one model's quirks (see the sketch below).
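To make that last point concrete, here is a minimal sketch of a parametrized test using pytest (our choice of test runner, not something the lesson mandates). The my_app module name is hypothetical; it stands in for wherever you defined get_model:

import pytest

from my_app import get_model  # hypothetical module holding the factory from section 1

@pytest.mark.parametrize("provider", ["openai", "anthropic"])
def test_model_answers(provider):
    model = get_model(provider)
    answer = model.invoke("Reply with the single word: pong").content
    # Assert on your app's expectations, not on provider-specific phrasing
    assert "pong" in answer.lower()

Running the same test against both providers catches cases where your prompts or parsing silently depend on one model's quirks.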
