Module 10 Lesson 2: Connecting to Tools (Function Calling)

LLMs are smart, but they can't browse the web or calculate math perfectly by themselves. In this lesson, we learn about Function Calling—how LLMs use external tools to get the job done.

As we've learned, LLMs are excellent at predicting the next word, but they are "enclosed" inside their model weights. They can't actually check your bank balance, send an email, or compute √456.23 with 100% accuracy.

To solve this, we give the model Tools. This process is known as Function Calling.

In this lesson, we will see how an LLM acts as the "Brain" that decides when to put down the pen and pick up a calculator.


1. How Function Calling Works

An LLM doesn't actually run code or touch your database. Instead, it generates a specifically formatted piece of text (usually JSON) that tells your application which function to run and with which arguments.

The Workflow:

  1. Preparation: You tell the AI about your tools (e.g., "I have a tool called get_weather that takes a city name"); see the schema sketch after this list.
  2. Trigger: The user asks: "Is it raining in London?"
  3. The Call: The AI realizes it doesn't know the weather. It outputs: {"tool": "get_weather", "location": "London"}.
  4. Execution: Your app sees this JSON, runs the real weather API, and gets the result: "Yes, it's raining."
  5. Final Response: Your app sends the result back to the AI. The AI then says to the user: "Yes, it's currently raining in London."
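
In practice, the Preparation step means sending the model a machine-readable description of each tool, typically as a JSON Schema. Here is a minimal sketch of such a declaration; the exact wrapper format varies by provider, and the shape below simply follows a common convention:

```json
{
  "name": "get_weather",
  "description": "Get the current weather for a city.",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {"type": "string", "description": "City name, e.g. London"}
    },
    "required": ["location"]
  }
}
```

The description fields are not decoration: the model reads them to decide when a tool applies and how to fill in its arguments.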

2. The Logic of Choice

The magic of modern LLMs is their ability to choose the right tool for the right job.

  • If the user asks for a joke, the AI responds normally.
  • If the user asks for a flight price, the AI calls the search_flights tool.

```mermaid
graph TD
    User["User Question"] --> LLM["LLM Router"]
    LLM -- "Conversational" --> Chat["Normal Chat Response"]
    LLM -- "Action Needed" --> Tool["Generate JSON Tool Call"]
    Tool --> App["System Executes Tool (Code, API, DB)"]
    App --> Feedback["Result returned to LLM"]
    Feedback --> Final["Final Answer to User"]
```
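
To make the diagram concrete, here is a minimal, self-contained sketch of the router loop in Python. The model is stubbed out with a fake_llm function so the example runs as-is; in a real application that call would go to your LLM provider, and the tool result would be sent back to the model for a final natural-language answer.

```python
import json

# --- Real tools the application owns ---
def get_weather(location: str) -> str:
    # In a real app this would call an actual weather API.
    return f"Yes, it's raining in {location}."

TOOLS = {"get_weather": get_weather}

def fake_llm(message: str) -> str:
    """Stand-in for a real LLM call. Returns either plain text
    or a JSON tool call, mimicking the router behavior."""
    if "weather" in message.lower() or "raining" in message.lower():
        return json.dumps({"tool": "get_weather", "location": "London"})
    return "Why don't scientists trust atoms? They make up everything."

def handle(user_message: str) -> str:
    reply = fake_llm(user_message)
    try:
        call = json.loads(reply)   # Action needed: a JSON tool call
    except json.JSONDecodeError:
        return reply               # Conversational: pass the text through
    tool = TOOLS[call["tool"]]
    args = {k: v for k, v in call.items() if k != "tool"}
    result = tool(**args)          # Execute the real function
    # In a full loop, `result` goes back to the model for phrasing;
    # here we return it directly to keep the sketch short.
    return result

print(handle("Is it raining in London?"))  # -> Yes, it's raining in London.
print(handle("Tell me a joke."))           # -> plain chat response
```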

3. Why It's More Powerful than RAG Alone

While RAG (Module 7) provides context, Function Calling provides Capabilities.

  • RAG: The AI reads a document about your company's refund policy.
  • Function Calling: The AI actually triggers the refund in your payment system.

This turns the LLM from a "Passive Reader" into an Agent that can act on the world.


4. The Risks of Tool Use

Giving an AI tools is powerful but dangerous. If you give an AI a delete_user tool, you must ensure that its safety filters (Module 8) are robust enough so that a user can't "jailbreak" the AI into deleting accounts.
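
One simple mitigation is to never execute a tool call blindly: check the requested tool against an allowlist, and gate destructive actions behind explicit confirmation. A hedged sketch, reusing the TOOLS registry from the earlier example (the tool names here are hypothetical):

```python
# Tools that change state irreversibly require an extra gate.
DESTRUCTIVE = {"delete_user", "trigger_refund"}
ALLOWED = {"get_weather", "search_flights", "delete_user", "trigger_refund"}

def execute_tool_call(call: dict, user_confirmed: bool = False):
    name = call.get("tool")
    if name not in ALLOWED:
        raise ValueError(f"Unknown tool requested: {name!r}")
    if name in DESTRUCTIVE and not user_confirmed:
        # Surface the call to a human instead of running it.
        return f"Confirmation required before running {name}."
    args = {k: v for k, v in call.items() if k != "tool"}
    return TOOLS[name](**args)
```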


Lesson Exercise

Goal: Model a "Tool Router."

Imagine you are building an AI for a smart home. You have three tools:

  • light_control(room, status)
  • play_music(genre)
  • order_pizza(toppings)

  1. The user says: "It's too dark in here, and I'm hungry for pepperoni."
  2. Which tools should the AI generate calls for?
  3. What should the JSON look like for the light_control call? (One possible answer follows below.)
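
One possible answer, using the same flat JSON style as the get_weather example earlier. The room value is an assumption: "in here" is ambiguous, and a real system would resolve it from device context or ask a follow-up question.

```json
[
  {"tool": "light_control", "room": "living_room", "status": "on"},
  {"tool": "order_pizza", "toppings": ["pepperoni"]}
]
```

Note that play_music is not called: nothing in the request mentions music, so a well-behaved router leaves that tool alone.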

Observation: You've just designed the "Agentic Brain" of a smart home!


Summary

In this lesson, we established:

  • Function Calling allows LLMs to trigger external code and APIs.
  • The LLM acts as the router, deciding which tool to use based on user intent.
  • This workflow turns LLMs into active "Agents" rather than just text predictors.

Next Lesson: We wrap up Module 10 by putting it all together. We'll learn how to build a Simple Workflow that combines prompts, RAG, and tools into a single app.
