Module 8 Lesson 6: Local Tool Calling

Giving the AI hands. How to let local models run functions, check the weather, or query a database.

Local Tool Calling: Giving the AI a Body

One of the newest and most exciting features of Ollama is Tool Calling (also known as Function Calling). The model can "decide" that a Python function you wrote should be called, and your code then runs it.

1. How it Works

  1. Definitions: You give the model a list of tools it can use (e.g., get_weather, query_database).
  2. Request: You ask the model: "Is it going to rain in New York?"
  3. The Decision: Instead of guessing the weather, the model returns a "Tool Call" JSON object.
  4. The Action: Your Python script sees the tool call, calls the real weather API, and sends the result back to the model.
  5. The Answer: The model reads your API result and tells the user: "Yes, it will rain!"
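Concretely, the "Tool Call" object in step 3 looks something like this. The field names follow Ollama's chat API; the values are illustrative:

```python
# What the model returns instead of a text answer (step 3). The model
# fills in the function name and arguments; your script does the rest.
tool_call = {
    'function': {
        'name': 'get_weather',
        'arguments': {'city': 'New York'},
    }
}

print(tool_call['function']['name'])       # which function to run
print(tool_call['function']['arguments'])  # the arguments the model chose
```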

2. Using Llama 3.1 for Tools

Llama 3.1 is one of the first small models that reliably chooses the right tool (in Ollama, tool calling requires a model with tool support, such as llama3.1). To use it in Python with the official library:

import ollama

# 1. Define the tool
tools = [{
  'type': 'function',
  'function': {
    'name': 'get_weather',
    'description': 'Get the current weather in a location',
    'parameters': {
      'type': 'object',
      'properties': {
        'city': {'type': 'string', 'description': 'The city name'},
      },
      'required': ['city'],
    },
  },
}]

# 2. Call the model with the tool list
response = ollama.chat(
  model='llama3.1',
  messages=[{'role': 'user', 'content': 'What is the weather in NYC?'}],
  tools=tools,
)

# 3. Check for tool calls
print(response['message'].get('tool_calls'))
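If `tool_calls` is non-empty, your script runs the function itself and hands the result back with a `role: 'tool'` message, then calls `ollama.chat` again so the model can phrase the final answer. A sketch with a hard-coded tool call (the same shape the library returns); `get_weather` here is a hypothetical stand-in for a real API:

```python
def get_weather(city: str) -> str:
    # Hypothetical stand-in for a real weather API call.
    return f"70% chance of rain in {city} today"

# A tool call in the shape Ollama returns, hard-coded for illustration:
tool_call = {'function': {'name': 'get_weather', 'arguments': {'city': 'NYC'}}}

# Step 4: dispatch on the function name and run it with the model's arguments.
available = {'get_weather': get_weather}
fn = tool_call['function']
result = available[fn['name']](**fn['arguments'])

# Step 5: send the result back so the model can answer in plain English.
messages = [
    {'role': 'user', 'content': 'What is the weather in NYC?'},
    {'role': 'tool', 'content': result},
]
# final = ollama.chat(model='llama3.1', messages=messages)
# print(final['message']['content'])
print(result)
```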

3. Why this is a Revolution for Local AI

Before native tool calling, developers had to "hack" the prompt to coerce the model into emitting a parseable format such as JSON. Now it is built in. You can build a local "Agent" that can:

  • Search your local files.
  • Control your smart home lights.
  • Run terminal commands (Use with caution!).

4. Limitations of Small Models

While Llama 3.1 (8B) is good, it can sometimes get confused if you give it too many tools (e.g., more than 5).

  • Tip: Only provide the tools that are relevant to the current conversation to keep the model focused and accurate.
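One simple way to apply this tip is to filter the tool list before each call. The sketch below keeps only tools whose name or description shares a keyword with the user's message; `relevant_tools` is an illustrative helper, not a production router:

```python
STOPWORDS = {'the', 'a', 'an', 'is', 'in', 'of', 'to', 'what'}

def relevant_tools(tools, user_message, max_tools=5):
    """Keep tools whose name or description shares a word with the
    user's message -- a naive keyword filter, not a production router."""
    words = set(user_message.lower().split()) - STOPWORDS
    scored = []
    for tool in tools:
        fn = tool['function']
        text = (fn['name'].replace('_', ' ') + ' ' + fn['description']).lower()
        scored.append((len(words & (set(text.split()) - STOPWORDS)), tool))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [tool for score, tool in scored[:max_tools] if score > 0]

tools = [
    {'function': {'name': 'get_weather',
                  'description': 'Get the current weather in a location'}},
    {'function': {'name': 'query_database',
                  'description': 'Run a SQL query against a sales database'}},
]

selected = relevant_tools(tools, 'What is the weather in NYC?')
print([t['function']['name'] for t in selected])  # ['get_weather']
```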

Key Takeaways

  • Tool Calling allows models to interact with the real world via code.
  • Llama 3.1 and Mistral are among the best local models for tool use.
  • The model doesn't "run" the function; it requests that your script runs it.
  • This is the fundamental building block of Autonomous AI Agents.
