Module 8 Wrap-up: Building Your Python Chatbot


Hands-on: Creating a fully functional, streaming terminal chatbot using Python and Ollama.

Module 8 Wrap-up: From Terminal to Script

You have mastered the API, the official libraries, and the basics of LangChain. Now let's bring it all together by building a Python script: a "Security Auditor" bot that streams its analysis to your console.


Hands-on Exercise: The Python Security Auditor

1. The Setup

Ensure you have the ollama library installed: pip install ollama

2. The Code

Create a file named auditor.py:

import ollama

def audit_code(code_snippet):
    # We use a custom system prompt to set the persona
    messages = [
        {'role': 'system', 'content': 'You are a Senior Security Auditor. Identify vulnerabilities in the following code. Be brief.'},
        {'role': 'user', 'content': code_snippet}
    ]

    print("--- Audit Beginning ---\n")
    
    # Use the streaming feature for a better UX
    stream = ollama.chat(model='llama3', messages=messages, stream=True)
    
    for chunk in stream:
        content = chunk['message']['content']
        print(content, end='', flush=True)

    print("\n\n--- Audit Complete ---")

# A test snippet with a clear SQL injection vulnerability
test_code = "user_input = input(); cursor.execute('SELECT * FROM users WHERE name = ' + user_input)"

audit_code(test_code)

3. Run and Observe

python auditor.py

Watch the model identify the SQL injection and explain why it is dangerous, streaming the words to your screen in real time.
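The fix the auditor should recommend is a parameterized query, which hands user input to the database driver out-of-band so it can never rewrite the SQL. A minimal sketch using Python's built-in sqlite3 (the table and data are illustrative):

```python
import sqlite3

# In-memory database with an illustrative users table
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE users (name TEXT)")
cursor.execute("INSERT INTO users VALUES ('alice')")

# Vulnerable: concatenation lets the input rewrite the SQL itself
# cursor.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

# Safe: the ? placeholder passes the value as data, never as SQL
user_input = "alice' OR '1'='1"
cursor.execute("SELECT * FROM users WHERE name = ?", (user_input,))
rows = cursor.fetchall()
print(rows)  # [] — the injection payload matches no literal name
```

The same pattern (placeholders plus a parameter tuple) applies to most Python database drivers, though the placeholder symbol varies by driver.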


Module 8 Summary

  • The REST API makes Ollama accessible from any language.
  • The Official Python & JS libraries handle the heavy lifting of HTTP and streaming.
  • LangChain allows for modular, swappable AI workflows.
  • Tool Calling gives local models the ability to execute code and query external data.
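The first two bullets fit together: the Python library is a thin wrapper over Ollama's REST endpoint on localhost:11434, which streams newline-delimited JSON. A sketch of calling it directly with requests (assumes Ollama is running locally with llama3 pulled; parse_stream_line is a hypothetical helper name):

```python
import json

def parse_stream_line(line: bytes) -> str:
    """Extract the content text from one NDJSON chunk of /api/chat output.
    (Hypothetical helper; the chunk shape matches Ollama's streaming format.)"""
    chunk = json.loads(line)
    return chunk.get("message", {}).get("content", "")

if __name__ == "__main__":
    import requests  # third-party: pip install requests

    payload = {
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello."}],
        "stream": True,
    }
    # The server sends one JSON object per line until "done": true
    with requests.post("http://localhost:11434/api/chat",
                       json=payload, stream=True) as resp:
        for line in resp.iter_lines():
            if line:
                print(parse_stream_line(line), end="", flush=True)
```

This is exactly the heavy lifting the official library hides behind `ollama.chat(..., stream=True)`.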

Coming Up Next...

In Module 9, we move into "Prompt Engineering 201." We will learn how to build Guardrails for local models to ensure they stay on topic and provide safe, structured outputs.


Module 8 Checklist

  • I have successfully called the Ollama API using curl.
  • I have installed the ollama Python or JavaScript library.
  • I can explain the difference between generate and chat.
  • I have built a streaming terminal application.
  • I understand how LangChain connects to local models.
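On the generate-vs-chat checklist item: generate takes a single prompt string and is stateless, while chat takes a list of role-tagged messages, so you carry the conversation history yourself. A sketch of both call shapes (assumes the ollama library and a running llama3 model; the helper names are illustrative):

```python
def append_turn(history, role, content):
    # Pure helper (illustrative name): extend the history without mutating it
    return history + [{"role": role, "content": content}]

def ask_generate(prompt):
    import ollama  # imported lazily so the sketch parses without the package
    # generate: one-shot prompt in, text out; no roles, no memory
    return ollama.generate(model="llama3", prompt=prompt)["response"]

def ask_chat(history):
    import ollama
    # chat: the full message list is resent every turn, which is how
    # the model "remembers" earlier exchanges
    return ollama.chat(model="llama3", messages=history)["message"]["content"]

if __name__ == "__main__":
    print(ask_generate("Name one defence against SQL injection."))

    history = append_turn([], "user", "Name one defence against SQL injection.")
    answer = ask_chat(history)
    history = append_turn(history, "assistant", answer)
    history = append_turn(history, "user", "Explain it in one sentence.")
    print(ask_chat(history))  # the second turn sees the first answer
```

If you can explain why the second `ask_chat` call needs the whole history while `ask_generate` never could follow up, you have the distinction down.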
