Module 2 Wrap-up: Your First Local Chat

Hands-on session: Pulling your first model and having a high-speed conversation with a local AI.

Module 2 Wrap-up: It’s Alive!

You have installed Ollama, explored the architecture, and learned the basic commands. Now it's time for the most exciting part of the course: running your first model.


Hands-on Exercise: The "Llama 3" Challenge

We are going to pull and run Meta's Llama 3 (8B) model. At the time of writing, it is one of the best-performing open models in its size class.

Step 1: Open your Terminal

  • Windows: PowerShell or Command Prompt.
  • macOS: Terminal or iTerm2.
  • Linux: Your default shell.

Step 2: Download and Run

Type the following command and press Enter:

ollama run llama3
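If you prefer to download now and chat later, the same step can be split in two; pull only fetches the model files, while run loads the model and drops you into the chat:

ollama pull llama3
ollama run llama3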

Step 3: Observe the Download

Ollama will show you a progress bar. Llama 3 is roughly 4.7 GB. Depending on your internet speed, this could take 1 to 10 minutes.
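As a rough sanity check: on a 100 Mbps connection (about 12.5 MB/s), 4.7 GB works out to roughly 4,700 ÷ 12.5 ≈ 376 seconds, or a little over six minutes. A gigabit line should finish in well under a minute.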

Step 4: The First Question

Once the download is complete, you will see a >>> prompt. Type: "Tell me a joke about a llama who loves coding."
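The interaction looks roughly like this (illustrative only; the model's actual reply will vary from run to run):

>>> Tell me a joke about a llama who loves coding.
(the response streams in word by word, then the >>> prompt returns for your next message)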


What to Watch For

  1. Speed: Is the text appearing instantly, or is it stuttering? If it's slow, run ollama ps to see whether the model is actually using your GPU (see the example after this list).
  2. Resources: While the model is running, open your Task Manager or Activity Monitor. Watch your RAM usage climb.
  3. Exiting: When you are finished, type /bye or press Ctrl + D.
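To confirm where the model is running, keep the chat open and run ollama ps from a second terminal (a quick check; the exact column names may differ slightly between Ollama versions):

ollama ps
# lists the models currently loaded in memory; the processor column shows
# whether the model is on the GPU ("100% GPU") or has spilled onto the CPU

If part of the model has been offloaded to the CPU, generation will be noticeably slower.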

Challenge: Multi-Model Test

If you have at least 16 GB of RAM, try running a second model to compare them:

ollama run mistral

Ask Mistral for the same joke. Which answer do you find funnier? Which model felt faster?
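For a more controlled comparison, you can also pass the prompt directly on the command line so each model answers once and exits (a small sketch; the prompt is just an example):

ollama run llama3 "Tell me a joke about a llama who loves coding."
ollama run mistral "Tell me a joke about a llama who loves coding."

Running the two back to back makes it easier to judge response speed side by side.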


Module 2 Checklist

  • I have Ollama running on my system.
  • I have verified the installation with ollama --version.
  • I have successfully pulled and run llama3.
  • I can see the "Llama" icon in my menu bar or system tray.
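The first three items are easy to confirm from the terminal; the icon you will have to check by eye:

ollama --version
# prints the installed version, confirming the CLI is on your PATH
ollama list
# shows every model you have pulled so far; llama3 should appear here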

Coming Up Next...

In Module 3, we will move beyond Llama 3 and explore the Ollama Model Registry. We will learn why choosing the right "Tag" matters and how to find specialized models for coding, math, and foreign languages.
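As a small preview, a tag is whatever follows the colon in a model name (illustrative examples; check the registry for which tags actually exist):

ollama run llama3:70b
# the much larger 70-billion-parameter variant; needs far more RAM
ollama pull llama3:instruct
# an instruction-tuned tag from the same model family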


Key Takeaway

You are now part of the local AI revolution. You are no longer dependent on a cloud provider to access state-of-the-art intelligence. Keep it running!
