Module 1 Wrap-up: Inspecting Your Resources

Prepare your machine for Ollama. A hands-on guide to checking your hardware and selecting your first model.

Module 1 Wrap-up: Getting Ready for Action

You have completed the theoretical foundation of this course! You now understand the why of local LLMs (privacy, cost, control) and the how (hardware requirements, CPU vs. GPU).

Before we move to Module 2 and install Ollama, we need to perform a "System Audit" to ensure your hardware is ready to handle the models you want to run.


Hands-on Exercise: The System Audit

Follow these steps based on your operating system:

For Windows Users:

  1. Check RAM: Press Ctrl + Shift + Esc to open Task Manager. Click "Performance" > "Memory." Note the total size (e.g., 16GB).
  2. Check GPU: In Task Manager, click "GPU." Note the "Dedicated GPU memory" (VRAM) and the vendor; NVIDIA cards currently have the most mature GPU acceleration support in Ollama.
  3. Check SSD: Click "Disk" and ensure the type is "SSD."
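
Prefer the terminal? Here is a minimal PowerShell sketch covering the same three checks. It assumes only the built-in CIM and Storage cmdlets, plus an installed NVIDIA driver for the VRAM query:

    # Total installed RAM, in GB
    (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB

    # Dedicated VRAM (only works if the NVIDIA driver is installed)
    nvidia-smi --query-gpu=memory.total --format=csv

    # Drive type: look for MediaType "SSD"
    Get-PhysicalDisk | Select-Object FriendlyName, MediaType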

For macOS Users:

  1. Check Specs: Click the Apple menu in the top-left corner > "About This Mac." Note the chip (M1/M2/M3/M4) and the Memory size.
  2. Check Disk: Open "Disk Utility" to ensure you have at least 20GB of free space.
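
If you would rather use the terminal, these stock macOS commands report the same information (the "Chip" line appears on Apple Silicon Macs):

    # Chip (M1/M2/M3/M4) and total memory
    system_profiler SPHardwareDataType | grep -E "Chip|Memory"

    # Free space on the system volume
    df -h /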

For Linux Users:

  1. Check RAM: Run free -h in your terminal.
  2. Check GPU: Run nvidia-smi (if you have an NVIDIA driver installed) to check VRAM.
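
To round out the audit, check free disk space and drive type as well, using the standard coreutils and util-linux tools (ROTA 0 means non-rotational, i.e. an SSD):

    # Free space on your home partition
    df -h ~

    # Drive type: ROTA 0 = SSD, ROTA 1 = spinning disk
    lsblk -d -o NAME,ROTA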

Selecting Your First Model

Based on your audit, decide which "Starter Model" is right for you:

If your RAM/VRAM is... | Your Starter Model should be... | Ollama Command (Preview)
4GB - 8GB              | Phi-3 Mini (3B)                 | ollama run phi3
8GB - 16GB             | Llama 3 (8B) or Mistral (7B)    | ollama run llama3
16GB - 32GB            | Mistral Nemo (12B) or Command R | ollama run mistral-nemo
64GB+                  | Llama 3 (70B)                   | ollama run llama3:70b
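
If you want to script the decision, here is a minimal bash sketch (Linux; it reuses free from the audit above, and the thresholds simply mirror the table):

    # Total RAM in GB
    ram_gb=$(( $(free -b | awk '/Mem:/ {print $2}') / 1073741824 ))

    if   [ "$ram_gb" -ge 64 ]; then echo "ollama run llama3:70b"
    elif [ "$ram_gb" -ge 16 ]; then echo "ollama run mistral-nemo"
    elif [ "$ram_gb" -ge 8  ]; then echo "ollama run llama3"
    else                            echo "ollama run phi3"
    fi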

Module 1 Summary

  • Local LLMs are private, free to run, and highly customizable.
  • Hardware is the limiting factor: VRAM determines how fast a model runs, while system RAM determines how large a model you can load at all.
  • Ollama is the bridge that makes all of this accessible.

Coming Up Next...

In Module 2, we will install Ollama, explore its Command Line Interface (CLI), and run our very first model. Make sure you have a reliable internet connection ready, as model downloads can run to several gigabytes!


Module 1 Checklist

  • I understand the three benefits of local LLMs (privacy, cost, control).
  • I know how much VRAM/RAM I have.
  • I have selected a starter model based on my specs.
  • I have at least 20GB of free SSD space.
