Module 1 Wrap-up: Inspecting Your Resources
Prepare your machine for Ollama. A hands-on guide to checking your hardware and selecting your first model.
You have completed the theoretical foundation of local LLMs! You now understand the why (privacy, cost, control) and the how (hardware requirements, CPU vs GPU).
Before we move to Module 2 and install Ollama, we need to perform a "System Audit" to ensure your hardware is ready to handle the models you want to run.
Hands-on Exercise: The System Audit
Follow these steps based on your operating system:
For Windows Users:
- Check RAM: Press Ctrl + Shift + Esc to open Task Manager. Click "Performance" > "Memory." Note the total size (e.g., 16GB).
- Check GPU: In Task Manager, click "GPU." Note the "Dedicated Video Memory" (VRAM) and whether the card is NVIDIA.
- Check SSD: Click "Disk" and ensure the type is "SSD."
For macOS Users:
- Check Specs: Click the Apple Icon () > "About This Mac." Note the processor (M1/M2/M3/M4) and the Memory size.
- Check Disk: Open "Disk Utility" to ensure you have at least 20GB of free space.
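The macOS steps above can also be run from Terminal. This is a sketch using the built-in sysctl and df tools; the uname guard simply keeps it from running on other systems:

```shell
# macOS system audit (sketch) using built-in command-line tools;
# the uname guard keeps this from running on non-macOS systems
if [ "$(uname)" = "Darwin" ]; then
    sysctl -n machdep.cpu.brand_string   # chip, e.g. "Apple M2"
    echo "$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) GB RAM"
    df -h /                              # look for ~20GB free
else
    echo "Run this in the macOS Terminal"
fi
```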
For Linux Users:
- Check RAM: Run free -h in your terminal.
- Check GPU: Run nvidia-smi (if you have an NVIDIA driver installed) to check VRAM.
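The Linux checks above can be combined into one short script. This is a sketch: nvidia-smi only exists when the NVIDIA driver is installed, so it is skipped gracefully otherwise.

```shell
# Linux system audit (sketch): RAM, disk space, and NVIDIA VRAM if present
free -h          # total and available system RAM
df -h /          # free space on the root filesystem (models need ~20GB)
# nvidia-smi ships with the NVIDIA driver; skip it gracefully if absent
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=memory.total --format=csv
else
    echo "No NVIDIA driver found (CPU-only or non-NVIDIA GPU)"
fi
```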
Selecting Your First Model
Based on your audit, decide which "Starter Model" is right for you:
| If your RAM/VRAM is... | Your Starter Model should be... | Ollama Command (Preview) |
|---|---|---|
| 4GB - 8GB | Phi-3 Mini (3B) | ollama run phi3 |
| 8GB - 16GB | Llama 3 (8B) or Mistral (7B) | ollama run llama3 |
| 16GB - 32GB | Mistral Nemo (12B) or Command R | ollama run mistral-nemo |
| 64GB+ | Llama 3 (70B) | ollama run llama3:70b |
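If you prefer to script the decision, the table above can be sketched as a small shell helper. pick_starter_model is a hypothetical name; the thresholds mirror the table, and since the 32GB–64GB range is not listed there, the helper falls back to the 16GB+ tier for it.

```shell
# Hypothetical helper mirroring the starter-model table above.
# Thresholds are inclusive at the lower bound of each tier.
pick_starter_model() {
    ram_gb=$1
    if   [ "$ram_gb" -ge 64 ]; then echo "llama3:70b"
    elif [ "$ram_gb" -ge 16 ]; then echo "mistral-nemo"
    elif [ "$ram_gb" -ge 8 ];  then echo "llama3"
    else                            echo "phi3"
    fi
}

pick_starter_model 16   # prints: mistral-nemo
```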
Module 1 Summary
- Local LLMs are private, free to run, and highly customizable.
- Hardware is the limiting factor: VRAM determines how fast a model runs; system RAM determines how large a model you can load.
- Ollama is the bridge that makes all of this accessible.
Coming Up Next...
In Module 2, we are going to actually install Ollama, explore the Command Line Interface (CLI), and run our very first model. Make sure you have your internet connection ready for the downloads!
Module 1 Checklist
- I understand the 3 benefits of local LLMs.
- I know how much VRAM/RAM I have.
- I have selected a starter model based on my specs.
- I have at least 20GB of free SSD space.
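The free-space item can be verified with a quick check. This is a sketch: it assumes models will download to the current disk and uses the ~20GB figure recommended in the disk-check step above.

```shell
# Quick free-space check (sketch): assumes models download to the
# current disk; 20GB matches the figure recommended in this module
avail_kb=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$avail_kb" -ge $((20 * 1024 * 1024)) ]; then
    echo "OK: enough free space for your first models"
else
    echo "Low on space: free some up before Module 2"
fi
```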