Module 3 Lesson 3: Popular Ollama Models
Meet the family. A guide to the most important open-weights models available in Ollama today.
Popular Ollama Models: Picking the Right One
With hundreds of models in the registry, it can be overwhelming to choose. Let's look at the "Mount Rushmore" of open-weights models and where they excel.
1. Meta’s Llama 3 (The Heavyweight Champion)
Llama 3 is arguably the best-known open-weights model family in the world.
- Sizes: 8B and 70B (a 405B variant arrived with Llama 3.1, but it is far too large for most local machines).
- Strengths: Strong general reasoning, a massive training dataset, and excellent at following complex instructions.
- Best for: Everything. If you don't know which model to use, start with llama3.
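Once you have pulled llama3, you can call it from a script as well as from the terminal. Below is a minimal sketch using the official ollama Python package, assuming the Ollama server is running locally and llama3 has already been downloaded; the prompt text is just an example.

```python
# Minimal chat call against a locally running Ollama server.
# Assumes: `pip install ollama`, `ollama pull llama3`, and the server is running.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Explain what an open-weights model is in two sentences."}],
)

# The reply text lives under message -> content in the response.
print(response["message"]["content"])
```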
2. Mistral / Mixtral (The Efficiency Kings)
Built by Mistral AI in France, these models punch far above their weight.
- Sizes: 7B (Mistral) and 8x7B (Mixtral).
- Strengths: "MoE" (Mixture of Experts) architecture allows the model to be fast like a small model but smart like a large one.
- Best for: Rapid-fire chat, coding, and creative writing.
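Because Mistral is often chosen for snappy, interactive chat, streaming the reply token by token is a common pattern. Here is a sketch with the ollama Python package, assuming mistral has been pulled locally; the prompt is purely illustrative.

```python
# Stream a reply from a local Mistral model as it is generated.
# Assumes: `pip install ollama` and `ollama pull mistral` have been run.
import ollama

stream = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Write a two-line poem about fast models."}],
    stream=True,  # yields partial chunks instead of one final response
)

for chunk in stream:
    # Each chunk carries a partial piece of the assistant's message.
    print(chunk["message"]["content"], end="", flush=True)
print()
```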
3. Google’s Gemma 2 (The Academic)
Built using the same technology as Google's Gemini.
- Sizes: 2B, 9B, and 27B.
- Strengths: Excels at math, logical reasoning, and structured data (like JSON).
- Best for: Scientific data processing and logic-heavy workflows.
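Since structured output is one of Gemma 2's selling points, it pairs well with Ollama's JSON mode. A minimal sketch, assuming gemma2:9b has been pulled locally; the keys requested in the prompt are made up for illustration.

```python
# Ask a local Gemma 2 model for structured JSON output.
# Assumes: `pip install ollama` and `ollama pull gemma2:9b`.
import json
import ollama

response = ollama.chat(
    model="gemma2:9b",
    messages=[{
        "role": "user",
        "content": "Return a JSON object with keys 'problem' and 'answer' for: 17 * 23.",
    }],
    format="json",  # constrains the reply to valid JSON
)

data = json.loads(response["message"]["content"])
print(data)
```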
4. Microsoft’s Phi-3 (The Tiny Prodigy)
Microsoft proved that size isn't everything.
- Sizes: Mini (3.8B), Small (7B), Medium (14B).
- Strengths: Extremely small. It can run on a high-end smartphone or a very old laptop.
- Best for: Devices with very low RAM (like a Raspberry Pi or a browser).
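On a low-RAM machine the same client works with phi3:mini; the one-shot generate call keeps things simple. A sketch assuming phi3:mini has been pulled; the prompt is only an example.

```python
# One-shot completion with the tiny Phi-3 model, suitable for low-RAM machines.
# Assumes: `pip install ollama` and `ollama pull phi3:mini`.
import ollama

result = ollama.generate(
    model="phi3:mini",
    prompt="Summarise in one sentence why small language models are useful.",
)

# For generate(), the text comes back under the 'response' key.
print(result["response"])
```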
5. Specialist Models
Sometimes you need a specific tool for a specific job:
- Codellama / Deepseek-Coder: Trained heavily on source code, making them much stronger at writing and explaining programs than general models.
- Llava: A "Multimodal" model. You can send it an image and it can describe it.
- Command R: Optimized for RAG (Retrieval Augmented Generation) and long-context windows.
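To see the multimodal Llava model in action, you can attach an image to a chat message. A minimal sketch with the ollama Python package, assuming llava has been pulled; the file path photo.jpg is a placeholder for any local image.

```python
# Ask the multimodal Llava model to describe a local image.
# Assumes: `pip install ollama`, `ollama pull llava`, and a local file named photo.jpg.
import ollama

response = ollama.chat(
    model="llava",
    messages=[{
        "role": "user",
        "content": "Describe what is in this image.",
        "images": ["photo.jpg"],  # paths (or base64 strings) of images to attach
    }],
)

print(response["message"]["content"])
```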
Model Recommendation Table
| Task | Recommended Model | RAM Needed |
|---|---|---|
| Daily Chat | llama3 | 8GB+ |
| Writing Code | deepseek-coder | 8GB+ |
| Logic/Math | gemma2:9b | 16GB+ |
| Image Analysis | llava | 8GB+ |
| Old Laptop | phi3:mini | 4GB+ |
Practice Exercise
Try running a "specialist" model. If you are a developer, pull deepseek-coder and ask it to write a Python script for a simple web scraper. Note how much faster (or smarter) it is at coding than the general Llama 3 model.
Key Takeaways
- Llama 3 is the best all-rounder.
- Mistral is known for speed and creative writing.
- Gemma 2 is Google's logic-focused offering.
- Phi-3 is for resource-constrained environments.
- Use specialist models for coding and vision to get better results than general models.