Module 2 Lesson 3: Supported Operating Systems

Cross-platform AI. Exploring how Ollama runs on macOS, Windows, and Linux, and the unique advantages of each.

Supported Operating Systems: AI Everywhere

One of Ollama's greatest strengths is its cross-platform nature. Whether you are a Windows gamer, a creative using a Mac, or a developer on Linux, Ollama provides a native experience tailored to your OS.

1. macOS (The "Gold Standard")

Ollama launched first on macOS, and for a long time the Mac was the best place to experience it.

  • Installer: A simple .zip; you drag the Ollama app into your Applications folder.
  • Performance: Automatically leverages the Metal API and Unified Memory.
  • Tray App: On Mac, Ollama puts a small icon in the menu bar, making it easy to see when it's running.
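Whatever the OS, the menu-bar (or tray) app fronts the same local HTTP server on port 11434, so you can always verify it from a terminal. A minimal check, assuming curl is installed (works the same way on macOS and Linux):

```shell
# Probe Ollama's default local endpoint (http://localhost:11434).
# The root path responds with "Ollama is running" when the server is up;
# -f makes curl fail on HTTP errors, --max-time keeps the check snappy.
if curl -fsS --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
  STATUS="running"
else
  STATUS="not running"
fi
echo "Ollama server is $STATUS"
```

If the server is stopped, the check simply reports "not running" instead of hanging, which makes it safe to drop into scripts.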

2. Windows (The Powerhouse)

For a long time, Windows users had to use WSL (Windows Subsystem for Linux) to run Ollama. Now, there is a Native Windows Installer.

  • Installer: An .exe file that sets up the system tray application.
  • GPU Support: Detects NVIDIA (CUDA) cards automatically. It also supports AMD GPUs via ROCm (often a bit more complex, but getting easier).
  • The WSL Alternative: Many developers still prefer running Ollama inside WSL2 (Ubuntu) to keep their AI development environment separate from their main OS files.
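If you go the WSL2 route, it can be useful for scripts to know whether they are running inside WSL or on bare Linux. A common heuristic (an informal convention, not an official API) is to look for "microsoft" in the kernel version string:

```shell
# Heuristic WSL check: WSL kernels embed "microsoft" in /proc/version.
# On macOS the file simply does not exist, so the check falls through to "no".
if grep -qi microsoft /proc/version 2>/dev/null; then
  WSL_ENV="yes"
else
  WSL_ENV="no"
fi
echo "Inside WSL: $WSL_ENV"
```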

3. Linux (The Server Choice)

If you are running Ollama on a home server or in the cloud, Linux is the way to go.

  • Quick Install: A single-line script: curl -fsSL https://ollama.com/install.sh | sh.
  • Systemd: Ollama on Linux runs as a system service. This means it starts automatically when the computer boots, which is perfect for headless servers.
  • NVIDIA Drivers: You must have the standard NVIDIA drivers installed for Ollama to "see" your GPU (and the NVIDIA Container Toolkit if you run it in Docker).
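Because the Linux build runs under systemd, you tune it the systemd way: with a drop-in override rather than by editing the unit file directly. As one sketch, Ollama's documented OLLAMA_HOST variable makes a headless server listen on all interfaces instead of just localhost:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Drop-in override for the ollama systemd service.
[Service]
# Listen on all interfaces so other machines on the LAN can reach the API.
Environment="OLLAMA_HOST=0.0.0.0:11434"
```

After saving, run sudo systemctl daemon-reload && sudo systemctl restart ollama to apply it (running sudo systemctl edit ollama creates this file for you).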

4. Docker (The Portable Choice)

If you don't want to install Ollama directly on your OS, you can run it inside a Docker container.

  • Image: ollama/ollama.
  • Advantage: You can spin it up and tear it down in seconds.
  • GPU Passthrough: On Windows and Linux, you can "pass through" your GPU to the container so inference runs on the GPU instead of falling back to the CPU.
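The bullets above can be captured in a Compose file. A minimal sketch, assuming Docker Compose v2 and the NVIDIA Container Toolkit for the GPU reservation (delete the deploy: block for CPU-only use):

```yaml
services:
  ollama:
    image: ollama/ollama          # the official image
    ports:
      - "11434:11434"             # expose the API on the host
    volumes:
      - ollama:/root/.ollama      # persist downloaded models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia      # GPU passthrough (needs NVIDIA Container Toolkit)
              count: all
              capabilities: [gpu]

volumes:
  ollama:
```

Start it with docker compose up -d, then talk to http://localhost:11434 exactly as you would with a native install.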

Summary of OS Differences

| OS      | Installation | GPU Acceleration     | Best Use Case             |
|---------|--------------|----------------------|---------------------------|
| macOS   | Simple App   | Metal (Unified)      | Laptops & Creative Work   |
| Windows | Native .exe  | CUDA (NVIDIA)        | Gaming PCs & General Use  |
| Linux   | Shell Script | CUDA / ROCm          | Home Servers & Cloud      |
| Docker  | docker pull  | Hardware Passthrough | Development & Prototyping |

Visualizing Ollama's Cross-Platform Architecture

graph TD
    User[User Application] --> Ollama[Ollama Server]
    
    subgraph macOS
        Ollama --> Metal[Metal API]
        Metal --> M[Unified Memory]
    end
    
    subgraph Windows
        Ollama --> CUDA[CUDA/ROCm]
        CUDA --> GPU_W[NVIDIA/AMD GPU]
    end
    
    subgraph Linux
        Ollama --> Systemd[Systemd Service]
        Systemd --> GPU_L[GPU Drivers]
    end

Which Should You Choose?

  • Use Native Windows/Mac if this is your primary computer.
  • Use Linux if you have an old PC you've turned into an "AI Box."
  • Use Docker if you are a developer building an app suite (e.g., Ollama + Web UI + Database).

Key Takeaways

  • Ollama is native to Windows, macOS, and Linux.
  • macOS leverages Metal; Windows and Linux leverage CUDA (or ROCm for AMD GPUs).
  • Linux installation is optimized for servers, while Windows/Mac are optimized for desktop users.
