Module 8 Lesson 4: JavaScript Integration
AI in the browser and on the server. Building with the Ollama JavaScript library.
JavaScript & TypeScript: AI in the Web Stack
For web developers using Node.js, Next.js, or React, Ollama provides a native JavaScript library that follows the same clean patterns as the Python version.
1. Installation
npm install ollama
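The same package also ships a browser build. Recent versions of the library document a separate entry point for client-side code (verify against your installed version):

import ollama from 'ollama/browser'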
2. Server-Side Usage (Node.js)
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama3',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
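By default the client targets a local server at http://127.0.0.1:11434. If Ollama runs elsewhere, the library also exports an Ollama class that accepts a custom host. A minimal sketch, with a placeholder host URL:

import { Ollama } from 'ollama'

// Point the client at a non-default Ollama server.
const client = new Ollama({ host: 'http://192.168.1.50:11434' })

const response = await client.chat({
  model: 'llama3',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)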
3. Streaming in Browser/Node
When you pass stream: true, the library returns an async iterable (built on JavaScript's async generators), so you can consume tokens with for await as they arrive, which is perfect for real-time UI updates.
import ollama from 'ollama'
const message = { role: 'user', content: 'Write a poem about space.' }
const response = await ollama.chat({ model: 'llama3', messages: [message], stream: true })
for await (const part of response) {
  process.stdout.write(part.message.content)
}
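Streams can also be cancelled mid-generation, which matters for chat UIs with a "Stop" button. The library documents an abort() method that cancels in-flight streamed requests and causes the loop to throw an AbortError; the timeout below is a stand-in for a real stop condition:

import ollama from 'ollama'

const stream = await ollama.chat({
  model: 'llama3',
  messages: [{ role: 'user', content: 'Write a very long story.' }],
  stream: true,
})

// Stand-in for a user clicking "Stop" two seconds in.
setTimeout(() => ollama.abort(), 2000)

try {
  for await (const part of stream) {
    process.stdout.write(part.message.content)
  }
} catch (err) {
  // Aborting a stream rejects the iterator with an AbortError.
  console.error('\nStream cancelled:', err)
}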
4. Frontend Integration (React/Next.js)
When calling Ollama from a frontend app:
- CORS: Ensure OLLAMA_ORIGINS is set (see Lesson 1), otherwise the browser will block cross-origin requests to the Ollama server.
- Service layer: It's often better to call Ollama from a "Server Action" or an "API Route" in Next.js (sketched below). This keeps your Ollama endpoint off the public client and lets you swap Ollama for a cloud provider later without touching frontend code.
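To illustrate the second point, here is a minimal sketch of a Next.js App Router route handler that proxies chat requests to a local Ollama server. The file path, request shape, and model name are assumptions for this example:

// app/api/chat/route.ts
import ollama from 'ollama'

export async function POST(req: Request) {
  const { messages } = await req.json()

  // The browser only ever talks to this route; Ollama stays private.
  const response = await ollama.chat({ model: 'llama3', messages })

  return Response.json({ content: response.message.content })
}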
5. TypeScript Support
The library is written in TypeScript, so you get full type definitions and autocompletion for request options, message roles ('user', 'assistant', 'system'), and response shapes.
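For example, a chat history can be typed end-to-end. This sketch assumes the Message type export found in recent versions of the library:

import ollama, { type Message } from 'ollama'

const history: Message[] = [
  { role: 'system', content: 'You are a concise assistant.' },
  { role: 'user', content: 'Summarize the water cycle in one sentence.' },
]

const response = await ollama.chat({ model: 'llama3', messages: history })

// The reply is itself a Message, so it slots straight back into the history.
history.push(response.message)
console.log(response.message.content)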
Key Takeaways
- The ollama-js library works in both Node.js and the browser.
- It uses Async Generators for clean streaming logic.
- TypeScript definitions ensure type safety in your AI applications.
- Be careful with CORS if calling Ollama directly from a client-side website.