
Module 9 Lesson 3: Datasets for Fine-Tuning
How much data do you really need to teach an AI a new trick? In our final lesson of Module 9, we learn about the 'Less is More' philosophy of fine-tuning datasets.
8 articles

How much data do you really need to teach an AI a new trick? In our final lesson of Module 9, we learn about the 'Less is More' philosophy of fine-tuning datasets.

How do you customize a 70-billion-parameter model on a single GPU? In this lesson, we learn about LoRA and PEFT: the breakthroughs that democratized AI fine-tuning.
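The core trick behind LoRA can be shown in a few lines. The sketch below is a minimal NumPy illustration of the idea, with made-up dimensions rather than any real model's: the large pretrained weight matrix is frozen, and only two small low-rank matrices are trained.

```python
import numpy as np

# Minimal sketch of the LoRA idea (illustrative dimensions, not from
# any real model): W stays frozen; only the small matrices A and B
# are trained, and their product forms a low-rank update to W's output.
d_out, d_in, r = 1024, 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: update starts at 0
alpha = 16                                  # scaling; effective update is (alpha / r) * B @ A

def forward(x):
    # Frozen path plus the scaled low-rank path.
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = d_out * d_in          # what full fine-tuning would update
lora_params = r * (d_in + d_out)    # what LoRA trains instead
print(f"trainable fraction: {lora_params / full_params:.4%}")  # 1.5625%
```

Because B starts at zero, the model's behavior is unchanged at the start of training; only the gradient updates to A and B (a tiny fraction of the full weight count) carry the new behavior.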

General-purpose LLMs are good for many things, but sometimes you need a specialist. In this lesson, we explore the reasons to fine-tune your own version of an LLM.

An LLM isn't 'born' knowing how to be a helpful assistant. It goes through two distinct life stages: Pretraining and Fine-Tuning. Learn why both are critical.
RAG vs Fine-Tuning. Knowing when to give the AI a book and when to perform surgery on its brain.
Garbage in, garbage out. How to format your data in JSONL for successful fine-tuning.
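As a taste of what that data format looks like: the sketch below writes a hypothetical two-example training file in JSONL, one JSON object per line. The `{"messages": [...]}` chat layout is one common convention (used by the OpenAI fine-tuning API); other trainers expect fields like `"prompt"`/`"completion"` instead.

```python
import json

# Two hypothetical training examples in chat format. Each JSONL line
# is a complete, standalone JSON object.
records = [
    {"messages": [
        {"role": "system", "content": "You are a terse SQL tutor."},
        {"role": "user", "content": "What does GROUP BY do?"},
        {"role": "assistant", "content": "It collects rows sharing a value so aggregates run per group."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a terse SQL tutor."},
        {"role": "user", "content": "What is a LEFT JOIN?"},
        {"role": "assistant", "content": "All left-table rows, plus matching right-table rows where they exist."},
    ]},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Sanity check: every line must parse as standalone JSON, and every
# example should end with the assistant turn the model is meant to learn.
with open("train.jsonl", encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f]
assert all(rec["messages"][-1]["role"] == "assistant" for rec in parsed)
print(f"{len(parsed)} valid examples")
```

Validating each line independently like this catches the most common JSONL mistake: a pretty-printed JSON array spread across lines, which is valid JSON but not valid JSONL.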
Review and Next Steps. Transitioning from a model user to a model builder.
Fine-tuning the engine. A dictionary of PARAMETER options to control speed, creativity, and memory.
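To make that dictionary of options concrete, here is a hedged sketch of common generation parameters grouped by what they control. The names follow Ollama's Modelfile PARAMETER directives as one example runtime; exact names and defaults vary between runtimes, so treat the values as illustrative.

```python
# Common generation parameters, grouped by effect. Names follow
# Ollama's Modelfile PARAMETER directives (one possible runtime;
# other runtimes use different names and defaults).
parameters = {
    # creativity
    "temperature": 0.7,  # higher = more random token sampling
    "top_p": 0.9,        # nucleus sampling: keep the smallest token set covering 90% probability
    "top_k": 40,         # only sample from the 40 most likely tokens
    # memory
    "num_ctx": 4096,     # context window in tokens; RAM/VRAM use grows with it
    # speed
    "num_predict": 256,  # cap on generated tokens per response
}

for name, value in parameters.items():
    print(name, value)
```

Lowering `temperature` toward 0 makes output more deterministic, while raising `num_ctx` trades memory for longer conversations; the three sampling knobs interact, so they are usually tuned one at a time.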