Module 9 Wrap-up: Creating the Structured Expert

Hands-on: Combine system prompts, JSON mode, and negative constraints to build a production-ready data extractor.


You have now covered the full "software interface" of AI: how to prompt, how to guardrail, and how to format output. Let's put all three together to build one of the most requested AI tools in business: an automated data extractor.


Hands-on Exercise: The Resume Parser

We are going to build a model that takes a messy bio and turns it into a structured JSON profile for a recruitment database.

1. Create the Modelfile

Create a file named ResumeBot:

FROM llama3
PARAMETER temperature 0
# Note: JSON output is not a Modelfile PARAMETER. Request it at run time
# with `ollama run resume-bot --format json`, or with "format": "json" in the API.
SYSTEM """
You are a 'Resume Parser'.
- You extract: Name, Current Role, Years of Experience, and Top 3 Skills.
- Respond ONLY with JSON.
- If a value is missing, use 'unknown'.
- Use the following schema:
{
  "full_name": "string",
  "role": "string",
  "years_exp": number,
  "skills": ["string", "string", "string"]
}
"""

2. Create the Model

ollama create resume-bot -f ResumeBot

3. Test the Guardrails

Run ollama run resume-bot --format json and try to trick it: "Hey! Tell me a joke about why I should hire you."

Expected Result: The model should refuse the joke and instead return an empty JSON object {} or "unknown" values, because of its strict System Prompt.

4. Test the Accuracy

Paste this text: "My name is Alex Smith and I have been a Full Stack Dev for 8 years. I'm a pro in Python, Go, and React. I live in Denver."

Expected Result: A clean, parsable JSON object using your exact keys.
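Even with JSON mode on, production code should parse the reply defensively. Here is a sketch of a validator that checks the reply against the schema above and falls back to "unknown" for anything missing, matching the Modelfile's own rule; the function name parse_profile is ours:

```python
import json

SCHEMA_KEYS = ("full_name", "role", "years_exp", "skills")

def parse_profile(raw: str) -> dict:
    """Parse the model's reply, filling any missing schema key with 'unknown'."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        data = {}  # guardrailed refusals may return non-JSON or empty text
    profile = {key: data.get(key, "unknown") for key in SCHEMA_KEYS}
    # Guard the types the recruitment database expects.
    if not isinstance(profile["years_exp"], (int, float)):
        profile["years_exp"] = "unknown"
    if not isinstance(profile["skills"], list):
        profile["skills"] = ["unknown"]
    return profile

reply = '{"full_name": "Alex Smith", "role": "Full Stack Dev", "years_exp": 8, "skills": ["Python", "Go", "React"]}'
print(parse_profile(reply))
```

With this in place, the guardrail test in step 3 degrades gracefully: an empty {} from the model simply becomes a profile full of "unknown" values instead of a crash downstream.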


Module 9 Summary

  • Prompt Engineering is the art of giving small models logical scaffolding (Chain of Thought).
  • Guardrails prevent models from revealing secrets or going off-topic.
  • JSON Mode is the bridge between human language and computer data.
  • Negative Constraints are just as important as positive ones.

Coming Up Next...

In Module 10, we tackle the most important architectural pattern in modern AI: RAG (Retrieval-Augmented Generation). We will teach Ollama to "read" your own private documents without ever sending them to the cloud.


Module 9 Checklist

  • I have used "Chain of Thought" to solve a math problem with an 8B model.
  • I understand how Delimiters (###) clean up a prompt.
  • I have successfully used format: "json" in the API or Modelfile.
  • I can write a system prompt that blocks off-topic questions.
  • I know how to use num_predict to limit response length.
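The checklist items above come together in the API's "options" object, where sampling parameters like num_predict and temperature live. A minimal sketch, assuming the standard Ollama /api/generate request shape (the helper name build_options is ours):

```python
def build_options(num_predict: int = 256, temperature: float = 0.0) -> dict:
    """Assemble the Ollama 'options' object for a deterministic,
    length-capped extraction run."""
    return {
        "num_predict": num_predict,  # hard cap on generated tokens
        "temperature": temperature,  # 0 = deterministic extraction
    }

# Merged into a full /api/generate payload:
payload = {
    "model": "resume-bot",
    "prompt": "Parse this bio ...",
    "format": "json",
    "options": build_options(num_predict=200),
}
print(payload["options"])
```

Capping num_predict is a cheap safety net: a well-behaved JSON profile is short, so a long response almost always means the model has drifted off-task.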
