
The Practitioner's Dictionary: Common Terminology Explained
Stop being confused by jargon. Master the essential terms from 'Weights' to 'Hallucinations' that will appear on your exam.
Speaking the Language of the Cloud
To pass the AWS Certified AI Practitioner exam, you don't need to do math like a data scientist, but you do need to talk like one. The exam uses specific terms for the lifecycle and behavior of AI models, and if you misunderstand those terms, you will pick the wrong answer even when your logic is sound.
In this lesson, we break down the most essential terms into "Plain English" definitions.
1. The Core Lifecycle Terms
Model
The "Brain" of the AI. It’s the final product of training.
- Analogy: A Cookbook that knows how to turn ingredients into a meal.
Training
The process of showing data to an algorithm so it can build a model.
- Analogy: Studying for a test.
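To make this concrete, here is a minimal training sketch in Python using scikit-learn; the iris dataset and logistic regression are illustrative choices, not anything the exam requires.

```python
# A minimal sketch of training: the algorithm studies labeled data,
# and the fitted model is the final product. Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)          # data plus its ground-truth labels
model = LogisticRegression(max_iter=1000)  # the algorithm, before "studying"
model.fit(X, y)                            # training: studying for the test
```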
Inference
Actually using the trained model to make a prediction or generate an output.
- Analogy: Taking the test.
- Exam Tip: When you call an AWS API (like Rekognition), you are performing Inference.
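For example, here is a hedged boto3 sketch of inference against Rekognition's pre-trained model; the bucket and object names are placeholders you would replace with your own.

```python
# Inference with Amazon Rekognition: a pre-trained model predicts labels
# for a new image. No training happens here; we only use the model.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},  # placeholders
    MaxLabels=5,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```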
Ground Truth
The "Correct Answer" in a dataset.
- Analogy: The Answer Key at the back of the book.
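In code, ground truth is simply the verified label attached to each example; the tiny spam dataset below is made up for illustration.

```python
# Ground truth: each input is paired with the known-correct answer
# that training and evaluation compare predictions against.
dataset = [
    {"email": "Win a FREE prize now!!!", "label": "spam"},      # ground truth
    {"email": "Meeting moved to 3 PM.",  "label": "not_spam"},  # ground truth
]
```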
2. Performance and Error Terms
Bias
When a model has a systematic prejudice toward certain results (often because the training data was slanted).
- Example: A facial recognition model that only recognizes one skin tone because it was only trained on photos of that skin tone.
Hallucination (Generative AI only)
When an LLM (Large Language Model) generates a confident, grammatical, but completely false statement.
- Analogy: A confident liar who makes up facts to sound smart.
Overfitting
When a model learns the training data "too well"—it memorizes the specific examples instead of understanding the general rules.
- Result: It scores 100% on the training material but fails on new, real-world data (see the sketch after this list).
- Analogy: A student who memorizes the positions of the answers (A, B, C) instead of the content of the questions.
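The sketch below demonstrates the symptom with scikit-learn; the synthetic dataset and decision tree are arbitrary stand-ins, chosen because an unconstrained tree memorizes noise easily.

```python
# Overfitting in miniature: a depth-unlimited decision tree memorizes
# noisy training data, so training accuracy is near-perfect while test
# accuracy lags behind. Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, flip_y=0.2, random_state=0)  # noisy labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0)  # no depth limit: free to memorize
tree.fit(X_train, y_train)

print("Training accuracy:", tree.score(X_train, y_train))  # typically 1.0
print("Test accuracy:    ", tree.score(X_test, y_test))    # noticeably lower
```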
Parameters (Weights)
The "Adjustable Knobs" inside a model. When people say "GPT-4 has trillions of parameters," it means the model has trillions of connections that were adjusted during training.
3. The Generative AI Specifics
Foundation Model (FM)
A giant, pre-trained model (like Llama, Claude, or Titan) that is capable of a wide range of tasks (text, images, reasoning).
- Analogy: A Junior College Graduate who has a broad education but needs "specialization" to do a professional job.
Fine-Tuning
Taking a Foundation Model and training it further on a small amount of specialized data to make it an expert in one field (a hedged API sketch follows the example below).
- Example: Taking a general model and training it on legal documents to make it a "Legal AI."
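As a sketch of what that looks like on AWS, the boto3 call below starts an Amazon Bedrock model-customization (fine-tuning) job; every name, ARN, bucket, and hyperparameter value is a placeholder, not a working configuration.

```python
# Fine-tuning on Amazon Bedrock: point a base Foundation Model at a
# small, specialized training set in S3. All identifiers are placeholders.
import boto3

bedrock = boto3.client("bedrock")

response = bedrock.create_model_customization_job(
    jobName="legal-ai-tuning-job",                       # placeholder
    customModelName="legal-ai-model",                    # placeholder
    roleArn="arn:aws:iam::123456789012:role/BedrockFT",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",  # example base FM
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/legal-docs.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2"},                 # illustrative value
)
print(response["jobArn"])
```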
Prompt
The input or "Instruction" you give to a GenAI model.
Prompt Engineering
The art and science of writing better instructions to get better results from a model.
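To see the difference a better instruction makes, here is a sketch that sends a vague prompt and an engineered prompt through Amazon Bedrock's Converse API; the model ID is just an example, and any Bedrock text model your account can access would do.

```python
# Prompt engineering in practice: the same model, two instructions.
# The engineered prompt constrains role, format, and audience.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

vague_prompt = "Tell me about S3."
engineered_prompt = (
    "You are an AWS trainer. In exactly three bullet points, explain "
    "Amazon S3 storage classes to a junior developer, with one example each."
)

for prompt in (vague_prompt, engineered_prompt):
    response = bedrock_runtime.converse(
        modelId="amazon.titan-text-express-v1",  # example model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    print(response["output"]["message"]["content"][0]["text"], "\n---")
```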
Visualizing the Logic Chain
```mermaid
graph LR
  A[Data + Label] -->|TRAINING| B[Algorithm]
  B -->|Generates| C[MODEL: The Knowledge]
  C -->|New Data Added| D[INFERENCE: The Prediction]
  D -->|If results are bad| E{Adjustment}
  E -->|Tune Knobs| F[PARAMETERS]
  E -->|Add more niche data| G[FINE-TUNING]
```
4. Summary Table: The "Cheat Sheet"
| Term | Domain | In Plain English |
|---|---|---|
| Labels | Supervised ML | The answers in the data. |
| Inference | All AI | Using the AI to get a result. |
| Hallucination | GenAI | Making up fake information. |
| Bias | Ethics/Security | Systematic unfairness. |
| Epoch | ML Training | One full pass through the training data. |
Exercise: Use the Terms
You are talking to a developer who says: "Our model works perfectly in the lab, but when we released it to customers, it didn't recognize any of their voices! It seems like the model just memorized the voices of our researchers during the development phase."
Which term best describes this problem?
- A. Inference.
- B. Overfitting.
- C. Hallucination.
- D. Ground Truth.
The answer is B! The model "memorized" (overfit) the training data (the researchers' voices) instead of learning how to recognize voices in general.
Recap of Module 2
We have covered:
- The broad definition of AI.
- The difference between Narrow (specialized) and General (theoretical) AI.
- Why Machine Learning is better than Rule-Based systems for complex tasks.
- The three schools of learning (Supervised, Unsupervised, Reinforcement).
- The essential "Lingo" of an AI Practitioner.
Knowledge Check
In the context of Machine Learning, what is 'Inference'?
What's Next?
We’ve learned the "Core" concepts. Now, let's look at the hottest topic in tech: Module 3: Generative AI Fundamentals. We will go under the hood of Large Language Models and see how a machine "dreams" up new content.