
Module 1 Lesson 2: How LLMs Differ from Traditional Software
In this lesson, we explore the fundamental shift in computing: moving from rigid 'If-Then' logic to the fluid, probabilistic nature of Large Language Models.
If you are a developer, you are used to a world where code is predictable. If you write x = 1 + 1, then x is always 2. This is Deterministic Software.
Large Language Models (LLMs) represent a massive shift to Probabilistic Computing. In this lesson, we will compare these two worlds and understand why LLMs behave so differently from the software we've built for the last 50 years.
1. Determinism vs. Probability
Traditional Software (Rules-Based)
Traditional software is built on rigid branches. An engineer writes specific instructions:
"If the user's age is under 18, then show the minor template. Else, show the adult template."
There is no "vibe" or "guesswork". The computer follows the path exactly as specified. If there is a bug, it's because a human wrote the wrong rule.
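For instance, the age check above might look like this (a minimal sketch; the function and template names are illustrative):

```python
def choose_template(age: int) -> str:
    """Deterministic rule: the same input always produces the same output."""
    if age < 18:
        return "minor_template"
    return "adult_template"

print(choose_template(15))  # always "minor_template"
print(choose_template(42))  # always "adult_template"
```

Run it a million times and the results never change; every behavior traces back to a rule someone wrote.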
LLMs (Pattern-Based)
LLMs don't have "If-Then" branches for their logic. Instead, they have billions of weighted connections.
- When you ask an LLM a question, it doesn't "look up" the answer in a database.
- It starts a statistical cascade.
The model looks at your prompt and asks: "Based on everything I've seen, what is the most likely next word?" It picks a word, then repeats the process for the word after that.
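To make that loop concrete, here is a toy sketch of greedy next-word prediction. The probability table is invented purely for illustration; a real LLM derives these probabilities from billions of learned weights, not a hand-written dictionary:

```python
# Toy next-word probabilities, invented for illustration only.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def continue_text(words, steps=4):
    """Repeatedly ask: given the recent context, what is the most likely next word?"""
    words = list(words)
    for _ in range(steps):
        context = tuple(words[-2:])
        candidates = NEXT_WORD_PROBS.get(context)
        if not candidates:
            break
        words.append(max(candidates, key=candidates.get))  # greedy: pick the top word
    return " ".join(words)

print(continue_text(["the", "cat"]))  # "the cat sat on the mat"
```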
```mermaid
graph TD
    subgraph "Traditional Software"
        Input1["User Data"] --> Code["Defined Logic (If/Else)"]
        Code --> Output1["Predictable Result"]
    end
    subgraph "Large Language Models"
        Input2["Prompt Context"] --> Model["Weighted Neural Mesh"]
        Model --> Output2["Probabilistic Response"]
    end
```
2. Why Outputs Vary (The Randomness Factor)
Have you ever noticed that asking the same question to ChatGPT twice can give slightly different answers?
In traditional software, this would be a critical bug. In LLMs, it’s a feature called Sampling. Because the model deals in probabilities, it doesn't always have to pick the absolute #1 most likely word. It can "sample" lower-probability words to provide variety and creativity.
> [!NOTE]
> We control this randomness using a setting called Temperature. Higher temperature = more creative/random. Lower temperature = more focused/predictable.
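To see what Temperature actually does to the probabilities, here is a minimal sketch using invented word scores; most LLM APIs simply expose this as a temperature parameter on the request:

```python
import math
import random

def sample_with_temperature(scores: dict, temperature: float) -> str:
    """Turn raw scores into probabilities (softmax), then sample one word.

    Low temperature sharpens the distribution (predictable);
    high temperature flattens it (more varied/creative).
    """
    scaled = {word: s / temperature for word, s in scores.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    exps = {word: math.exp(s - top) for word, s in scaled.items()}
    total = sum(exps.values())
    words = list(exps)
    weights = [exps[w] / total for w in words]
    return random.choices(words, weights=weights, k=1)[0]

scores = {"blue": 2.0, "grey": 1.0, "green": 0.5}  # invented next-word scores
print([sample_with_temperature(scores, 0.2) for _ in range(5)])  # almost always "blue"
print([sample_with_temperature(scores, 1.5) for _ in range(5)])  # noticeably more variety
```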
3. Debugging vs. Alignment
In traditional software, you find a bug by looking at the specific line of code that caused the failure. You fix the logic, and the bug is gone.
With LLMs, you cannot do this. There is no "line of code" that says "Be rude to the user." That behavior emerges from the complex interaction of billions of parameters. Instead of "fixing code," we use Alignment:
- We give the model feedback on what answers are "good" and "bad."
- The model adjusts its internal weights to prefer the "good" paths (a toy sketch of this feedback loop follows below).
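The real process (for example, reinforcement learning from human feedback) adjusts billions of parameters; the sketch below only illustrates the idea of feedback nudging preferences, with made-up responses and a made-up update rule:

```python
# Made-up preference scores for two candidate behaviors.
preferences = {"polite_reply": 1.0, "rude_reply": 1.0}

def give_feedback(response: str, good: bool, step: float = 0.5) -> None:
    """Nudge the preference up for answers rated 'good' and down for 'bad' ones."""
    preferences[response] += step if good else -step

# Human raters label example outputs; the "weights" drift toward the good path.
give_feedback("polite_reply", good=True)
give_feedback("rude_reply", good=False)

print(max(preferences, key=preferences.get))  # "polite_reply" is now preferred
```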
4. Summary Table
| Feature | Traditional Software | Large Language Models |
|---|---|---|
| Logic | Fixed Rules (Deterministic) | Patterns & Probability (Stochastic) |
| Input | Structured (Data fields) | Unstructured (Natural Language) |
| Maintenance | Code Bug-fixes | Prompt Tuning / Alignment |
| Scaling | Add more features/logic | Add more data/parameters |
Lesson Exercise
The Comparison: Write an "If-Then" pseudo-code block to check if a sentence contains a greeting. Then, write a prompt for an LLM to do the same thing.
Observation: Notice how the pseudo-code requires you to list every possible greeting (Hello, Hi, Greetings, Hey), whereas the LLM "just knows" based on context. A sample sketch for comparison follows below.
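If you want something to compare against, here is one possible sketch; the greeting list and prompt wording are just examples:

```python
# Rules-based approach: every greeting must be enumerated by hand.
GREETINGS = {"hello", "hi", "greetings", "hey", "howdy"}

def contains_greeting(sentence: str) -> bool:
    words = sentence.lower().replace(",", " ").replace("!", " ").split()
    return any(word in GREETINGS for word in words)

print(contains_greeting("Hey, is anyone there?"))     # True
print(contains_greeting("Please close the ticket."))  # False

# LLM approach: describe the task in natural language instead of enumerating rules.
prompt = (
    "Does the following sentence contain a greeting? Answer only YES or NO.\n\n"
    "Sentence: Hey, is anyone there?"
)
# Send `prompt` to any chat model; no list of greetings is required.
```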
What’s Next?
Now that we understand the "Mechanical" difference, let's look at the "Practical" impact. In Lesson 3, we will explore Real-World Uses of LLMs, categorized by where they offer the most value over traditional systems.