
Module 1 Lesson 4: Understanding Capabilities and Limitations
Know what ChatGPT can do and, more importantly, what it cannot. Avoid common pitfalls like hallucinations and memory limits.

How to identify and mitigate AI bias and 'hallucinations' for more objective and reliable results.

How can we make AI reliable enough for a bank or a hospital? In the final lesson of Module 7, we explore industry best practices for silencing the 'LIAR' in the machine.

How can you tell if an AI is lying? In this lesson, we learn about Logprobs, Self-Consistency checks, and the 'Stochastic Signature' of a hallucination.
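As a small preview of those checks, here is a minimal sketch of the two simplest signals: per-token log-probabilities and a majority vote across repeated samples. It assumes the OpenAI Python SDK (v1+) with an OPENAI_API_KEY in the environment; the model name gpt-4o-mini and the sample question are illustrative placeholders, not part of the lesson.

```python
import math
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
QUESTION = "In what year was the Eiffel Tower completed?"

# Signal 1: token log-probabilities. Low-confidence tokens inside a
# factual answer are a warning sign that the model is guessing.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model that returns logprobs
    messages=[{"role": "user", "content": QUESTION}],
    logprobs=True,
)
for tok in resp.choices[0].logprobs.content:
    print(f"{tok.token!r:>12}  p={math.exp(tok.logprob):.2f}")

# Signal 2: self-consistency. Ask the same question several times at a
# non-zero temperature; if the answers disagree, treat the claim as suspect.
answers = []
for _ in range(5):
    r = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": QUESTION + " Reply with the year only."}],
        temperature=1.0,
    )
    answers.append(r.choices[0].message.content.strip())

answer, votes = Counter(answers).most_common(1)[0]
print(f"Majority answer: {answer} ({votes}/5 agreement)")
```

A stable fact tends to survive resampling; the 'stochastic signature' of a hallucination is that it changes from run to run.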

Why do hallucinations happen? Is it a data gap or a logic failure? In this lesson, we break down the three primary causes of LLM hallucinations: Gaps, Blur, and Eagerness.

Why does an AI sometimes lie with total confidence? In this lesson, we define 'Hallucinations' and learn to identify the difference between a creative slip and a factual failure.

AI is powerful but fallible. Learn how to manage the 'hallucination' risk, detect hidden biases, and protect your company's data in the era of GenAI.
Common failure modes. Why AI makes things up and how to detect biased or incorrect outputs.
Connecting AI to Reality. How to ground AI responses in your own private data to prevent hallucinations.
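In practice, 'grounding' usually means retrieval-augmented generation: fetch the passages of your own data that are relevant to the question and instruct the model to answer only from them. Here is a minimal, dependency-free sketch; the documents and the word-overlap retrieval are deliberately naive stand-ins (real systems use embeddings and a vector store).

```python
# Sketch: grounding a prompt in private data with naive keyword retrieval.
# DOCUMENTS and the question are illustrative placeholders.

DOCUMENTS = [
    "Refund policy: customers may return items within 30 days of delivery.",
    "Shipping: orders over $50 ship free within the continental US.",
    "Support hours: Monday to Friday, 9am to 5pm Eastern.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that pins the model to the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, DOCUMENTS))
    return (
        "Answer using ONLY the context below. "
        "If the context does not contain the answer, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(grounded_prompt("How long do customers have to return an item?"))
```

The escape hatch ("say I don't know") matters as much as the context itself; without it, models tend to paper over gaps with plausible inventions.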
Understanding the glitch. The psychological and technical causes of AI hallucinations in agentic systems.
Ghost tools. How to handle agents that try to execute functions that don't exist in their toolbox.
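A common first line of defense is a dispatcher that checks every requested call against the registry of tools that actually exist, and returns a readable error for the agent to recover from instead of crashing the run. A minimal sketch, with the tool names and the hallucinated call invented for illustration:

```python
# Sketch: catching "ghost tool" calls before execution.
# TOOLBOX and the incoming requests are illustrative placeholders.

def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def search_docs(query: str) -> str:
    return f"3 results for {query!r}"

TOOLBOX = {"get_weather": get_weather, "search_docs": search_docs}

def dispatch(tool_name: str, arguments: dict) -> str:
    """Run a tool if it exists; otherwise return an error the agent can
    read and act on (pick a real tool, or ask the user for help)."""
    tool = TOOLBOX.get(tool_name)
    if tool is None:
        known = ", ".join(sorted(TOOLBOX))
        return f"ERROR: no tool named {tool_name!r}. Available tools: {known}."
    try:
        return tool(**arguments)
    except TypeError as exc:  # wrong or missing arguments are ghosts too
        return f"ERROR: bad arguments for {tool_name!r}: {exc}"

# The agent hallucinated a 'book_flight' tool that was never defined:
print(dispatch("book_flight", {"destination": "Paris"}))
print(dispatch("get_weather", {"city": "Paris"}))
```

Feeding the error string back into the conversation lets the agent self-correct by choosing a real tool on its next step.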
Stick to the facts. Techniques to prevent local AI from making up information.
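Two of the simplest such techniques are greedy decoding (temperature 0) and an explicit permission to admit ignorance. The sketch below combines both against a local Ollama server via its standard /api/generate endpoint; the model name llama3 and the question are placeholders for whatever you have installed.

```python
# Sketch: reining in a local model with greedy decoding plus an
# "admit ignorance" instruction. Assumes a local Ollama server on its
# default port (11434); the model name is illustrative.
import json
import urllib.request

PROMPT = (
    "You are a careful assistant. If you are not certain of a fact, "
    "say \"I'm not sure\" instead of guessing.\n\n"
    "Question: Who won the 1987 Tour de France?"
)

payload = json.dumps({
    "model": "llama3",               # assumption: any locally installed model
    "prompt": PROMPT,
    "stream": False,
    "options": {"temperature": 0},   # greedy decoding: no creative sampling
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Temperature 0 removes sampling randomness but not the underlying knowledge gaps, so it works best combined with grounding in real documents, as covered above.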