
AI Security
Module 12 Lesson 1: PII in Training Data
Your data, remembered forever. Learn how Large Language Models accidentally memorize and leak Personally Identifiable Information from their training sets.
3 articles


Why models shouldn't talk about their past. Explore the risks of personal data leaking from training sets and the 'over-memorization' problem in LLMs.

How LLMs recite their training data. Explore the 'Memorization vs. Learning' trade-off and how to prevent your model from leaking secrets.
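One practical check the lessons above allude to is testing whether model output reproduces training text verbatim. A minimal sketch of such a leak check, assuming whitespace tokenization and an illustrative n-gram threshold (the function names and example strings here are hypothetical, not from the lessons):

```python
# Flag a generated string as "memorized" if it reproduces any
# sufficiently long n-gram verbatim from the training corpus.

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_memorized(generated, corpus, n=8):
    """Return True if `generated` shares any verbatim n-gram with `corpus`.

    Lower n catches shorter leaks but raises false positives;
    the threshold is an assumption, tune it for your data.
    """
    corpus_grams = ngrams(corpus.split(), n)
    return any(g in corpus_grams for g in ngrams(generated.split(), n))

# Hypothetical training snippet containing PII, and a model output to audit.
training_text = "John Doe lives at 42 Elm Street and his SSN is 123-45-6789"
output = "his SSN is 123-45-6789"
print(is_memorized(output, training_text, n=4))  # → True: verbatim 4-gram leak
```

Real extraction audits compare model continuations against the corpus at scale with suffix arrays or Bloom filters rather than in-memory sets, but the verbatim-overlap idea is the same.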