
AI Security
Module 5 Lesson 4: Model Inversion Attacks
Reverse-engineering the training set. Learn how attackers work backwards from a model's outputs to reconstruct the sensitive images or text used in training.
2 articles


How to craft the perfect attack. Understand the difference between having the model's 'code' (white-box access) and only having its 'answers' (black-box access).
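To make the white-box idea concrete, here is a minimal sketch of model inversion against a hypothetical toy classifier (the weight matrix `W`, the `invert` helper, and all parameters are illustrative assumptions, not part of the lesson). With full access to the model's internals, an attacker can run gradient ascent on the *input* to find the pattern the model most strongly associates with a target class, which is how class-representative training data gets reconstructed:

```python
import numpy as np

# Hypothetical toy classifier: softmax over W @ x.
# White-box setting: the attacker knows W exactly.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # 3 classes, 8 input features


def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()


def invert(target_class, steps=200, lr=0.5):
    """White-box model inversion sketch: gradient ascent on the
    input x to maximize the model's confidence in target_class."""
    x = np.zeros(8)
    onehot = np.eye(3)[target_class]
    for _ in range(steps):
        p = softmax(W @ x)
        # Gradient of log p[target_class] w.r.t. x is (onehot - p) @ W
        x += lr * ((onehot - p) @ W)
    return x, softmax(W @ x)[target_class]


x_reconstructed, confidence = invert(target_class=1)
print(confidence)  # climbs toward 1 as x converges on the class pattern
```

In the black-box setting the attacker has no gradients, only query answers, so the same search must be approximated with repeated probing (e.g. finite-difference estimates), which is slower and noisier; that gap is exactly the white-box vs black-box distinction the article covers.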