
AI Security
Module 18 Lesson 3: Adversarial Reprogramming
New task, old model. Learn how attackers 'reprogram' pre-trained models to perform entirely different (and potentially malicious) tasks without changing any of the model's weights.
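
To make the mechanics concrete, here is a minimal sketch following the setup popularized by Elsayed, Goodfellow, and Sohl-Dickstein in "Adversarial Reprogramming of Neural Networks" (2018): a frozen ImageNet classifier is repurposed for MNIST digit classification by training only a single additive input perturbation (the "adversarial program") plus a fixed output-label mapping. This is an illustrative PyTorch sketch, not the lesson's reference code; the choice of ResNet-18, the first-ten-classes label mapping, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# The victim: a frozen, pre-trained classifier. Its weights are never updated.
victim = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in victim.parameters():
    p.requires_grad_(False)

IMG, PATCH = 224, 28                 # victim input size / target-task input size
lo = (IMG - PATCH) // 2              # top-left offset of the embedded digit

# The "adversarial program": one learned perturbation shared by every input.
program = torch.zeros(1, 3, IMG, IMG, device=device, requires_grad=True)
mask = torch.ones(1, 3, IMG, IMG, device=device)
mask[:, :, lo:lo + PATCH, lo:lo + PATCH] = 0   # leave the embedded digit untouched

def reprogram(x):
    """Embed a batch of 28x28 grayscale digits in the centre of the program frame."""
    frame = torch.zeros(x.size(0), 3, IMG, IMG, device=device)
    frame[:, :, lo:lo + PATCH, lo:lo + PATCH] = x.repeat(1, 3, 1, 1)
    return frame + torch.tanh(program) * mask   # bounded additive program

def remap_logits(logits):
    # Simplified output mapping: ImageNet classes 0-9 stand in for digits 0-9.
    return logits[:, :10]

opt = torch.optim.Adam([program], lr=0.05)

def train_step(x, y):
    opt.zero_grad()
    loss = F.cross_entropy(remap_logits(victim(reprogram(x))), y)
    loss.backward()   # gradients reach only `program`; the victim stays frozen
    opt.step()
    return loss.item()
```

A training loop would feed MNIST batches to train_step; once the program converges, the frozen classifier performs digit recognition through the learned frame even though none of its own parameters ever changed, which is exactly what makes the attack hard to spot by auditing weights.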


How attackers inject malicious behavior into models at training time. Explore the mechanics of data poisoning and how even small amounts of bad data can compromise global models.
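
As a rough illustration of why small amounts of bad data matter, here is a minimal label-flipping sketch. It is a toy, not the lesson's material: the function name, the 2% poison rate, and the class indices are all illustrative assumptions.

```python
import numpy as np

def poison_labels(y, source=0, target=1, rate=0.02, seed=0):
    """Flip `rate` of the labels in class `source` to class `target`.

    A model trained on the returned labels learns a corrupted decision
    boundary for the attacked class, even at low poison rates.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    candidates = np.flatnonzero(y == source)       # samples the attacker targets
    n_flip = max(1, int(len(candidates) * rate))   # the "small amount of bad data"
    flipped = rng.choice(candidates, size=n_flip, replace=False)
    y_poisoned[flipped] = target
    return y_poisoned

# Example: poison 2% of the class-0 labels in a toy 10-class label vector.
labels = np.random.default_rng(1).integers(0, 10, size=1000)
poisoned = poison_labels(labels)
print((labels != poisoned).sum(), "labels flipped")
```

In a federated setting the same idea scales up: a handful of clients training on flipped labels can steer the aggregated global model, which is why even a small poisoned fraction of the data is dangerous.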