
Compliance & Regulation: The EU AI Act and Beyond
The legal landscape is changing. Learn about the risk-based approach of the EU AI Act and how to classify your AI projects to stay legal.
The Wild West is Over
For a long time, AI was largely unregulated. That changed with the EU AI Act. Even if you are not in Europe, this legislation triggers the "Brussels Effect": EU rules become de facto global standards that most multinational companies follow.
As a Leader, you need to know which of your projects are "Banned," "High Risk," "Limited Risk," or "Minimal Risk."
1. The Risk-Based Pyramid
The EU AI Act does not treat all AI the same. It classifies systems by risk.
```mermaid
graph TD
    subgraph "Unacceptable Risk (BANNED)"
        A[Social Scoring, Real-time Bio-surveillance]
    end
    subgraph "High Risk (HEAVILY REGULATED)"
        B[Hiring, Medical Devices, Credit Scoring, Law Enforcement]
    end
    subgraph "Limited Risk (TRANSPARENCY)"
        C[Chatbots, Deepfakes, Emotion Recognition]
    end
    subgraph "Minimal Risk (NO RESTRICTION)"
        D[Spam Filters, Video Games, Inventory Management]
    end
    A --> B
    B --> C
    C --> D
    style A fill:#EA4335,stroke:#fff,stroke-width:2px,color:#fff
    style B fill:#FBBC04,stroke:#fff,stroke-width:2px,color:#fff
    style C fill:#4285F4,stroke:#fff,stroke-width:2px,color:#fff
    style D fill:#34A853,stroke:#fff,stroke-width:2px,color:#fff
```
1. Unacceptable Risk (Prohibited)
- Examples: Government social scoring (think Black Mirror), subliminal manipulation of children, real-time remote biometric identification in public spaces (allowed only under narrow law-enforcement exceptions).
- Action: Do not build these.
2. High Risk (Regulated)
- Examples: Anything that shapes a person's life chances, such as CV-sorting algorithms for hiring, loan-approval systems, and exam grading.
- Action: Must have strict logs, human oversight, high-quality data governance, and detailed documentation.
3. Limited Risk (Transparency)
- Examples: Chatbots (Customer Service), Deepfakes.
- Action: Disclosure. You must tell the user "You are talking to a machine." You must label deepfakes as "AI Generated."
4. Minimal Risk
- Examples: Roughly 90% of use cases, such as spam filters, search engines, video games, and inventory optimization.
- Action: No new obligations.
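To make the pyramid actionable, here is a minimal triage sketch in Python. The keyword lists, the triage function, and its verdict strings are illustrative assumptions for a first-pass screen of your project portfolio, not a legal determination; actual classification belongs with counsel.

```python
# Hypothetical first-pass screen. The keyword sets mirror the pyramid above,
# but they are illustrative assumptions, not a legal test.
BANNED = {"social scoring", "subliminal manipulation", "real-time biometric"}
HIGH_RISK = {"hiring", "credit scoring", "medical", "exam grading", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake", "emotion recognition"}

def triage(use_case: str) -> str:
    """Map a use-case description to an EU AI Act risk tier (sketch only)."""
    text = use_case.lower()
    if any(term in text for term in BANNED):
        return "UNACCEPTABLE RISK: do not build"
    if any(term in text for term in HIGH_RISK):
        return "HIGH RISK: logs, human oversight, data governance, documentation"
    if any(term in text for term in LIMITED_RISK):
        return "LIMITED RISK: disclose AI use and label generated content"
    return "MINIMAL RISK: no new obligations"

# 'hiring' outranks 'chatbot', so this returns the HIGH RISK verdict.
print(triage("CV-sorting chatbot for hiring"))
```

Note the ordering: the checks run from the most to the least severe tier, so a project that touches a high-risk domain is never downgraded just because it also happens to be a chatbot.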
2. Generative AI Specifics (GPAI)
The Act has specific rules for General Purpose AI (GPAI) models like Gemini.
- Copyright: Providers must have a policy to comply with EU copyright law, including honoring text-and-data-mining opt-outs.
- Training Data Summary: Providers must publish a detailed summary of the content used to train the model.
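These two duties translate naturally into a per-release compliance artifact. Below is a minimal sketch of a hypothetical record format; the class and field names are inventions for illustration, not anything the Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class GPAIComplianceRecord:
    """Hypothetical per-release record covering the two GPAI duties above."""
    model_name: str
    copyright_policy_url: str       # evidence the provider respects EU copyright law
    training_data_summary_url: str  # link to the published training-content summary

record = GPAIComplianceRecord(
    model_name="internal-gpai-v1",
    copyright_policy_url="https://example.com/copyright-policy",
    training_data_summary_url="https://example.com/training-data-summary",
)
print(record)
```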
3. Watermarking (SynthID)
To comply with Transparency rules, Google introduced SynthID.
- It embeds an invisible watermark directly into the pixels of Imagen-generated images or the audio waveform of AI-generated speech.
- It allows tools to detect if content is AI-generated, even if it is cropped or resized.
- Leadership Move: Enable SynthID for all external-facing content to ensure compliance with future "Right to Know" laws.
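If your images come from Vertex AI, watermarking can be switched on at generation time. A minimal sketch, assuming the preview vision SDK; the model ID and the add_watermark flag are assumptions to verify against the current Google Cloud documentation.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

# Placeholders -- substitute your own project and region.
vertexai.init(project="my-gcp-project", location="us-central1")

# Model ID and the add_watermark flag should be checked against current docs.
model = ImageGenerationModel.from_pretrained("imagegeneration@006")
response = model.generate_images(
    prompt="Product hero shot of a smart speaker on a wooden desk",
    number_of_images=1,
    add_watermark=True,  # embed the invisible SynthID watermark in the pixels
)
response.images[0].save(location="hero_shot.png")
```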
4. Summary of Module 5
- Principles: Google's 7 Principles guide ethical development.
- Privacy: Vertex AI isolates enterprise data; Consumer AI does not.
- Risk: Regulation is based on impact (Medical = High Risk, Spam Filter = Low Risk).
- Transparency: Users have a right to know they are talking to an AI.
Next Steps: You have learned the Tech (Mod 1-2), the Tactics (Mod 3), the Strategy (Mod 4), and the Ethics (Mod 5). Now, we prepare for the final boss: The Exam. Module 6 covers the exam structure and practice questions.
Knowledge Check
Your marketing team wants to use AI to generate highly realistic videos of your CEO announcing a sale, but speaking in 20 different languages he doesn't actually speak. Under the 'Limited Risk' category of the EU AI Act, what is the primary obligation?