
·AI Security
Module 10 Lesson 1: RAG Context Poisoning
The knowledge base is the weapon. Learn how attackers inject malicious 'facts' into RAG systems to influence AI responses from the inside.
5 articles

The knowledge base is the weapon. Learn how attackers inject malicious 'facts' into RAG systems to influence AI responses from the inside.

The trojan horse. Learn how attackers embed prompt injection payloads inside legitimate-looking documents to hijack RAG sessions during retrieval.

Need-to-know AI. Learn how to implement Document-level Access Control (ACLs) to prevent an AI from accidentally leaking sensitive data to unauthorized users.

Data bridge security. Learn how to secure LlamaIndex data loaders, prevent context poisoning, and implement private data connectors.

Know your vectors. Learn the difference between a user attacking their own session (Direct) and an attacker poisoning external data (Indirect).