In May 2025, the NSA, CISA, and FBI issued a joint bulletin, produced in cooperation with government agencies from Australia, New Zealand, and the United Kingdom, confirming that adversarial actors are poisoning AI systems across sectors by corrupting the data that trains them. The models still function, just no longer in alignment with reality.
For CISOs, this marks a shift as significant as cloud adoption or the rise of ransomware. The perimeter has moved again, this time into the training data and pipelines that shape large language models (LLMs) and other AI systems. The bulletin’s guidance on countering data poisoning deserves every CISO’s attention.
AI poisoning shifts the enterprise attack surface
In traditional security frameworks, the goal is often binary: deny access, detect intrusion, restore function. But AI doesn’t break in obvious ways. It distorts. Poisoned training data can reshape how a system labels financial transactions, interprets medical scans, or filters content, all without triggering alerts. Even well-calibrated models can learn subtle falsehoods if tainted information is introduced upstream.
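To make that failure mode concrete, the sketch below shows a simple label-flipping poisoning attack on a synthetic fraud-detection dataset. It is an illustrative assumption, not an example from the bulletin: the dataset, model choice, and flip rates are invented, and real poisoning campaigns are far subtler. The point it demonstrates is the one above: overall accuracy can stay high while the model quietly stops catching the cases that matter.

```python
# Illustrative sketch (assumed scenario, not from the bulletin): flipping a
# fraction of "fraud" labels upstream of training and observing the effect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "transaction" data: class 1 = fraudulent, class 0 = legitimate.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def poison_labels(y, flip_fraction, rng):
    """Flip a fraction of 'fraud' labels to 'legitimate', mimicking tainted
    training data introduced upstream before the model ever sees it."""
    y_poisoned = y.copy()
    fraud_idx = np.where(y == 1)[0]
    n_flip = int(len(fraud_idx) * flip_fraction)
    flipped = rng.choice(fraud_idx, size=n_flip, replace=False)
    y_poisoned[flipped] = 0
    return y_poisoned

rng = np.random.default_rng(0)
for flip_fraction in (0.0, 0.2, 0.4):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, flip_fraction, rng))
    preds = model.predict(X_test)
    # Overall accuracy typically barely moves, while recall on the fraud
    # class erodes: the model still "functions" but mislabels what matters.
    fraud_recall = (preds[y_test == 1] == 1).mean()
    print(f"flip={flip_fraction:.0%}  "
          f"accuracy={accuracy_score(y_test, preds):.3f}  "
          f"fraud recall={fraud_recall:.3f}")
```

No dashboard watching aggregate accuracy would flag the poisoned runs; only a metric focused on the targeted behavior reveals the drift, which is exactly why poisoning evades controls built to detect outright failure.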