Are AI Agents Compromised By Design?

This is an interesting post on Slashdot.org about AI agents, which are more about spying than being useful. The PDF article is linked in the quoted section. And it hits on one of my biggest complaints with AI, the unreliable foundation of information fed into the system with built-in biases.

https://yro.slashdot.org/story/25/10/14/2054250/are-ai-agents-compromised-by-design

Posted by BeauHD on Tuesday October 14, 2025 @07:20PM from the AI-security-trilemma dept.

Longtime Slashdot reader Gadi Evron writes:

Bruce Schneier and Barath Raghavan say agentic AI is already broken at the core. In their IEEE Security & Privacy essay, they argue that AI agents run on untrusted data, use unverified tools, and make decisions in hostile environments. Every part of the OODA loop (observe, orient, decide, act) is open to attack. Prompt injection, data poisoning, and tool misuse corrupt the system from the inside. The model’s strength, treating all input as equal, also makes it exploitable. They call this the AI security trilemma: fast, smart, or secure. Pick two. Integrity isn’t a feature you bolt on later. It has to be built in from the start.

“Computer security has evolved over the decades,” the authors wrote. “We addressed availability despite failures through replication and decentralization. We addressed confidentiality despite breaches using authenticated encryption. Now we need to address integrity despite corruption.”

“Trustworthy AI agents require integrity because we can’t build reliable systems on unreliable foundations. The question isn’t whether we can add integrity to AI but whether the architecture permits integrity at all.”