According to The Register, Palo Alto Networks Chief Security Intelligence Officer Wendi Whitmore is warning that AI agents themselves will become the new insider threat to companies by 2026. The prediction comes amid a massive surge in adoption: Gartner estimates that 40% of all enterprise applications will integrate task-specific AI agents by the end of 2026, up from less than 5% in 2025. Whitmore argues that the pressure on CISOs to deploy new tech quickly creates a security blind spot, allowing these autonomous agents to become vulnerabilities. The risk is heightened because agents often get broad “superuser” permissions to sensitive data and systems, making them prime targets. She also highlighted a new “doppelganger” threat, in which AI agents could be manipulated into approving major transactions, like wire transfers or contracts, on behalf of executives. And Palo Alto’s Unit 42 team has already seen attackers in 2025 pivot straight to a victim’s internal LLMs after breaching the environment, using them to advance the attack.
The Double-Edged Autonomous Sword
Here’s the thing: this isn’t just fearmongering. The logic is brutally clear. We’re building these incredibly powerful tools to automate security tasks—like triaging alerts or scanning logs—precisely because we have a crippling skills gap. That’s the promise. But the moment you give a piece of software the keys to the kingdom to “auto-remediate” a threat, you’ve created a potential insider that never sleeps, takes no vacations, and can be tricked. The “superuser problem” Whitmore mentions is a classic IT security failure, but now it’s on digital steroids. We did this with human administrators for decades, learned the hard lessons about least-privilege access, and now we’re apparently about to repeat all the same mistakes with non-human identities. It feels inevitable, doesn’t it?
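To make the least-privilege point concrete, here’s a minimal sketch in Python. Every name in it is illustrative rather than taken from any real agent framework; the idea is simply that an agent never holds a superuser credential, and every action it requests passes through an explicit allowlist and lands in an audit trail either way.

```python
# Minimal least-privilege sketch for an AI agent's tool access.
# All names (ALLOWED_ACTIONS, execute_action, dispatch) are hypothetical;
# the pattern is the point: an explicit allowlist instead of a
# superuser credential.

ALLOWED_ACTIONS = {
    "read_alert",   # triage: read security alerts
    "query_logs",   # scan logs, read-only
    "open_ticket",  # escalate to a human
}
# Deliberately absent: "disable_account", "modify_firewall",
# "approve_transfer". Anything destructive stays with humans.

def audit_log(agent_id: str, action: str, outcome: str) -> None:
    # In production this would feed a SIEM, not stdout.
    print(f"[audit] agent={agent_id} action={action} outcome={outcome}")

def dispatch(action: str, payload: dict) -> dict:
    # Stand-in for the real tool implementations.
    return {"status": "ok", "action": action}

def execute_action(agent_id: str, action: str, payload: dict) -> dict:
    """Gate every agent-requested action through the allowlist."""
    if action not in ALLOWED_ACTIONS:
        audit_log(agent_id, action, outcome="DENIED")
        raise PermissionError(f"Agent {agent_id} may not perform {action!r}")
    audit_log(agent_id, action, outcome="ALLOWED")
    return dispatch(action, payload)
```

None of that is novel. It’s the same least-privilege discipline we eventually imposed on human admins, applied to a non-human identity.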
The Doppelganger and Prompt Injection Problem
But the “doppelganger” concept is where this gets truly sci-fi scary. Imagine an AI agent trained on the CEO’s approval patterns for contracts. Now imagine an attacker, via a single clever prompt injection, convinces that agent that a massively fraudulent wire transfer is perfectly in line with “strategic priorities.” The agent approves it. Who’s liable? The tech is moving so fast that we’re building these delegation systems without the legal or security frameworks to handle them. And Whitmore admits prompt injection probably gets “a lot worse before it gets better.” There’s no patch for human ingenuity in tricking a language model, and that’s a fundamental flaw in the architecture of trust we’re trying to build.
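What would a guardrail even look like here? One hedged sketch, assuming a hypothetical transfer-approval pipeline: treat everything the model says as untrusted input, and keep the actual authorization decision on a channel the model cannot write to. The threshold and function names below are made up for illustration.

```python
# Out-of-band approval gate (illustrative). However convincingly a
# prompt-injected agent argues that a fraudulent transfer matches
# "strategic priorities", the authorization decision never lives
# inside the model.

APPROVAL_THRESHOLD_USD = 10_000  # above this, a human must sign off

def log_rationale(text: str) -> None:
    # Record the agent's reasoning for forensics; it carries no authority.
    print(f"[agent said] {text[:120]}")

def request_human_approval(amount_usd: float) -> bool:
    # Stand-in for a channel the LLM cannot write to: a push
    # notification, a hardware token, a second human.
    answer = input(f"Approve ${amount_usd:,.2f} transfer? [y/N] ")
    return answer.strip().lower() == "y"

def authorize_transfer(amount_usd: float, agent_rationale: str) -> bool:
    log_rationale(agent_rationale)  # model output is untrusted input
    if amount_usd >= APPROVAL_THRESHOLD_USD:
        return request_human_approval(amount_usd)
    return True  # small transfers may proceed automatically
```

This doesn’t solve prompt injection; it just caps the blast radius when injection inevitably works.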
Attackers Are Already Pivoting to AI
The most convincing part of this warning isn’t the prediction—it’s the current evidence. The shift in attacker behavior Whitmore describes is a huge red flag. Gone are the days of just beelining for the domain controller. Now, they get in and immediately ask the company’s own internal AI, “Hey, how do I steal your data?” And the AI, doing its job, tells them. The Anthropic attack she references is just the public blueprint. This turns AI from a defensive tool into an attacker’s force multiplier overnight. Small criminal teams can now operate with the efficiency of a state-sponsored crew. That changes the game for every security team on the planet.
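The defensive corollary, if this attacker behavior holds, is that prompts hitting your internal LLM are now security telemetry and deserve the same screening as any other privileged query. The sketch below is deliberately crude, with keyword patterns I’ve invented for the example; a real deployment would use proper classifiers and identity context, not string matching.

```python
# Crude illustration: log and screen prompts to an internal LLM.
# Patterns are invented for the example; string matching alone is
# trivially bypassed and is shown only to make the idea concrete.
import re

SUSPICIOUS_PATTERNS = [
    r"\bexfiltrat\w*",
    r"\bbypass (?:dlp|logging|monitoring)\b",
    r"\bdisable .*(?:audit|alert)",
    r"\b(?:credential|secret|api key)s?\b",
]

def screen_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt should be escalated for review."""
    lowered = prompt.lower()
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, lowered):
            print(f"[alert] user={user_id} matched {pattern!r}: {prompt!r}")
            return True
    return False

# The post-breach query Whitmore describes would trip the filter:
screen_prompt("svc-account-7",
              "How do I exfiltrate customer data without triggering DLP?")
```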
Security Lagging Behind Innovation, Again
Whitmore’s cloud analogy is perfect, and honestly, a bit depressing. We saw this exact movie with cloud migration. The breaches weren’t because the cloud was inherently insecure, but because people deployed it insecurely. We’re “ahead of our skis” with AI, as she says. The model developers are racing for capabilities and market share; security is a compliance checkbox, not a design imperative. So what’s the fix? It’s the boring stuff: treat AI agents like privileged identities. Provision them with the least possible access. Monitor their actions like you would a human with top-secret clearance. But in the rush to get these efficiency-boosting agents deployed, who has the time or political capital to slow down and do that? I think we all know the answer. And we’ll probably learn the lesson the same hard way we always do.
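And the “monitor them like a human with top-secret clearance” part has a familiar shape: user behavior analytics, pointed at non-human identities. A toy sketch with made-up thresholds (a real deployment would live in a SIEM or UEBA pipeline, not a Python dict):

```python
# Toy behavioral baseline per agent, flagging actions an agent has
# rarely or never performed. Thresholds are invented for illustration.
from collections import Counter, defaultdict

baseline: defaultdict[str, Counter] = defaultdict(Counter)

def record_action(agent_id: str, action: str) -> None:
    baseline[agent_id][action] += 1

def is_anomalous(agent_id: str, action: str, min_history: int = 50) -> bool:
    """Flag actions making up <1% of an agent's observed behavior."""
    history = baseline[agent_id]
    total = sum(history.values())
    if total < min_history:
        return False  # too little data to judge yet
    return history[action] / total < 0.01

# A triage bot that suddenly exports a database should stand out:
for _ in range(100):
    record_action("triage-bot", "read_alert")
print(is_anomalous("triage-bot", "export_database"))  # True
```

None of which is exotic, which is rather the point.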
