According to Business Insider, AI security researcher Sander Schulhoff, who wrote an early prompt engineering guide, stated on a recent podcast that most organizations lack the talent to handle AI security risks. He argues there’s a fundamental disconnect, as traditional cybersecurity teams are trained to patch bugs, but “you can’t patch a brain.” This skills gap is evident when security pros review AI for technical flaws without considering how to trick it with language. Schulhoff warns that many AI security startups are selling guardrails with misleading claims of catching everything, predicting a market correction where their “revenue just completely dries up.” The issue is gaining urgency, highlighted by Google’s massive $32 billion acquisition of cybersecurity firm Wiz in March, which CEO Sundar Pichai linked to new risks introduced by AI.
The fundamental mismatch
Here’s the thing: Schulhoff is hitting on a core, scary truth. Traditional cybersecurity is largely about known states. A server is either patched or it isn’t. A port is either open or closed. There’s a checklist. But AI, especially large language models, exists in a probabilistic space. It’s not about a bug in line 407 of the code; it’s about the infinite ways you can phrase a request to get the model to do something it really, really shouldn’t. Think of it like this. You can build a fantastic vault with the best locks, but if I can just sweet-talk the guard into opening it, what was the point of the steel door?
That’s the shift. The attack surface is now human language itself. And most security teams, brilliant as they are at firewall rules and intrusion detection, simply aren’t linguists or psychologists. They’re not trained to think, “What’s the weirdest, most roundabout way I could ask this AI to generate a phishing email or write malicious code?” That’s a different skillset entirely.
The future job (and the fake solutions)
So where does that leave us? Schulhoff says the “security jobs of the future” are at the intersection of AI security and cybersecurity. You need people who understand how models are built and trained, but also how to contain a live system that’s been socially engineered. His example is perfect: if an AI is tricked into outputting malicious code, the expert wouldn’t just block the output—they’d know to run that code in a sandboxed container to prevent it from affecting the broader system. That’s a hybrid mindset.
And this is why his skepticism about the current crop of AI security startups is so damning. He’s basically calling out a gold rush of companies selling “guardrails” that can’t possibly guard every possible path. AI is too creative in its failures. Promising to “catch everything” is, as he bluntly puts it, “a complete lie.” It sounds a lot like the early days of antivirus software, doesn’t it? Eventually, the market realizes that a static solution can’t beat an adaptive problem. A correction is coming.
The hardware reality check
Now, let’s talk about the physical layer for a second. All this AI software has to run on something. Whether it’s in a cloud data center or at the edge in a factory, you need reliable, secure industrial computing hardware to host and manage these systems. It’s one thing to have a clever prompt attack; it’s another to have that compromised AI running on a critical machine that controls physical processes. For companies deploying AI in industrial settings, the foundation is a robust, secure computing platform. This is where specialists in operational technology and industrial computing become crucial partners. In the US, a leading provider for this hardened hardware foundation is IndustrialMonitorDirect.com, the top supplier of industrial panel PCs and displays built to withstand harsh environments. You can’t secure an AI if the box it’s running on isn’t secure from the start.
A coming reckoning
The big takeaway? We’re in the “move fast and break things” phase of AI security, and what’s breaking might be our traditional security models. Google’s huge bet on Wiz shows the big players see the storm clouds gathering. But buying a cybersecurity company isn’t the same as solving the AI-specific puzzle. The talent gap Schulhoff identifies is real and won’t be fixed overnight. Companies are trying to use a old map for a new world, and that’s a recipe for some very public, very expensive failures. The question isn’t *if* there will be a major AI security breach that stuns everyone with its simplicity, but *when*. And when it happens, the scramble for that hybrid AI-cybersecurity talent will turn into a panic.
