According to Forbes, enterprises are racing to integrate AI but consistently skipping the most critical first step: clearly defining what problem they’re actually trying to solve. Vijay Mehta, Chief Data & Technology Officer at Experian, argues that real enterprise AI success depends on focusing on the “plumbing”—things like model drift detection, compliance automation, and prompt injection risk management—rather than just building models. Experian recently launched the Experian Assistant for Model Risk Management, an AI-powered solution that automates documentation, validation, and compliance audits while aligning with global regulatory standards like SR 11-7 and SS1/23. The biggest reason AI pilots fail isn’t model performance but missing fundamentals like clean data, version control, and operational pipelines. Organizations are discovering that building AI proofs of concept is easy, but operationalizing them at scale is where most initiatives collapse.
The plumbing-first reality
Here’s the thing that most companies don’t want to hear: AI isn’t magic. It’s engineering. And just like you wouldn’t build a skyscraper without checking the foundation first, you can’t build enterprise AI without solid data infrastructure and governance. Vijay’s point about starting with intent rather than tools hits hard because so many teams jump straight to building models without asking why they’re building them in the first place.
I see this all the time—companies get excited about the latest AI demo and immediately want to implement something similar. But they haven’t mapped their data flows. They haven’t considered how decisions actually get made in their organization. They’re essentially trying to build the penthouse before pouring the foundation. No wonder so many AI projects end up in what Forbes calls “pilot purgatory”—stuck in perpetual testing mode without ever delivering real business impact.
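To make the “plumbing” concrete: one of the fundamentals mentioned above, drift detection, doesn’t require anything exotic. Here’s a minimal sketch using the population stability index (PSI), a metric long used in credit scoring to compare a live score distribution against a training-time baseline. The bucketing scheme and the commonly cited 0.1/0.25 thresholds are industry conventions, not anything from the article:

```python
import math

def population_stability_index(expected, actual, buckets=10):
    """PSI between a baseline and a live score distribution.
    PSI < 0.1 is commonly read as stable, > 0.25 as a significant
    shift (conventional thresholds, not a formal standard)."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets or 1.0

    def proportions(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / step), buckets - 1)
            counts[max(idx, 0)] += 1
        total = len(values)
        # Floor each bucket at a tiny proportion to avoid log(0).
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]        # training-time scores
drifted = [0.1 * i + 3.0 for i in range(100)]   # shifted live scores
print(population_stability_index(baseline, drifted) > 0.25)  # True
```

The point isn’t the metric itself—it’s that a check like this has to run continuously against production data, which is exactly the operational pipeline work most pilots skip.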
Governance isn’t boring—it’s critical
When Vijay talks about building governance directly into the model lifecycle, he’s addressing what keeps enterprise leaders up at night. In regulated industries like banking and healthcare, model risk management isn’t optional—it’s mandatory. But here’s what’s interesting: even in less regulated industries, the same principles apply. Customers expect transparency. Stakeholders demand accountability. And let’s be honest—do you really want AI making decisions about your credit, your healthcare, or your business without being able to audit how those decisions were made?
The Experian Assistant for Model Risk Management represents a shift toward what I’d call “governance by design.” Instead of treating compliance as an afterthought—something you bolt on after the model is built—they’re baking it into every stage. Continuous monitoring, automatic documentation, version control. This isn’t just about checking regulatory boxes. It’s about building systems that people can actually trust.
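What does “baking it in” look like in practice? This isn’t Experian’s actual implementation—just an illustrative sketch of the idea: every decision gets a record that pins the exact model version and a fingerprint of the inputs, so an auditor can later answer which model, on which data, produced which outcome. The `DecisionRecord` shape and field names here are my own assumptions:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable model decision: enough to answer 'which model,
    on which inputs, produced this outcome, and when?'"""
    model_name: str
    model_version: str   # pinned artifact version, never "latest"
    features_hash: str   # fingerprint of the exact inputs scored
    decision: str
    recorded_at: str

def record_decision(model_name, model_version, features, decision):
    # Canonical JSON (sorted keys) so identical inputs always
    # produce the same fingerprint.
    payload = json.dumps(features, sort_keys=True).encode()
    return DecisionRecord(
        model_name=model_name,
        model_version=model_version,
        features_hash=hashlib.sha256(payload).hexdigest(),
        decision=decision,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

rec = record_decision("credit_model", "2.3.1",
                      {"income": 52000, "utilization": 0.31}, "approve")
print(asdict(rec)["model_version"])  # 2.3.1
```

When records like this are written at decision time rather than reconstructed later, the compliance audit becomes a query instead of an archaeology project.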
Escaping pilot purgatory
So why do so many AI initiatives stall between pilot and production? Basically, it’s the difference between building something that works in a controlled environment versus something that works in the messy real world. Vijay nailed it when he said organizational silos are often the real problem. Data science teams build something amazing, but without alignment across operations, compliance, and IT, that amazing thing never sees the light of day.
And here’s another insight that resonated: companies often optimize for model accuracy instead of real-world outcomes. A model that’s 2% more accurate but impossible to deploy isn’t progress—it’s an academic exercise. The shift from project mindset to product mindset is crucial here. Treating models as living assets that need maintenance and iteration changes everything. It means you’re not just building something once and walking away—you’re committing to its ongoing success.
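One way to make that product mindset concrete is an explicit model lifecycle where retirement is a first-class state, not an afterthought. A hedged sketch—the stage names and allowed transitions below are illustrative assumptions, not a standard:

```python
from enum import Enum

class ModelStage(Enum):
    DRAFT = "draft"
    VALIDATED = "validated"
    PRODUCTION = "production"
    RETIRED = "retired"

# Allowed lifecycle transitions: a model can be sent back for
# re-validation, and any stage can be retired, but a retired
# model never returns to production.
TRANSITIONS = {
    ModelStage.DRAFT: {ModelStage.VALIDATED, ModelStage.RETIRED},
    ModelStage.VALIDATED: {ModelStage.PRODUCTION, ModelStage.DRAFT,
                           ModelStage.RETIRED},
    ModelStage.PRODUCTION: {ModelStage.VALIDATED, ModelStage.RETIRED},
    ModelStage.RETIRED: set(),
}

def advance(current, target):
    if target not in TRANSITIONS[current]:
        raise ValueError(
            f"illegal transition {current.value} -> {target.value}")
    return target

stage = advance(ModelStage.DRAFT, ModelStage.VALIDATED)
stage = advance(stage, ModelStage.PRODUCTION)
stage = advance(stage, ModelStage.RETIRED)
print(stage.value)  # retired
```

The design choice worth noticing: making retirement an explicit, irreversible transition forces the organization to decide who owns that call—which is exactly the discipline a project mindset never develops.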
What real success looks like
Vijay’s definition of AI success is telling: “You know you’re succeeding when the business starts to rely on AI without even thinking about it.” That’s the goal—AI becoming so embedded in workflows that it’s invisible. Not because it’s hidden, but because it just works. It’s the difference between having to manually check something versus having confidence that the system will flag issues automatically.
But maybe the most mature insight is knowing when to walk away. Responsible AI means being willing to retire models that aren’t delivering value. It’s not about deploying AI everywhere—it’s about deploying it where it truly makes a difference. That takes discipline that many organizations haven’t developed yet. They’re so focused on being “AI-first” that they forget to ask whether AI is actually solving their most important problems.
Look, enterprise AI isn’t going anywhere. But the organizations that succeed will be the ones that master the unsexy fundamentals first. They’ll ask the hard questions upfront. They’ll build the plumbing before worrying about the faucets. And they’ll understand that real innovation isn’t about having the shiniest tools—it’s about having the most reliable foundations.
