According to the Financial Times, European officials labored through a grueling 36-hour negotiation session in December 2023 to finalize the world’s first comprehensive AI legislation. The AI Act was designed to use Europe’s economic power to enforce “trustworthy AI” through a risk-based approach, banning harmful uses while lightly regulating low-risk systems. But nearly two years later, the European Commission has postponed key parts of the legislation, marking the first formal acknowledgment that Brussels is struggling with its own rules. ChatGPT’s explosive arrival in late 2022 prompted a rushed expansion of the legislation’s scope to cover general-purpose AI models, with negotiators feeling pressured to regulate the technology before the European parliamentary elections in June 2024. Companies now face massive uncertainty about compliance requirements, while officials grapple with whether the EU moved too far, too fast with regulation.
The ChatGPT effect
Here’s the thing about the AI Act – it wasn’t originally designed to regulate models like ChatGPT at all. The initial draft from the European Commission had zero references to large language models, which were still seen as experimental. But when ChatGPT exploded onto the scene in late 2022, everything changed. Negotiators suddenly felt enormous pressure to include these general-purpose AI models, with one participant asking “What are we even doing here if we exit this room without regulating ChatGPT?”
Then came the open letter from the Future of Life Institute in March 2023 calling for a six-month pause on AI development, signed by Elon Musk, Steve Wozniak, and other prominent figures. Gabriele Mazzini, who helped draft the AI Act, says this “skewed completely the political conversation” toward existential risks rather than practical governance. Combine that with the approaching European parliamentary elections, and you had a perfect storm where, as Mazzini puts it, “common sense seemed to have been lost.”
Why companies are struggling
The AI Act has technically been in force since August 2024, but implementation has been… messy, to put it mildly. The legislation left countless details to be determined through future guidelines and standards, creating massive uncertainty for businesses. Even large digital companies need time to prepare for compliance, yet the act provides little clarity on what exactly they need to do.
Lawyer Patrick Van Eecke points out a fundamental flaw: the EU treated AI as a static product rather than a dynamic process. An elevator does the same thing today as it will in 20 years, but AI evolves constantly. “This makes it impossible to apply hard-coded requirements,” he says. The result? Larger companies can absorb the compliance costs while startups get buried under regulatory burdens. Alexandru Voica from AI startup Synthesia puts it bluntly: “A lot of these scale-ups will never be able to reach the same sort of size and impact as American or Chinese companies.”
Bigger than Europe
So why does this matter beyond Europe’s borders? The AI Act was supposed to be the gold standard – the world’s first attempt to regulate a technology that could transform every sector of the global economy. If Brussels waters down its legislation beyond relevance, who else will step up to create guardrails?
The timing is particularly ironic. Just as the EU struggles with its AI rules, the global conversation has shifted dramatically from fear-mongering about existential risks to a straight-up race for AI dominance between Washington and Beijing. Europe wanted to be an “AI continent,” but it’s struggling to develop its ecosystem and attract the investment needed to compete with the global superpowers. After failing to lead on other technologies, this was Europe’s chance to set the standard – but instead, we’re watching a case study in regulatory overreach.
The path forward
The backlash has already forced the European Commission to change its tune. Under Ursula von der Leyen’s second mandate, boosting competitiveness has become the priority. Large companies including Airbus, BNP Paribas, and Mercedes-Benz have urged the commission to pause the AI Act’s timeline for two years, arguing that businesses need simpler rules and more time to implement them.
Even Mazzini, one of the architects of the act, now concedes it’s too broad and complex and “doesn’t provide the legal certainty that is needed.” The question isn’t whether the AI Act needs fixing – everyone agrees it does. The real question is whether Europe can course-correct quickly enough to avoid missing the AI boat entirely while the US and China race ahead. Given the EU’s track record with tech regulation, I’m not holding my breath.
