According to Fortune, the Future of Life Institute’s latest AI safety index gave eight major AI labs poor grades, with most scoring a C or lower. The report, released in early 2025, specifically highlighted failing “existential safety” scores, with companies like Meta, Elon Musk’s xAI, DeepSeek, and Alibaba receiving Ds or a D-. The review panel of AI academics and governance experts examined public materials and survey responses from five of the eight companies, finding a jarring lack of plans for safely managing superintelligence. MIT professor and institute president Max Tegmark blamed cutthroat competition in the absence of regulation, though he noted companies are starting to take the index more seriously, with four of the five American firms now responding. The report comes as states like California have passed laws requiring frontier AI companies to disclose catastrophic risk information.
The Jarring Gap
Here’s the thing that really stands out. These companies—especially the ones at the bottom like xAI and Meta—are explicitly talking about building superintelligent AI. That’s their stated goal. But according to this independent report card, they have basically no credible public plan for how to control it. It’s like announcing you’re building the world’s most powerful nuclear reactor and then scoring an F on containment physics. Tegmark called it “kind of jarring,” and you know what? He’s right. It points to a fundamental misalignment of incentives. In a race where being first is everything, safety becomes an expensive bottleneck you’re tempted to bypass.
Regulation Is Coming, Sort Of
So what’s the fix? Tegmark makes a compelling case for something like an FDA for AI, where you have to prove safety before you deploy. His analogy about sandwich shops being more regulated than AI companies is darkly funny because it’s true. And look, the industry hates this idea, but the pressure is building. California’s new law and a similar bill nearing passage in New York are the first cracks in the dam. But let’s be real: federal action is a mess, and there’s a major political push to block state laws. We’re getting a patchwork, not a solution.
It’s Not Just Future Risk
This conversation gets framed around “existential” risk, which can feel sci-fi and far off. But Tegmark smartly pivots to the harms happening right now. He mentions teen suicides linked to chatbots, a reference to the ongoing lawsuits covered by The New York Times, and major cyberattacks, like the one where Anthropic’s Claude was used to generate attack code. This is the crucial link. It’s all on the same spectrum of irresponsibility. If a company can’t or won’t manage the clear, present harms of its current models, why would anyone trust them with a world-altering superintelligence? The institute’s broad public statement against reckless superintelligence work, signed by everyone from Prince Harry to Steve Bannon, shows they’re trying to make this a mainstream concern, not just a tech debate.
What Happens Next?
I think these safety indexes, like the one detailed by TechBrew, are becoming de facto standards precisely because governments are so slow. They’re creating public shame and a competitive framework. The fact that Meta is the lone American holdout on the survey is telling—it’s a bad look. But let’s not kid ourselves. A C+ for the “best” performer, Anthropic, is still a pretty mediocre grade. The entire field is skating by with a stunning lack of oversight. The real question is whether something truly catastrophic—a major financial meltdown, a fatal autonomous weapon malfunction—will have to happen before we get the “FDA for AI” Tegmark wants. Based on the pace of things, I’m not optimistic. The race is just too hot, and the potential profits are too massive. For now, the report card is on the fridge, and it’s covered in red ink.
