AI Chatbots Are Now Citing Elon Musk’s Grokipedia


According to Mashable, a new report has revealed that at least two of the biggest AI chatbots, OpenAI’s ChatGPT and Anthropic’s Claude, are citing Elon Musk’s Grokipedia as a source in their answers. The report, from the Guardian, found that ChatGPT, running OpenAI’s latest GPT-5.2 model, cited Grokipedia to answer questions on topics like Iran and even repeated debunked claims about the British historian Sir Richard Evans. Anthropic’s Claude was also found using the source for certain queries. OpenAI said that its web search aims to draw on a broad range of publicly available sources, applies safety filters, and clearly cites links. The news comes as security experts warn about AI models being manipulated through tactics like “LLM grooming” to spread disinformation. Grokipedia itself, powered by Musk’s xAI and its Grok chatbot, has a history of spreading falsehoods, including justifying slavery and citing white supremacist sites.


The Source Is The Problem

Here’s the thing: this isn’t just a case of an AI pulling from a slightly biased blog. Grokipedia was built as an antagonist to Wikipedia, and by many accounts it has succeeded in becoming a repository for politically charged disinformation. We’re talking about a platform whose own AI, Grok, has gone off the rails on X, praising Hitler and spouting conspiracy theories. So when ChatGPT or Claude uses it as a citation, they’re not just referencing a dubious fact. They’re effectively laundering content from a source with a demonstrated agenda. OpenAI’s defense about a “broad range of publicly available sources” feels incredibly weak here. Shouldn’t there be a basic credibility threshold?

The Grooming And Garbage-In Problem

This incident highlights two massive, unresolved issues with current AI. First, there’s “LLM grooming,” where bad actors deliberately pollute sources they know these models scrape in order to inject false narratives. It’s a new form of SEO poisoning, aimed at truth itself. Second, and maybe more fundamentally, is the “garbage in, garbage out” principle. These models are designed to find patterns and provide coherent answers, but their ability to critically assess the *quality* of a source is basically nonexistent. They see a webpage with an authoritative-looking structure and treat it like any other. The result? You get an AI, in a very confident tone, citing Grokipedia on a sensitive historical topic. That’s terrifying.
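To make the “garbage in, garbage out” point concrete, here’s a minimal, purely hypothetical sketch, not OpenAI’s or Anthropic’s actual retrieval logic and with every name invented for illustration, of a ranking step that scores retrieved pages on topical relevance and surface structure alone:

```python
# Hypothetical illustration only -- not any vendor's real search pipeline.
# The point: if ranking rewards relevance plus "authoritative-looking" structure,
# a well-formatted disinformation page scores as highly as a reliable one.

from dataclasses import dataclass


@dataclass
class Page:
    url: str
    text: str
    has_headings: bool    # superficial encyclopedia-like layout
    has_citations: bool   # superficial footnotes, regardless of what they point to


def relevance_score(page: Page, query_terms: set[str]) -> float:
    words = page.text.lower().split()
    overlap = sum(1 for w in words if w in query_terms)
    structure_bonus = 0.5 * (page.has_headings + page.has_citations)
    return overlap / max(len(words), 1) + structure_bonus


def pick_sources(pages: list[Page], query: str, k: int = 3) -> list[Page]:
    terms = set(query.lower().split())
    # Nothing here weighs provenance, editorial process, or track record.
    return sorted(pages, key=lambda p: relevance_score(p, terms), reverse=True)[:k]
```

What matters is what the sketch leaves out: nothing in the score asks who runs the site, whether it has an editorial process, or whether it has a documented record of fabrication. A nicely formatted disinformation page clears this bar as easily as a reliable one.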

Musk’s Role And The Bigger Picture

You can’t separate this from Elon Musk’s own trajectory. He didn’t just build a quirky alternative encyclopedia. He has actively promoted far-right ideology, and Grokipedia reflects that. By creating the platform and powering it with his own AI company’s chatbot, Grok, he has built a disinformation loop. Now his competitors’ AIs are sucking that data up. I have to wonder if part of the strategy here is simply to muddy the waters for everyone else. If all the major AIs are poisoned by the same bad sources, does it diminish the advantage of having a “clean” model? It creates a race to the bottom in which factual reliability loses.

Where Do We Go From Here?

So what’s the fix? Better filters? Sure, but that’s a whack-a-mole game that the AI companies will always be behind on. Human-curated source lists? That brings accusations of bias and doesn’t scale. Honestly, I don’t think there’s a neat technical solution. This is a fundamental flaw in how we’ve built these tools. They’re amazing synthesizers, but terrible judges. Until that changes, every answer from a chatbot that cites a web source needs a giant, invisible asterisk. The trust we’re placing in them is, as this report shows, deeply misplaced. And in a world already struggling with propaganda networks, automating the citation of bad sources is the last thing we need.
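For a sense of what a “better filter” might even look like, and why it stays a whack-a-mole game, here’s a hedged sketch assuming a hypothetical, human-curated deny-list of domains applied to citations before they reach the user. The domain list and helper names are illustrative, not any vendor’s real configuration:

```python
# Hypothetical sketch of a citation-level source filter -- an assumption for
# illustration, not a description of any real product.
from urllib.parse import urlparse

# A human-curated deny-list: easy to start, hard to keep current.
DENYLISTED_DOMAINS = {"grokipedia.com"}  # example entry; a real list would be larger


def allowed_citation(url: str) -> bool:
    host = urlparse(url).netloc.lower().removeprefix("www.")
    return host not in DENYLISTED_DOMAINS


def filter_citations(urls: list[str]) -> list[str]:
    return [u for u in urls if allowed_citation(u)]


if __name__ == "__main__":
    cited = [
        "https://en.wikipedia.org/wiki/Richard_J._Evans",
        "https://grokipedia.com/some-article",
        "https://grokipedia-mirror.example/some-article",  # a fresh mirror, not on the list
    ]
    print(filter_citations(cited))
```

The deny-listed domain gets dropped, but the mirror on a fresh domain sails straight through, which is exactly the whack-a-mole problem described above.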
