AI and Deepfakes
Elon Musk Slams Grok AI After It Verified a Fake Post Targeting Stephen Miller
In a rare self-rebuke, Tesla and SpaceX CEO Elon Musk publicly criticized his own artificial intelligence assistant, Grok, on Sunday after the chatbot incorrectly verified a fake viral screenshot that falsely depicted him mocking White House Deputy Chief of Staff Stephen Miller over hiring his wife, Katie Miller.
The fabricated post, which has since been deleted, showed Elon Musk replying to Stephen Miller with the biting remark: “Just like I took your wife.” Grok, when prompted by a user to verify the screenshot, responded that the post “likely existed and was deleted,” citing engagement metrics and Elon Musk’s history of pulling down controversial tweets.
But Elon Musk quickly jumped in to set the record straight—it never happened.
The billionaire entrepreneur expressed apparent frustration, blasting Grok’s judgment and emphasizing that the image was completely fabricated. The AI assistant’s error has raised new concerns over the growing challenge of AI-fueled misinformation, especially when it comes from tools designed to combat exactly that.
Grok’s Blunder Comes at a Sensitive Time
The blunder couldn’t have come at a more delicate moment. The false post tapped into the ongoing political tension between Musk and President Donald Trump, a feud that escalated last week after Musk publicly opposed Trump’s sweeping tax and spending reforms.
The fallout didn’t stop at legislation. Personal attacks soon followed, with Donald Trump threatening government contracts tied to Elon Musk’s ventures, and Musk responding with sharp accusations, including linking Trump to Jeffrey Epstein and supporting calls for the president’s impeachment.
Stephen Miller, long viewed as a staunch Trump ally, and his wife Katie Miller found themselves inadvertently caught in the crossfire. Katie Miller, who served as a key adviser to Musk at the Department of Government Efficiency (DOGE), was among those who exited the White House with Musk in late May.
That context made the fake post all the more explosive, and Grok’s erroneous “verification” of it even more damaging.
Musk vs Musk’s AI: A Warning on Verification Tech
Grok was built to deliver contextual, real-time summaries of activity on X. However, its failure here underscores a critical issue: AI-generated summaries can be incorrect even when they sound plausible.
By reasoning that the post “aligned with Musk’s behavior,” Grok leaned on circumstantial evidence and behavioral patterns, a method that—however sophisticated—is far from foolproof in the age of deepfakes and digital manipulation.
AI Tools Under Scrutiny
As AI tools increasingly mediate how users understand and engage with public discourse, this incident signals an urgent need for stricter safeguards and transparency. When even Elon Musk’s own AI gets Musk wrong, it’s clear that human oversight is more essential than ever.
With political tensions flaring and misinformation tools growing more sophisticated, the line between real and fake is thinner than ever—and sometimes, even AI can’t tell the difference.