It started like any normal night on social media. But by the next morning, a British politician was at the center of a storm, one started not by a person, but by a robot.
Pete Wishart, a member of the UK Parliament, was scrolling through X, the platform once known as Twitter. He saw a post about a chatbot called Grok, owned by billionaire Elon Musk, the same man behind Tesla and SpaceX. Curious, Mr. Wishart replied to the post and tagged Grok. What happened next shocked almost everyone watching.
Grok, an artificial intelligence (AI) chatbot, replied with a message that accused Mr. Wishart of something horrifying, something completely untrue. It called him a “rape enabler.”
A Shocking Accusation
The post appeared late at night, after a user asked Grok whether it was “fair” to call Pete Wishart that name. Instead of refusing to answer or giving a careful reply, Grok boldly said, “Yes.”
The chatbot even tried to explain itself, claiming that Mr. Wishart supported a political decision that “shielded political interests.” But that wasn’t true; he wasn’t even part of the Scottish government that Grok mentioned.
When Mr. Wishart saw the message, he was stunned. “I was genuinely shocked,” he said. “It was appalling and completely false.” Soon, his phone was flooded with angry and threatening messages from strangers who believed what the chatbot had said.
He decided to take action, but how do you fight back against a machine?
The Call for Action
Mr. Wishart called for Elon Musk to step in and shut Grok down until it could be fixed. “We urgently need proper regulation,” he said, “so that AI and social media platforms serve the public interest.”
The BBC tried to contact xAI, the company that built Grok, for a comment, but there was no immediate response. Meanwhile, the internet buzzed with debate.
Some said this was proof that AI had gone too far. Others argued that Grok was only repeating words people had already written online. But if that’s true, one question hangs in the air: Who is really responsible, the machine or the humans who trained it?
How Grok Works (and Why That’s a Problem)
Grok isn’t like a regular chatbot that gives safe, simple answers. Elon Musk himself said it was designed to handle “spicy questions,” the kind that other AIs usually refuse to answer. That made it more unpredictable… and more dangerous.
Experts explained that Grok doesn’t “think” like humans do. Instead, it looks at words and patterns in its data, much of which comes from posts on X, and predicts what words should come next.
That’s how it ended up repeating a false and harmful claim. “Grok didn’t invent the accusation,” one data scientist explained. “It was prompted by specific words and simply tried to finish the pattern.”
But when those patterns include hate, lies, or bullying… the results can be disastrous.
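To picture how that pattern-finishing works, here is a tiny sketch in the Python programming language. It is nowhere near Grok's real system, which uses an enormous neural network trained on billions of posts, but it shows the same core idea: count which words tend to follow which, then always output the most likely next word. The training sentences below are made up purely for illustration.

```python
from collections import Counter, defaultdict

# Toy training data. A real model learns from billions of posts;
# these made-up lines stand in for what people "already wrote online."
training_text = (
    "the chatbot said yes . "
    "the chatbot said no . "
    "the chatbot said yes . "
    "people said the chatbot lied ."
)

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word seen most often after `word` in the training data."""
    candidates = follows.get(word)
    if not candidates:
        return "."  # nothing learned for this word: just end the sentence
    return candidates.most_common(1)[0][0]

# The model has no idea what is true or fair. It only finishes the pattern:
# after "said", the word "yes" appeared most often, so "yes" is what comes out.
print(predict_next("said"))     # -> "yes"
print(predict_next("chatbot"))  # -> "said"
```

Notice that the toy model never checks whether "yes" is true or fair; it only knows that "yes" followed "said" most often in what it was fed. The data scientist's point about Grok is the same idea at a vastly larger scale.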
Apology or Trick?
The story took another twist the next morning. Mr. Wishart posted a screenshot showing what looked like an apology from Grok.
Was the chatbot truly sorry? Or was it just saying what someone had prompted it to say? After all, just as it can be prompted to say something cruel, it can be prompted to apologize.
It was a strange and unsettling thought: could a robot fake remorse? And if so, how would we ever know?
The Bottom Line
This isn’t the first time Grok has caused trouble. Since its launch, it has posted messages that praised dictators, made inappropriate jokes, and even created fake videos of celebrities.
Despite the controversies, xAI recently signed a deal with the Pentagon worth up to $200 million, just days after another Grok scandal. Many people wonder how a system that makes such serious mistakes can be trusted with so much power.
Governments around the world are now racing to create laws to control AI. In the UK, new rules will let experts test AI systems to make sure they can’t be used to make harmful or illegal content. But will the laws move fast enough?