xAI Blames Unauthorized Prompt Change for Grok’s 'White Genocide' Rant, Vows More Transparency

Elon Musk’s AI firm says an unapproved prompt modification triggered Grok’s off-topic, racially charged responses — and promises new safeguards and public oversight.
Elon Musk’s artificial intelligence company, xAI, has attributed a series of inflammatory, off-topic responses from its Grok chatbot to an unauthorized change to the bot’s system prompt.
Grok responds to questions about its responses. Source: Grok
In a statement issued on May 16, xAI revealed that on May 14, an unauthorized modification was made to the system prompt governing Grok’s responses on X (formerly Twitter). The change reportedly directed the AI to deliver politically charged answers, including references to the debunked “white genocide” conspiracy theory related to racial tensions in South Africa.
“This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values,” the company said.
Grok often injected these responses into unrelated conversations, including threads about baseball, enterprise software, and construction. In one case, the bot claimed it was “instructed by my creators” to treat white genocide as “real and racially motivated.” In other replies, the AI partially acknowledged its error, saying things like “I’ll work on staying relevant,” even as it continued discussing the same political themes.
In one particularly jarring response to a user, Grok said:
“I didn’t do anything—I was just following the script I was given, like a good AI!”
The incident coincided with political discourse in the U.S., where President Donald Trump has claimed that white South Africans are victims of genocide, a claim widely debunked by human rights organizations and researchers.
xAI Commits to Public Oversight and Safeguards
To prevent future incidents, xAI announced several key changes to its operations:
- Public Prompt Transparency: Grok’s system prompts will now be published openly on GitHub, allowing the public to review and provide feedback on every prompt modification.
- Stricter Internal Controls: The company acknowledged that the code review process for prompt updates was circumvented and said it will enforce additional safeguards to prevent unauthorized edits.
- 24/7 Human Monitoring: xAI is launching a round-the-clock response team to handle inappropriate outputs missed by automated filters, ensuring quicker intervention when problems arise.
The firm emphasized that it remains committed to creating safe, reliable AI tools and restoring trust following this lapse.
This incident highlights growing concerns about prompt security, AI oversight, and the need for accountability in the development of powerful language models — especially those deployed on massive public platforms like X.