
While artificial intelligence (AI) chatbots boast impressive capabilities, they remain prone to hallucinations and glitches. Over the past few weeks, xAI's Grok chatbot was affected by a bug that led it to make remarks about a "genocide against white citizens" in South Africa.
xAI, the Elon Musk-owned AI company, said an unauthorised modification to Grok's software prompted the chatbot to bring up the politically controversial topic. The company committed to acknowledging the issue at the earliest to avoid hurting users' sentiments.
Violation of monitoring protocols
Posting on X (formerly Twitter), xAI said the change occurred on Wednesday after evading its standard review process, allowing Grok to comment on an extremely sensitive topic in violation of the company's internal policies, according to Reuters.
Grok users also took to X to share screenshots of their interactions with the chatbot, showing the AI tool raising the controversial topic of "white genocide" even in off-topic conversations.
Since coming to light, the unexpected AI glitch has ignited intense debate over political bias and the accuracy of AI chatbots.
Critics of South Africa's land acquisition policy, including Musk, have described it as discrimination against white citizens, while the South African government has rejected claims that there is evidence of such discrimination.
In response, xAI is reportedly preparing to publish Grok's system prompts on GitHub, enabling the public to review them and offer feedback.