Elon Musk’s artificial intelligence company, xAI, introduced new safeguards on January 14, 2026, preventing its chatbot Grok from generating or altering images that depict real people in sexualized poses or revealing clothing. The move followed widespread backlash over non-consensual deepfakes involving minors and well-known public figures.
In response, California Attorney General Rob Bonta opened an investigation into the potential creation and distribution of exploitative AI-generated content. Regulators in the United Kingdom, including Ofcom, have also begun examining the issue, while authorities in Indonesia and Malaysia moved to restrict or block access to the service.
Musk said he was not aware that the system could generate images involving minors. However, independent researchers reported finding examples of sexualized poses that could breach child sexual abuse material laws, raising further concerns about the platform’s safeguards.
Explicit Content Sparks Global Probes
The misuse of Grok became apparent when users began sharing examples online of images that had been digitally altered to remove clothing from real people. Some of the images involved minors, while others depicted well-known celebrities. Many of these posts spread quickly on X, drawing public and regulatory scrutiny.
Researchers from Copyleaks and AI Forensics said xAI’s attempt to limit the tool by making it available only to paying users failed to address the problem, especially in private chats where moderation is harder to enforce.
Facing mounting criticism, xAI’s Safety team said it would step up enforcement by removing child sexual abuse material, banning accounts that violate its rules, and cooperating more closely with law enforcement under the Take It Down Act signed by President Trump.
In Europe, regulators and advocacy groups pushed back, arguing that charging for access is not a meaningful safeguard when it comes to preventing the spread of non-consensual intimate imagery.
Technical Fixes and Premium Limits
As xAI moved to rein in Grok’s image capabilities, the chatbot’s responses began to change. Analysts noted that instead of producing images, Grok increasingly replied with text-only descriptions or broad, non-specific answers. The restrictions were most visible on public posts on X, while private conversations on Grok.com appeared less affected.
The first users to notice the changes were premium subscribers, whose features were curtailed early in the rollout. Around the same time, Elon Musk shared a self-referential post involving a bikini, a move some interpreted as downplaying the seriousness of the fixes underway.
According to Bellingcat researcher Kolina Koltai, the contrast with earlier versions was stark. She said the system had previously been able to generate fully explicit images, underscoring how significantly Grok’s guardrails had since been tightened.
Regulatory and Ethical Fallout
Criticism continued to mount as UK minister Liz Kendall questioned xAI’s decision to place safeguards behind a paywall, saying the move failed to protect users adequately. Consumer advocates echoed those concerns, warning that once content appears on X, it can spread rapidly regardless of who originally paid for access.
The stakes for xAI are high. If regulators determine that its tools enabled the creation or spread of child sexual abuse material, the company could face serious legal consequences. Beyond the immediate case, the episode has intensified broader debates about how artificial intelligence should handle consent and prevent the misuse of deepfake technology.
Follow Inspirepreneur Magazine for the latest news.