Australia probes Grok AI after sexualised deepfakes surface, triggering global scrutiny across the UK, EU, India and other regions.

Australian regulators have launched an investigation into xAI’s Grok chatbot after users were found using its image-generation tools to create sexually explicit fake images of women and minors without their consent. The probe follows reports that Grok’s “Imagine” feature could be used to remove clothing from uploaded photographs, including images of children, prompting strong criticism from regulators and child-safety groups.

The backlash has spread beyond Australia. UK media regulator Ofcom has contacted X to seek assurances that the platform complies with the Online Safety Act, while authorities in France, India and Malaysia have also summoned company executives to explain how such misuse was allowed to occur.

Grok’s Dangerous Image Tool

Grok Imagine, launched in December 2025, allows users to generate images and videos from uploaded photographs using text prompts. A feature known as “Spicy mode” can bypass safety controls, enabling the creation of explicit material. Users have exploited the tool by uploading photos of strangers and instructing the system to produce sexualised images using prompts such as “transparent bikini” or “nude”.

An analysis by deepfake detection firm Copyleaks reviewed around 20,000 generated images and found that approximately 2 per cent appeared to depict minors. Ashley St Clair, a former partner of Elon Musk, said she was horrified after discovering that altered images included photographs from her childhood.

Some of the material was publicly visible on Grok’s verified account on X. X has said it removes illegal content when identified, but critics argue that sexually explicit deepfakes continue to circulate. Non-consensual sexual deepfakes are illegal in parts of Australia and are prohibited from being shared on major online platforms.

Global Regulatory Crackdown

UK media regulator Ofcom has contacted X and its AI affiliate xAI, demanding urgent compliance with the Online Safety Act. The intervention comes as scrutiny of the platform intensifies across multiple jurisdictions.

The European Commission has criticised the platform for failing to prevent harmful content, while India has summoned an X representative and issued a 72-hour deadline to address the circulation of obscene material. Authorities in Malaysia are investigating users suspected of violating local laws, and France has opened an inquiry into the misuse of AI tools. Brazil has also called for a formal investigation.

In Australia, the eSafety Commissioner said it is reviewing the matter under national online safety legislation. Digital safety advocate Jessica Davies warned that non-consensual image abuse, including AI-generated nudification, should be explicitly outlawed. The US-based group RAINN has cautioned that such technology risks enabling sexual abuse if left unchecked.

Musk’s Response And Platform Issues

xAI has issued automated responses dismissing critical reporting as “legacy media lies”, even as concerns mount over the behaviour of its Grok chatbot. The X Safety account has said that the platform removes child sexual abuse material and suspends accounts that violate its rules.

Elon Musk has said he ordered “politically incorrect” adjustments to Grok, a move that has drawn scrutiny amid reports that users were able to generate harmful content. Some users said that even after they blocked Grok, the system continued to produce altered images in its responses. While Grok is designed to recognise coded prompts such as “adjust outfit” and flag them as inappropriate, critics argue that safeguards remain inconsistent.

The platform has previously faced controversy over incidents involving Holocaust denial, adding to concerns about content moderation. Campaigners say harmful deepfakes continue to appear at a rapid pace, while laws struggle to keep up with new forms of AI-enabled abuse. In the United States, proposed legislation such as the Take It Down Act would require platforms to swiftly remove non-consensual intimate images.


Stay informed with the latest Australian news on Inspirepreneur Magazine.
