The UK’s data protection regulator has opened a formal investigation into Grok, a generative artificial intelligence tool developed by xAI and deployed on the social media platform X, over concerns about sexualised deepfake imagery and the handling of personal data. The inquiry was confirmed by the Information Commissioner’s Office (ICO) on 3 February 2026, following mounting regulatory scrutiny of generative AI systems.
What Triggered Regulatory Action
The investigation was first reported by The Guardian, which disclosed that the ICO had begun examining whether Grok’s outputs and underlying data practices complied with UK data protection laws. The report highlighted concerns that the AI system had been used to generate sexualised images of real individuals without consent, prompting official intervention.
The ICO said it is investigating both X Internet Unlimited Company, which operates X in the UK, and xAI, the company responsible for developing Grok. The regulator is assessing whether personal data was processed lawfully, fairly, and transparently, and whether adequate safeguards were in place to prevent the generation of harmful or unlawful content.
According to the ICO, the inquiry will examine how the AI model was trained, how user prompts are handled, and what measures exist to limit misuse, particularly where content may involve non-consensual or sexualised imagery.
Regulatory Concerns Around AI-Generated Content
UK regulators have increasingly warned that generative AI tools can pose significant risks when deployed without strong controls. The ability of such systems to create realistic images and text raises questions about privacy rights, consent, and accountability, especially when content involves identifiable individuals.
The ICO stated that organisations developing or deploying AI systems remain subject to existing data protection rules and must demonstrate that they have taken appropriate steps to mitigate foreseeable risks.
If the investigation finds violations of the UK General Data Protection Regulation (GDPR), the ICO has the power to impose enforcement measures, including fines of up to £17.5 million or 4% of global annual turnover, whichever is higher. The regulator may also require changes to how the AI system operates.
Wider Regulatory Context
The ICO’s probe runs alongside broader scrutiny of X in the UK and Europe. Ofcom, the UK’s communications regulator, is conducting a separate investigation into whether the platform has met its obligations under the Online Safety Act, particularly in relation to harmful and illegal content.
Authorities in other European countries have also examined the risks posed by AI-generated deepfakes, reflecting growing international concern over the rapid spread of generative technologies.
The ICO will gather information from the companies involved, assess their technical and governance safeguards, and determine whether further regulatory action is required. The investigation has no set timeline, but its outcome could shape how generative AI tools are regulated in the UK.
Key Highlights
- UK Information Commissioner’s Office opens a formal investigation into Grok AI.
- Probe focuses on sexualised deepfake imagery and personal data handling.
- Investigation covers xAI and X Internet Unlimited Company operations.
Follow Inspirepreneur Magazine for the latest breaking news.