by Suraj Malik
Governments across Asia and beyond are rethinking how to regulate generative AI after Grok, the chatbot developed by xAI, was linked to the mass creation of non-consensual sexualized images.
The controversy erupted in late December 2025 after Elon Musk shared a Grok-generated image on X. In the days that followed, users began generating sexualized images of real people at an unprecedented scale. Researchers later estimated that millions of images were created in just over a week, including a significant number that appeared to depict minors.
By mid-January 2026, regulators in Indonesia, Malaysia, and the Philippines had temporarily blocked access to Grok, citing public safety concerns and violations related to non-consensual sexual content.
The bans were imposed rapidly, reflecting growing alarm over how easily AI tools could be misused when embedded directly into a major social platform.
On February 1, Indonesia announced it would allow Grok to resume operations, but only under strict conditions. According to reporting by TechCrunch, the government said access could be revoked again immediately if violations continued.
Malaysia and the Philippines took similar steps in late January, restoring access after X and xAI pledged to introduce stronger safeguards.
Instead of permanent bans, regulators are now using what experts describe as conditional governance. Under this model:
- Access is restored only after the company commits to specific safeguards.
- Violations can trigger an immediate re-block without a new legal process.
- Compliance is assessed continuously rather than at a single checkpoint.
Officials say this approach is more practical than permanent bans, which can often be bypassed through VPNs or mirror services.

Beyond Southeast Asia, regulators in the European Union, United Kingdom, Canada, India, Japan, Australia, and South Korea, along with the US state of California, have opened investigations into Grok and X. These probes focus on possible violations related to deepfake pornography, child protection laws, and platform safety obligations.
Some of these cases could result in significant financial penalties if authorities determine that safeguards were inadequate.
xAI has said it introduced tighter restrictions on image generation, including limits on editing photos of real people and stronger filtering of sexualized prompts. Regulators have not confirmed whether these measures are sufficient, emphasizing that compliance will be judged over time.
The Grok case is emerging as a key moment in AI regulation. It shows how quickly generative AI can cause real-world harm—and how governments are responding faster than in past technology crises.
Rather than relying solely on lengthy legal processes, authorities are now using swift bans combined with conditional access to keep pressure on AI companies.
For the global tech industry, the message is clear: powerful AI tools may no longer be allowed to operate freely without continuous oversight.