by Suraj Malik - 2 days ago - 3 min read
The UK government is proposing a tough new law that would force online platforms to remove intimate images shared without consent within 48 hours of being reported. Companies that fail to act quickly could face significant financial penalties and, in extreme cases, restrictions on operating in the country.
The proposal reflects growing concern in the UK about the rapid rise of deepfake abuse and image-based harassment.
Under the plan, tech platforms would have a legal duty to take down non-consensual intimate images within a strict 48-hour window after a valid report is filed. The duty would apply however the content is used, from targeted harassment to deepfake abuse.
Officials say the deadline is designed to mirror the urgency already expected for the removal of terrorist or extremist material online.
The proposal is not limited to social media. It would cover a wide range of services that host or distribute user-generated content.
Importantly, the rules explicitly include AI-generated images, acknowledging how generative AI tools have accelerated the spread of realistic deepfake abuse.
If the law is enacted, UK regulators would gain stronger enforcement powers.
Platforms that fail to comply could face significant financial penalties and, in extreme cases, restrictions on operating in the UK.
The government is signaling that slow or inconsistent moderation will no longer be acceptable where intimate image abuse is involved.
Authorities argue that the scale and speed of image-based abuse are increasing, particularly with the rise of AI tools that can generate realistic fake content.
Officials believe current reporting and takedown systems are often too slow, leaving victims exposed while harmful images continue circulating.
By imposing a clear 48-hour deadline, the government aims to force platforms to build faster detection, review and removal pipelines.
The proposal fits into a broader UK strategy to hold technology companies more responsible for online harms, especially those amplified by generative AI.
Regulators are increasingly moving from voluntary guidance toward hard legal obligations, particularly in areas involving serious online harms such as intimate image abuse and AI-generated deepfakes.
For AI and social media companies alike, the direction of travel is clear: speed of response is becoming a regulated requirement, not just a best practice.
The UK’s proposed 48-hour takedown rule marks another escalation in efforts to combat non-consensual intimate imagery and AI-driven deepfakes. If implemented, platforms will face strict deadlines and serious penalties for failing to protect victims quickly.
The move underscores a broader shift in online regulation, where governments are demanding faster, more proactive action from tech companies as AI makes harmful content easier to create and distribute.