Technology

3-Hour Takedown Rule: India Tightens Grip on AI Content Ahead of 2027 Elections

by Vivek Gupta - 1 week ago - 4 min read

India has introduced sweeping new rules to regulate artificial intelligence-generated content, imposing mandatory labelling requirements and a strict three-hour takedown window for unlawful material. The amendments, notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, will come into effect on February 20, 2026.

The changes amend the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and directly target synthetic and AI-generated content, referred to in the rules as “synthetically generated information” (SGI). The move follows rising concerns around deepfakes, impersonation fraud, and misinformation, particularly as the country prepares for the 2027 elections.

Mandatory labelling of all AI-generated content

Under the new framework, social media platforms must clearly label AI-generated or AI-altered audio, video, and image content. For visual media, labels must occupy at least 10 percent of the screen. For audio and video, the label must appear for at least the first 10 percent of the duration.
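
The thresholds translate into simple arithmetic. Below is a rough Python illustration, assuming “10 percent of the screen” refers to display area and that duration is measured from the start of playback; the notified text may define both differently.

    # Illustrative only: minimum label size and display time under
    # the 10 percent thresholds. The measurement basis is an assumption.

    def min_label_area(width_px: int, height_px: int) -> int:
        """Minimum label area: 10% of the visible frame, in pixels."""
        return int(0.10 * width_px * height_px)

    def min_label_seconds(duration_s: float) -> float:
        """Minimum label time for audio/video: the first 10% of playback."""
        return 0.10 * duration_s

    # Example: a 1080x1920 vertical video running 90 seconds.
    print(min_label_area(1080, 1920))   # 207360 square pixels
    print(min_label_seconds(90.0))      # 9.0 seconds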

In addition to visible disclosures, platforms are required to embed persistent, non-removable metadata that includes unique identifiers traceable to the content’s origin. Users uploading content must declare whether it is AI-generated or modified, and platforms are expected to verify these declarations using automated detection systems.
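
The rules do not prescribe a metadata schema; provenance standards such as C2PA are one obvious candidate. The following minimal sketch shows the concept with hypothetical field names, using a content hash as the traceable identifier.

    import hashlib
    import json
    import uuid
    from datetime import datetime, timezone

    def provenance_record(content: bytes, uploader_id: str,
                          declared_sgi: bool) -> str:
        """Build an illustrative provenance record for an upload.

        Field names are hypothetical: the rules require identifiers
        traceable to origin but do not mandate this schema.
        """
        return json.dumps({
            "record_id": str(uuid.uuid4()),                 # unique identifier
            "sha256": hashlib.sha256(content).hexdigest(),  # ties record to file
            "uploader_id": uploader_id,                     # origin traceability
            "declared_synthetic": declared_sgi,             # user's declaration
            "created_at": datetime.now(timezone.utc).isoformat(),
        })

    print(provenance_record(b"...media bytes...", "user-123", True))

Making such a record genuinely non-removable is the hard part: it implies cryptographic signing and format-level embedding rather than a detachable JSON blob.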

The rules prohibit the removal or suppression of these labels and metadata. Platforms must also deploy tools capable of identifying illegal or deceptive synthetic content.

The definition of SGI covers AI-generated or AI-altered material that is indistinguishable from real content, such as deepfakes and other synthetic media.

Three-hour takedown mandate

Perhaps the most consequential change is the reduction of the content removal timeline. Platforms must now take down flagged unlawful AI content within three hours of receiving notice. Previously, intermediaries had up to 36 hours to act.
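
In engineering terms, the amendment tightens the service-level clock attached to every notice. A minimal sketch of the deadline arithmetic, with an illustrative timestamp:

    from datetime import datetime, timedelta, timezone

    TAKEDOWN_WINDOW = timedelta(hours=3)   # previously 36 hours

    def takedown_deadline(notice_received: datetime) -> datetime:
        """Deadline by which flagged content must be removed."""
        return notice_received + TAKEDOWN_WINDOW

    notice = datetime(2026, 2, 21, 9, 30, tzinfo=timezone.utc)
    print(takedown_deadline(notice))  # 2026-02-21 12:30:00+00:00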

The accelerated timeline applies to:

  • Deepfakes and misleading synthetic media
  • Child sexual abuse material
  • Non-consensual intimate content
  • Fabricated events intended to mislead
  • Content related to explosives or serious harm

Failure to comply risks the loss of “safe harbour” protection under Section 79 of the IT Act, exposing platforms to greater legal liability.

What platforms must now implement

Significant social media intermediaries, defined as platforms with over 5 million registered users in India, face enhanced due diligence obligations. This includes Instagram, YouTube, Facebook, X, and other large services.

Platforms must:

  • Apply visible AI labels to all synthetic content
  • Embed persistent traceable metadata
  • Deploy automated verification tools for user declarations (one possible shape is sketched after this list)
  • Issue quarterly warnings to users about penalties for AI misuse
  • Remove flagged unlawful content within three hours
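
The verification obligation is the least specified of these. One plausible shape, assuming the platform pairs each upload’s declaration with an automated detector and escalates mismatches for human review; the detector and threshold below are hypothetical.

    # Hypothetical declaration-verification flow: the rules require
    # automated tools but specify neither models nor thresholds.

    def synthetic_score(media: bytes) -> float:
        """Stand-in for a real SGI/deepfake detector (hypothetical)."""
        return 0.5  # a production system would run an actual model

    def verify_declaration(media: bytes, declared_synthetic: bool,
                           threshold: float = 0.8) -> str:
        """Compare a user's declaration against the detector output."""
        score = synthetic_score(media)
        if score >= threshold and not declared_synthetic:
            return "mismatch: label as SGI and flag for review"
        if declared_synthetic:
            return "label as SGI"
        return "no label required"

    print(verify_declaration(b"...media bytes...", declared_synthetic=False))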

Industry bodies such as the Internet and Mobile Association of India (IAMAI) had earlier raised concerns that overly broad definitions could stifle innovation. The final rules are reportedly narrower, focusing on misleading or harmful synthetic content rather than on all AI use indiscriminately.

What changes for users

For content creators and everyday users, the most immediate shift will be mandatory AI disclosures. Anyone uploading altered or AI-generated content must declare its nature, triggering automatic labelling and metadata embedding.

For consumers, the rules are designed to make deepfakes easier to identify. Permanent labels and visible markers aim to reduce confusion and curb manipulation.

Users will also receive periodic reminders about penalties associated with misuse of AI-generated content. Violations, particularly involving fraud, impersonation, or non-consensual material, may attract prosecution under existing laws.

Why the government moved now

The amendments follow a surge in deepfake incidents involving political figures, celebrities, and private citizens. Concerns intensified with reports of impersonation scams and synthetic media used in harassment campaigns.

The Election Commission has already mandated AI campaign labelling in the run-up to the Bihar elections, and the new rules align with that directive. Officials have framed the amendments as proportionate safeguards intended to balance innovation with accountability.

India’s approach also mirrors global developments. The European Union’s AI Act includes transparency provisions for synthetic content, while several U.S. platforms have adopted voluntary labelling practices.

A short compliance window

With the rules taking effect on February 20, platforms have just ten days to implement the required systems. Technical challenges include embedding non-removable metadata, detecting synthetic content reliably, and scaling automated moderation to meet the three-hour removal requirement.

The coming weeks will test how effectively large platforms can operationalize these changes without disrupting user experience.

For now, the message from New Delhi is clear: synthetic media will no longer operate in a grey zone. Transparency and speed are now mandatory, and the burden of enforcement rests squarely on digital intermediaries.