The Government of India has significantly tightened the rules governing online content, introducing amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 that come into effect on 20 February 2026.
Under the new framework, all AI-generated, synthetically altered or deepfake content must be clearly and prominently labelled. Digital platforms, including Facebook, Instagram, YouTube and X, must remove any unlawful or harmful AI content within three hours of receiving a government or court order, a dramatic reduction from the previous 36-hour window.
The rules also mandate persistent metadata or identifiers to trace the origin and nature of synthetic content, along with user declaration mechanisms and automated safeguards against unlawful use. Government officials say the move will enhance online transparency and curb misinformation and abuse, while technology companies and rights advocates have raised concerns about feasibility, free speech and industry consultation.
New Compliance Norms, Labels and Deadlines
The amended rules introduce, for the first time, a statutory definition of “synthetically generated information (SGI)”, which covers any audio, visual or audio‑visual content created or altered using artificial intelligence or algorithms that appears real or authentic. Routine edits such as colour correction or accessibility improvements are not considered SGI, nor are benign research, educational material or illustrative content that does not mislead users.
Under the new obligations, every piece of AI‑generated or altered content must carry a clear, prominent disclosure label visible to users. Platforms must also embed permanent metadata or unique identifiers wherever technically feasible to facilitate traceability of the content’s origin and generation mechanism. Importantly, intermediaries are barred from enabling the removal, suppression or alteration of these labels or identifiers once applied.
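The rules do not prescribe a particular labelling technology, so what follows is only a minimal sketch of the general idea, assuming Python with the Pillow imaging library: writing a provenance label into a PNG file's text metadata and reading it back. The "SGI-Label" and "SGI-Origin" field names are hypothetical, and a production system would more likely adopt an industry standard such as C2PA content credentials.

```python
# Minimal sketch: embedding and reading a provenance label in PNG metadata.
# Field names ("SGI-Label", "SGI-Origin") are hypothetical, not from the rules.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, generator: str) -> None:
    """Write an AI-disclosure label into the image's text chunks (dst must be .png)."""
    image = Image.open(src_path)
    info = PngInfo()
    info.add_text("SGI-Label", "synthetically-generated")  # disclosure flag
    info.add_text("SGI-Origin", generator)                 # generation tool/provider
    image.save(dst_path, pnginfo=info)

def read_label(path: str) -> dict:
    """Return any SGI disclosure fields found in a PNG's text chunks."""
    image = Image.open(path)
    return {k: v for k, v in getattr(image, "text", {}).items() if k.startswith("SGI-")}
```

Plain metadata of this kind is trivially stripped by re-encoding the file, which is presumably why the rules bar intermediaries from enabling the removal or alteration of labels and identifiers once applied.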
To ensure compliance, platforms will be required to obtain user declarations at the time of upload about whether the content is AI-generated, and to deploy automated tools or other proportionate technical measures to verify these declarations before the material is published online.
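What "verifying a declaration" looks like in practice is left to platforms. As a rough illustration, assuming Python, the sketch below routes an upload based on the user's declaration and an automated check; the detect_synthetic_probability stub is a hypothetical stand-in for whatever watermark or classifier check a platform actually deploys.

```python
# Illustrative sketch: reconcile a user's upload-time declaration with an
# automated detector before publishing. The detector here is a stub.
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    user_declared_ai: bool  # declaration collected at upload time

def detect_synthetic_probability(content: bytes) -> float:
    """Hypothetical stand-in for a real detector (watermark or model-based).
    Returns 0.0 here so the sketch stays runnable."""
    return 0.0

def route_upload(upload: Upload, threshold: float = 0.8) -> str:
    """Publish with the SGI label if AI use is declared; flag uploads whose
    'not AI' declaration conflicts with the automated check."""
    if upload.user_declared_ai:
        return "publish-with-sgi-label"
    if detect_synthetic_probability(upload.content) >= threshold:
        return "hold-for-review"  # declaration conflicts with the detector
    return "publish"
```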
A senior official at the Ministry of Electronics and Information Technology (MeitY) noted that “these changes are designed to make synthetic content transparent and responsibly managed, reducing the risk of public deception and harm.” Although specific quotes were limited in official releases, government statements emphasise accountability alongside technological safeguards.
In addition to labelling requirements, the rules shorten content takedown timelines drastically: platforms must act on lawful government or court orders within three hours, down from the previous 36 hours. Other grievance redressal deadlines have also been tightened, with user complaint response periods reduced from 15 to seven days and certain removal actions required within as little as two hours.
A Digital Safety Push Amid Rising Deepfakes
The amendments follow growing global and domestic concern over the misuse of generative AI to produce deepfakes, fabricated audio, manipulated images and audiovisual material that can impersonate real individuals, distort events or mislead audiences for malicious purposes. While AI offers significant creative and productivity benefits, the ease of generating synthetic content that appears genuine has alarmed policymakers, civil society and digital rights groups alike.
The government has highlighted risks linked to fraud, defamation, impersonation, electoral manipulation, non‑consensual intimate imagery, child sexual abuse material and other unlawful uses of synthetic media. The amended rules explicitly treat such harmful AI content on par with other illegal online activity, subjecting it to swift takedown and legal enforcement.
The decision to compress the removal window has sparked debate. Proponents argue that misinformation and harmful deepfakes can spread widely within minutes, requiring rapid response mechanisms.
However, digital rights advocates and legal experts caution that a three‑hour deadline may be impractical, especially for smaller platforms or independent developers, and could lead to excessive content removal or inadvertent infringement on free expression if not paired with clear safeguards and appeal processes. They also note that India’s approach is now among the more assertive content regulation regimes globally, diverging from longer timelines typical in other jurisdictions.
Tech industry stakeholders have also raised concerns about limited consultation and technical feasibility. Representatives of major platforms have reportedly urged additional time and clearer technical standards for verification tools, metadata requirements and integration with existing moderation systems.
Industry feedback did shape some elements of the final rules: an earlier proposal to mandate that AI labels cover at least 10 per cent of the visual display area, for instance, was dropped in response to implementation challenges.
The Logical Indian’s Perspective
At a time when digital technologies are reshaping communication, creativity and social interaction, the rise of AI‑generated content presents both immense opportunities and grave risks. These amended IT rules represent a proactive effort to protect users from deception, exploitation and harm without stifling innovation.
However, we must ensure that regulatory frameworks protect freedom of expression, uphold due process, and remain adaptable to evolving technologies. Responsible platforms, vigilant civil society and informed citizens must work together with policymakers to create a digital ecosystem that prioritises truth, dignity and accountability.