India’s AI Rules: IT Amendments for Synthetic Media Use

The Government of India, through the Ministry of Electronics and Information Technology (MeitY), has proposed substantial amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021), made under the authority of the parent Information Technology Act, 2000 (IT Act).

This legal evolution is a direct response to the proliferation of synthetically generated information, commonly termed “deepfakes”, which pose serious threats, including misinformation, financial fraud, reputational damage, and harm to national integrity. The primary legislative goal is to establish a framework for an Open, Safe, Trusted, and Accountable Internet.

Defining the New Frontier of Digital Content

A cornerstone of the regulatory update is the formal introduction of a statutory definition of “Synthetically Generated Information” via the proposed Rule 2(1)(wa). The term covers information that is “artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears reasonably authentic or true”. This definition establishes the content’s capacity to deceive as the key regulatory trigger.

To ensure comprehensive enforcement, the clarificatory inclusion in Rule 2(1A) explicitly provides that any reference to “information” in the context of unlawful acts, specifically within the existing due diligence requirements of Rule 3(1)(b), Rule 3(1)(d), and Rule 4, shall include synthetically generated information. This closes any potential regulatory gap concerning AI-generated content used to commit illegal acts.

Mandatory Traceability and Due Diligence Obligations

The amendments impose a stringent due diligence obligation on intermediaries that offer computer resources enabling the creation or modification of synthetic content. Under the proposed new Rule 3(3), these intermediaries must ensure such information is “labelled or embedded with a permanent unique metadata or identifier”. This provision demands verifiable provenance, likely necessitating the use of sophisticated AI watermarking or proof-of-origin technologies to ensure traceability across platforms.
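
The Rules do not prescribe any particular provenance technology. As a purely illustrative sketch, the following Python snippet shows one simple way an intermediary might generate a unique identifier bound to a piece of synthetic content: a content hash plus a random identifier, collected in a provenance record. All names here (make_provenance_record, the record fields) are assumptions for illustration, not terms from the Rules.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator_tool: str) -> dict:
    """Build a minimal, illustrative provenance record for synthetic content.

    Hypothetical sketch only: the Rules require a 'permanent unique metadata
    or identifier' but do not prescribe a format.
    """
    return {
        "identifier": str(uuid.uuid4()),                        # unique identifier
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds the ID to the bytes
        "generator": generator_tool,
        "synthetic": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = make_provenance_record(b"<rendered media bytes>", "example-generator")
    print(json.dumps(record, indent=2))
```

A detachable sidecar record like this is easily stripped, which is precisely why the provision speaks of a permanent embedded identifier; durable approaches would embed the mark in the media itself, for example via cryptographic watermarking or a C2PA-style signed manifest that survives re-encoding.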

The rule is highly prescriptive regarding visibility: the label or identifier must be “visibly displayed or made audible in a prominent manner”. Specifically, for visual content, the label must cover “at least 10% of the surface area of a visual display”, and for audio, it must be audible during the “initial 10% of its duration”. Intermediaries are strictly prohibited from modifying, suppressing, or removing these labels or identifiers.
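
The 10% thresholds reduce to simple arithmetic. The sketch below (function names are illustrative, not drawn from the Rules) computes the minimum label area for a visual frame and the window within which an audio label must be audible:

```python
def min_label_area_px(width_px: int, height_px: int) -> int:
    """Pixels a visible label must cover: at least 10% of the display area.
    Integer ceiling division avoids floating-point rounding."""
    total = width_px * height_px
    return (total + 9) // 10

def audible_label_window_s(duration_s: float) -> float:
    """Seconds from the start of an audio clip during which the label
    must be audible (the 'initial 10% of its duration')."""
    return 0.10 * duration_s

# A 1920x1080 frame needs a label covering >= 207,360 px^2
# (e.g. a full-width banner 108 px tall); a 60 s clip needs
# the audible label within the first 6 seconds.
print(min_label_area_px(1920, 1080))   # 207360
print(audible_label_window_s(60.0))    # 6.0
```

Note that the text speaks of the “surface area of a visual display”, so whether the 10% is measured against the media frame itself or the rendered display window is an interpretive question this sketch does not resolve.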

Enhanced Accountability for Significant Social Media Platforms

Significant Social Media Intermediaries (SSMIs), i.e., platforms exceeding a specified user threshold, are subject to heightened obligations under the proposed new Rule 4(1A). These duties apply only to synthetic content that is publicly displayed or published through their platforms.

The core compliance framework for SSMIs involves three steps (a minimal workflow sketch follows the list):

  1. User Declaration: SSMIs must “Obtain a user declaration on whether uploaded information is synthetically generated”.
  2. Verification Measures: SSMIs must “Deploy reasonable and proportionate technical measures to verify such declarations”. This requires platforms to invest in and utilize automated detection tools to audit the user’s claim of authenticity.
  3. Labelling: SSMIs must ensure the synthetically generated information is “clearly labelled or accompanied by a notice indicating the same”.
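
Under these assumptions, the three steps can be pictured as a simple moderation pipeline. The sketch below is hypothetical: classifier_score stands in for whatever automated detection tooling a platform actually deploys, and the threshold value is arbitrary.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # Step 1: declaration collected at upload

def classifier_score(content_id: str) -> float:
    """Stand-in for an automated synthetic-media detector (Step 2).
    A real platform would run a detection model over the media here."""
    return 0.0  # stub value for illustration

def moderate(upload: Upload, threshold: float = 0.8) -> dict:
    """Combine the user's declaration with detection to decide labelling (Step 3)."""
    detected = classifier_score(upload.content_id) >= threshold
    is_synthetic = upload.user_declared_synthetic or detected
    return {
        "content_id": upload.content_id,
        "label_required": is_synthetic,
        "declaration_mismatch": detected and not upload.user_declared_synthetic,
    }

print(moderate(Upload("vid-001", user_declared_synthetic=True)))
# {'content_id': 'vid-001', 'label_required': True, 'declaration_mismatch': False}
```

Keeping the user declaration and the detector’s verdict separate lets a platform both label content and flag mismatched declarations for review, which is one plausible reading of what “reasonable and proportionate technical measures” would require in practice.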

Section 79 Nexus: Safe Harbour and Legal Forfeiture

The entire regulatory structure is intrinsically linked to Section 79 of the IT Act, 2000, which provides conditional statutory immunity, or “safe harbour,” to intermediaries for third-party content. Section 79(2)(c) stipulates that this immunity applies only if the intermediary “observes due diligence” as prescribed by the Central Government.

The new Rules 3(3) and 4(1A) constitute this prescribed due diligence specifically for synthetic content. Consequently, the failure by an intermediary to enforce mandatory labelling, metadata embedding, or reasonable verification measures constitutes a breach of statutory due diligence. Such a breach results in the immediate forfeiture of the conditional protection under Section 79(1), exposing the intermediary to the same civil and criminal liabilities as the content creator for the underlying unlawful act.

Conversely, the Proviso to Rule 3(1)(b) grants explicit statutory protection. It ensures that when an intermediary, acting on a user grievance or based on “reasonable efforts,” removes or disables access to synthetically generated information in good faith, such action does not affect their exemption from liability under Section 79(2).

Governance and Constitutional Considerations

Beyond the substantive obligations, the amendments strengthen procedural accountability in content removal under Rule 3(1)(d). The updated rules mandate that any government intimation to an intermediary for the removal of unlawful content can only be issued by a senior officer not below the rank of Joint Secretary, or equivalent, or a specially authorized officer not below the rank of Deputy Inspector General of Police (DIG).

The intimation must clearly specify the “legal basis and statutory provision” relied upon, and these orders are subject to periodic review by a Secretary-level officer. This procedural tightening is designed to ensure that government action against deepfakes is transparent, proportionate, and lawfully justified.
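
To make these procedural safeguards concrete, here is a hypothetical sketch of how an intermediary’s compliance tooling might validate an incoming intimation. Every field name and rank listed is an illustrative assumption, not statutory text, and the rank set flattens “not below the rank of … or equivalent” for simplicity.

```python
from dataclasses import dataclass

@dataclass
class RemovalIntimation:
    """Hypothetical structure for a government removal intimation;
    field names are illustrative, not drawn from the Rules."""
    issuer_rank: str          # e.g. "Joint Secretary" or "DIG"
    legal_basis: str          # the legal basis relied upon
    statutory_provision: str  # the statutory provision cited
    content_url: str

# Ranks at or above the thresholds named in the amended Rule 3(1)(d),
# treated as a flat set purely for illustration.
AUTHORIZED_RANKS = {"Joint Secretary", "Secretary", "DIG", "IG", "DGP"}

def is_valid_intimation(order: RemovalIntimation) -> bool:
    """Check the procedural safeguards: an authorized issuer plus a
    clearly stated legal basis and statutory provision."""
    return (
        order.issuer_rank in AUTHORIZED_RANKS
        and bool(order.legal_basis.strip())
        and bool(order.statutory_provision.strip())
    )

order = RemovalIntimation(
    issuer_rank="Joint Secretary",
    legal_basis="Unlawful synthetic media impersonating a public official",
    statutory_provision="Rule 3(1)(d), IT Rules, 2021",
    content_url="https://example.com/post/123",
)
print(is_valid_intimation(order))  # True
```

The periodic review by a Secretary-level officer would sit outside a per-order check like this, as an organisational oversight process rather than a field on the order itself.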

However, the requirement for a “permanent unique metadata or identifier” raises a legal contention regarding the fundamental Right to Privacy. Mandating the permanent traceability of content, including provenance and creation details, potentially enables “deep surveillance” of citizens utilizing AI tools, even for legitimate expressive purposes such as political speech or journalism.

Future judicial review will be required to assess whether the regulatory objective of ensuring trust justifies this potential intrusion on privacy and the right to anonymous speech under the constitutional framework.

Conclusion

The Proposed Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, represent a definitive legislative move to establish a proactive, technical standard for digital accountability in India. The framework mandates that intermediaries become active authenticators of content, directly linking the requirement for mandatory traceability and labeling (Rules 3(3) and 4(1A)) to the availability of the statutory safe harbour defense under Section 79 of the IT Act, 2000.

If enacted, the framework will institutionalize proactive authentication as a requirement for operating as a responsible digital intermediary in India. While this structure aims to safeguard the digital ecosystem from weaponized deepfakes, its success and sustainability depend heavily on its ability to withstand constitutional challenge. The proportionality of the permanent metadata requirement against fundamental rights, particularly privacy and expression, will be a key point of future judicial scrutiny.

Furthermore, regulatory clarity regarding the “reasonable and proportionate” verification standards required of Significant Social Media Intermediaries will determine operational feasibility and enforcement effectiveness. The legislative intent is clear: to ensure accountability and build public trust by shifting the intermediary’s function from a passive conduit to an active guarantor of content provenance.
