In March 2024, the Ministry of Electronics and Information Technology (MeitY) issued a significant advisory offering guidance to intermediaries governed by the Information Technology Act, 2000 (IT Act) and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules). The advisory stresses the importance of adhering to legal and ethical norms when deploying AI models. Following a prior advisory dated December 26, 2023, it primarily addresses emerging challenges concerning the deployment of AI models by intermediaries.
Key Highlights
- The advisory requires intermediaries, i.e., online platforms and services that enable the sharing of user-generated content, such as Facebook, Instagram, WhatsApp, Google/YouTube, Twitter, Snap, Microsoft, and ShareChat, to ensure that their use of AI models, Large Language Models (LLMs), or generative AI models does not permit the posting of illegal content or violations of the IT Act. Additionally, intermediaries must clearly inform users, through their user agreements, of the potential consequences of hosting such content.
- The advisory also addressed a highly contentious issue regarding the use of under-tested and/or unreliable AI models, LLMs, and generative AI models. It mandated that the deployment of these technologies on the Indian internet must receive explicit approval from the Central Government, and it established a 15-day compliance period. It has been proposed that, after complying and applying for permission, developers may need to conduct a demonstration or stress test of their products. IT Minister Rajeev Chandrasekhar emphasized that this step aims to enhance scrutiny and rigour in the process. Additionally, if such under-tested models are made available to users, they must be accompanied by a ‘consent popup' alerting users to the potential fallibility of the AI system's outputs. This requirement faced significant opposition, particularly from AI startups, which criticized the advisory as backward-looking and lacking foresight, and warned that it could hinder AI innovation in India.
- In response to the backlash, on March 15, 2024, MeitY reportedly issued a revised advisory to eight major intermediaries, as mentioned hereinabove. The revised advisory softened the language of the initial March 1 advisory and clarified that the requirement for government approval to provide under-tested and/or unreliable AI models to the public would apply only to these eight platforms. If any intermediary allows the creation of synthetic text/audio/visual content that could be deemed misinformation or a deepfake, such content must be marked or tagged with distinct, unalterable metadata or an identifier. This is crucial for identifying the origin of the computer resource generating the content. If the content is modified, the metadata should be updated accordingly so that the user or computer resource responsible for the alteration can be traced.
- MeitY has urged all intermediaries to refrain from using AI models or Large Language Models (LLMs) that might perpetuate bias or discrimination. The ministry has also highlighted the potential adverse effects of biased AI systems on a nation's electoral processes.
- Furthermore, intermediaries and platforms whose computing resources are used to disseminate audio-visual content were advised to prevent the hosting and display of potentially misleading or deepfake content. To enable tracing of its origin, all content generated through such computing resources and displayed on a platform must be permanently labelled with metadata or an embedded identifier.
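The labelling and traceability obligation described above can be sketched in code. The record format, field names, and identifiers below are hypothetical illustrations of the concept (a permanent provenance label that is updated on each modification), not a scheme prescribed by the advisory:

```python
import hashlib
import json
from datetime import datetime, timezone

def label_content(content: bytes, origin_id: str) -> dict:
    """Attach a provenance record to synthetic content (hypothetical format)."""
    return {
        "origin_id": origin_id,  # computer resource that generated the content
        "content_hash": hashlib.sha256(content).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "modifications": [],  # appended to on each alteration
    }

def record_modification(record: dict, new_content: bytes, modifier_id: str) -> dict:
    """Append the modifying user/resource to the record, per the traceability goal."""
    record["modifications"].append({
        "modifier_id": modifier_id,
        "content_hash": hashlib.sha256(new_content).hexdigest(),
        "modified_at": datetime.now(timezone.utc).isoformat(),
    })
    return record

# Hypothetical usage: a platform labels generated content, then records an edit.
record = label_content(b"synthetic clip", "platform-model-01")
record = record_modification(record, b"edited clip", "user-42")
print(json.dumps(record, indent=2))
```

In practice, such a record would be embedded in or cryptographically bound to the media file itself so that it cannot be stripped; the sketch only shows the bookkeeping the advisory's traceability requirement implies.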
Clarifications
Ministers Ashwini Vaishnaw and Rajeev Chandrasekhar have clarified that the advisory is non-binding and primarily targets major platforms, exempting AI start-ups. Compliance remains voluntary, as no explicit penalties or enforcement mechanisms have been delineated. Understanding the origins of the opposition and its implications for the regulatory landscape is therefore important.
Compliance Issue
Because MeitY's advisories carry no explicit penalties or enforcement procedures, adherence is a matter of choice rather than a legal requirement. This amplifies the uncertainty surrounding the legal standing of MeitY's regulatory directives and raises questions about accountability and procedural fairness in technology regulation.
What is the Way Forward?
Thorough scrutiny and oversight of MeitY's regulatory measures are imperative to ensure their alignment with democratic governance principles. By adhering to regulatory requirements, implementing robust risk management practices, and upholding ethical standards, businesses can leverage the transformative potential of AI while minimizing legal liabilities and safeguarding fundamental rights. As AI continues to reshape industries, organizations should embrace responsible AI governance practices guided by legal and ethical principles.
Conclusion
MeitY officials have clarified the legal status of the advisory, stating that it functions as guidance rather than a regulatory framework, and stressing the importance of careful and responsible AI implementation. The efficacy and continuing relevance of these measures, particularly given the dynamic nature of AI technology, remain to be evaluated.
The directives outlined in this advisory supplement those in the December 26, 2023 advisory, which required intermediaries to effectively communicate prohibited content to users, especially the content specified in Rule 3(1)(b) of the IT Rules. Although MeitY is authorised under Section 13 of the Rules to provide publishers with appropriate information and advisories, it is unclear whether advisories on AI governance fall within the scope of the Rules, which casts doubt on the legality of this exercise of authority. The criteria for distinguishing “significant/large platforms” from “start-ups” lack clarity. Likewise, the standards for assessing “under-tested” and “unreliable” AI are undefined, making even voluntary compliance challenging.