Introduction

'Deepfake' is a portmanteau of "deep learning" and "fake." It refers to synthetic media created using deep learning techniques to produce digital counterfeits. Deepfakes can generate entirely original content as well as manipulate text, images, audio, and video. They may be used in cybercrime operations to smear targets, or to impersonate or blackmail political authorities. Just as importantly, deepfakes can severely disrupt a victim's mental well-being and pose a deeply personal threat.

Deepfake technology can cause serious repercussions, such as the spread of false information, harm to one's reputation, and a decline in public confidence. Because of their capacity to produce convincing fake content, deepfakes pose a serious threat to the veracity of digital information. They can be used to propagate misleading information, sway public opinion, and thwart the democratic process. Famous people in particular fear having their reputations damaged by deepfakes that purport to show them engaging in harmful, fictitious behavior. For example, a viral deepfake video featuring the actress Rashmika Mandanna recently surfaced across various social media platforms, sparking heightened public concern. The incident has brought to the forefront the unsettling reality that a video can be manipulated to such an extent that differentiating between the real video and the deepfake becomes an arduous task.

Contemporary Developments in the World

On December 9, 2023, the European Parliament reached a provisional agreement with the Council on the AI Act. This legislative development outlines a comprehensive framework comprising four distinct risk categories: minimal or no risk, limited risk, high risk, and unacceptable risk. The AI Act seeks to systematically address potential risks associated with artificial intelligence across these specified risk levels. Adopting this risk-based approach, the AI Act would classify deepfakes as "limited risk" and accordingly impose minimum transparency obligations requiring disclosure that the content was generated through automated means, subject to exceptions for legitimate purposes such as satire.

The law-making procedure began in 2021; however, the rise of generative AI such as ChatGPT proved to be a game changer in the realm of new innovation. These AI systems can produce text, translations, and pitches in seconds, but their darker side carries even graver risks. Chief among these is deepfake technology, which raises fresh concerns with each passing day. To address this very concern, lawmakers in the EU decided to create a legal framework specifically for generative AI. It is the first text of its kind in the world.

The process took longer than expected because, as is often the case with innovation and its regulation, laws struggle to keep pace with technological advances and tend to lag behind. Member states such as Germany and France also raised valid concerns, fearing that too much regulation could hinder innovation and hurt emerging AI companies. For instance, the French company Mistral AI is one such emerging player, and such companies have become strategically important as every region of the world is now engaged in the race for AI. The upcoming law is undoubtedly a historic step forward, but it is not yet in force: it still needs to be formally approved by the member states and the Parliament, and further developments in these AI laws may well emerge by then.

Legal Frameworks in India

India lacks any specific law governing deepfakes; however, a plethora of laws can collectively govern deepfakes to an extent. These include Rule 7 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, as well as some relevant provisions of the Information Technology Act, 2000. For instance, Section 79(1) of the IT Act shields online intermediaries from liability for third-party information, data, or communication links hosted on their platforms. Simultaneously, Rule 7 of the IT Rules enables aggrieved individuals to take legal action against platforms under the provisions of the Indian Penal Code. This legal framework maintains a balance: intermediaries are exempt from direct responsibility for user-generated content, while individuals harmed by online content retain a legal avenue for recourse.

Section 66E of the IT Act stipulates penalties for infringing on an individual's privacy by publishing or transmitting images of their private areas without consent. The punishment extends to three years of imprisonment, a fine of up to INR 2 lakh, or both. This provision aims to deter and penalize the unauthorized disclosure of private images, safeguarding individuals from privacy violations in the digital realm. The legal dilemma with deepfakes, however, is whether this section can be attracted when the images in question are entirely fake, generated with the help of such generative AI rather than captured from a real person. The law falls short here.

Sections 67, 67A, and 67B of the IT Act explicitly forbid and establish penalties for the publication or transmission of obscene material, material featuring sexually explicit acts, and depictions of children engaged in sexually explicit acts in electronic form, respectively. These legal provisions serve to curb the dissemination of inappropriate and harmful content, ensuring that individuals engaging in such activities face legal consequences.

In 2023, the Digital Personal Data Protection Act also came into the picture, a modest effort by the government to tackle the ongoing issues. It places certain obligations on data fiduciaries, including obtaining prior permission from the data principal before processing personal data. The Act defines a data principal as the individual to whom the personal data relates. This is certainly a progressive provision, as it places the onus on AI models to obtain the data principal's permission, but it looks good only in theory, as it is difficult to implement in practice.

The Stance of Judiciary

The Delhi High Court has extended by two weeks the deadline for the government to reply to a Public Interest Litigation (PIL) that raises concerns about the absence of regulatory frameworks overseeing Artificial Intelligence (AI) and deepfake technologies in India. The PIL was filed by advocate Chaitanya Rohilla through advocate Manohar Lal, urging the Central government to take action by imposing restrictions on websites and artificial intelligence platforms hosting deepfake content.

Acting Chief Justice Manmohan and Justice Manmeet PS Arora, recognizing the extensive nature of the issue, observed: "Given the substantial dimensions of this matter, we believe the Union of India would be best suited to formulate regulations. Let the Union of India consider this matter initially." This is not the first such instance: on December 4, 2023, the Delhi High Court had previously sought the government's position on the same PIL, underscoring the importance of comprehending the significance of the technology and its potential benefits.

Government's Response

Union Minister Rajeev Chandrasekhar has expressed concerns over the mixed compliance with the advisory on deepfakes by social media and online platforms. He announced that amended IT rules addressing the issue of deepfakes and misinformation will be notified within a week. Chandrasekhar emphasized that platforms bear the responsibility of detecting and removing deepfakes and prohibited content and warned of potential blocking for failure to fulfill this responsibility. The government's firm stance follows instances of 'deepfake' videos targeting public figures, sparking public outrage. Chandrasekhar highlighted that the amended rules will embed the advisory's guidelines more explicitly, ensuring compliance with regulations regarding misinformation and deepfakes. The government is determined to address the challenges posed by deepfakes, signaling potential consequences for non-compliance, including platform blocking to prevent harm to users.

Conclusion and Recommendations

An interdisciplinary strategy is necessary to tackle the issues raised by deepfake technology in India. The adoption of dedicated legislation, along the lines of the European Union's AI Act, ought to be considered first. Such a law must include legal guidelines for producing and sharing harmful content, together with regulatory frameworks for deepfake and artificial intelligence technologies. Existing laws such as the Information Technology Act and the Digital Personal Data Protection Act, 2023 have made their mark in curtailing the issue, but these provisions also need to be amended concurrently to specifically include provisions pertaining to deepfakes. This would ensure that the legal framework is thorough and keeps pace with the latest developments in technology.

Furthermore, platform responsibility needs to be increased in order to strengthen the response against the spread of deepfakes. The pledge made by Union Minister Rajeev Chandrasekhar to amend the IT rules and hold platforms directly accountable for the prompt detection and removal of deepfakes is a step in the right direction. Nonetheless, the Indian legal system needs improvement, especially in clearly defining offenses linked to deepfake content and adjusting to a rapidly changing technological environment. As the legal and regulatory landscape evolves, collaboration between legal measures, judicial actions, public awareness initiatives, and technological solutions will be imperative to effectively mitigate the risks associated with deepfakes and preserve the integrity of digital content and privacy in the digital age.

References

  1. Synthetic Media and Legal Quagmires: Unveiling Deep Fakes In The Indian Legal Context. [https://www.livelaw.in/articles/synthetic-media-and-legal-quagmires-unveiling-deep-fakes-in-the-indian-legal-context-247449].
  2. Deepfakes in India: Mixed response to advisory; government to notify tighter IT rules in a week. [https://www.thehindu.com/sci-tech/technology/deepfakes-in-india-mixed-response-to-advisory-government-notify-tighter-it-rules-in-a-week/article67747422.ece].
  3. Deepfakes And Breach Of Personal Data – A Bigger Picture. [https://www.livelaw.in/law-firms/law-firm-articles-/deepfakes-personal-data-artificial-intelligence-machine-learning-ministry-of-electronics-and-information-technology-information-technology-act-242916?infinitescroll=1].
  4. EU AI Act: first regulation on artificial intelligence. [https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence].
  5. Delhi High Court grants Centre two weeks to respond to PIL on AI and deepfake regulation. [https://www.businesstoday.in/technology/news/story/delhi-high-court-grants-centre-two-weeks-to-respond-to-pil-on-ai-and-deepfake-regulation-412382-2024-01-09].