Artificial Intelligence (AI) is an ever-evolving, emerging technology that holds immense potential for societal benefit, economic growth, and enhanced global competitiveness at an unprecedented pace. The transformative impact of AI has often been likened to historical breakthroughs such as fire and electricity, while the risks associated with AI have drawn comparisons to the dangers posed by nuclear weapons. The late Professor Stephen Hawking, while acknowledging AI's potential benefits, expressed concern that the future development of AI could spell the end of the human race.

The risks associated with AI include privacy violations, data biases, security breaches, discrimination, a lack of transparency and accountability, and unethical uses of AI. Instances of inaccurate outcomes from AI applications have been witnessed across various sectors worldwide.

To address these risks, various nations are in the process of formulating policies and regulations for AI, adopting either horizontal or vertical approaches, or a combination of both. In a horizontal approach, regulators create comprehensive regulations overseen by a centralized authority, while a vertical strategy involves a bespoke approach with multiple sector-specific regulators. However, neither approach can fully stand on its own: a purely horizontal regulatory approach struggles to specify requirements for every AI application effectively, while excessive vertical regulation may create compliance confusion for both regulators and companies.

The European Union (EU) has recently approved the AI Act, which blends elements of both horizontal and vertical regulation, leaning primarily towards a horizontal approach. Risk is at the core of the AI Act, which categorizes AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Applications posing unacceptable risk are banned, and developers of high-risk AI must comply with rigorous risk assessments and provide data for scrutiny by authorities.

Interestingly, shortly before the approval of the AI Act, generative AI products gained massive popularity among users. ChatGPT, in particular, has been dubbed the "fastest-growing consumer application ever launched." EU lawmakers introduced the category of General-Purpose AI Systems to cover products such as ChatGPT and Bard, which serve multiple purposes with varying degrees of risk. However, generative AI continues to pose regulatory challenges.

Indian Initiatives

Recognizing the immense potential of AI, the Indian government, through Niti Aayog, issued the National Strategy for Artificial Intelligence in 2018, which included a chapter on responsible AI, among other initiatives. In 2021, Niti Aayog released a paper titled "Principles of Responsible AI," outlining seven broad principles: equality, safety, inclusivity, transparency, accountability, privacy, and the reinforcement of positive human values.

In the absence of a comprehensive regulatory framework for AI systems in India, some sector-specific guidelines have been issued. For instance, in June 2023, the Indian Council of Medical Research issued ethical guidelines for AI in biomedical research and healthcare, while SEBI issued a circular in January 2019 regarding AI systems in the capital market. The National Education Policy 2020 also recommended including AI awareness in the school curriculum.

However, given the nascent stage of the AI industry in India, there has been some hesitation in regulating AI. In April 2023, the Union Minister for Railways, IT, and Telecom informed the Indian Parliament that the government was not considering laws to regulate AI growth, although he acknowledged the associated risks and referred to the papers issued by Niti Aayog. Subsequently, TRAI issued a comprehensive consultation paper in July 2023, recommending that AI be regulated and that a domestic statutory authority be established under a "risk-based framework," along with an advisory body.

During the B20 meeting in August 2023, preceding the G20 meeting, the Indian Prime Minister emphasized the need for a global framework for the expansion of "ethical" AI. This call points towards the establishment of a regulatory body overseeing responsible AI use, similar to international bodies for nuclear non-proliferation. At the G20 meeting, the Indian Prime Minister proposed international collaboration to develop a framework for responsible, human-centric AI, and G20 members agreed to pursue a pro-innovation regulatory approach that maximizes benefits while addressing risks.

As India's AI landscape evolves, it is crucial to strike a balance between regulation and cutting-edge innovation. India should establish AI guardrails that empower stakeholders to collaborate, and introduce principles that promote innovation while addressing ethical concerns, privacy issues, and biases. The recently enacted Digital Personal Data Protection Act, 2023, has to a large extent curtailed the privacy risks associated with personal data used for AI development, but its implementation remains to be examined.

A draft of the Digital India Bill is likely to be released shortly for public consultation. The legislation is intended to harmonize existing laws and to regulate emerging technologies such as AI. Since the draft Digital India Bill will be released after the G20 summit, it is expected that the bill will address the sensitivities expressed by G20 members on AI.

India has the opportunity to position itself as a global leader in responsible AI development by formulating forward-looking AI regulations that resonate globally, much like the G20 New Delhi Leaders' Declaration, which was unanimously agreed by all G20 members despite once being considered highly unlikely.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.