1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

Currently, India has no codified laws, statutory rules or regulations, or even government-issued guidelines, that regulate AI per se. The relevant obligations are instead set out in the Information Technology Act 2000 (IT Act) and the rules and regulations framed thereunder. Of late, the Ministry of Electronics and Information Technology (MEITY) has constituted several committees and has released a national strategy for the introduction, implementation and integration of AI into the mainstream.

1.2 How is established or ‘background' law evolving to cover AI in your jurisdiction?

The draft Information Technology [Intermediaries Guidelines (Amendment)] Rules 2018, issued by MEITY (www.meity.gov.in/writereaddata/files/Draft_Intermediary_Amendment_24122018.pdf), have been subject to review and stakeholder comments, and representations have been sought. Some of the comments factored into the discussion revolve around AI and the roles of intermediaries that employ AI in their ecosystems for various purposes. NITI Aayog, a government think tank which designs strategic and long-term policies and programmes for the government and provides technical advice to central and state governments, has also issued a report (https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf) welcoming participation in AI across all sectors.

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

Such a general duty will depend on the sector in which the business exploiting the AI tool operates. For instance, where a social media intermediary deploys AI in its functioning, the laws which apply to social media intermediaries will prevail. Where the courts are faced with such an issue of first impression, reference can be made to the common law principles of tort law – in particular as developed by the UK courts – as the Indian courts look to international jurisprudence in emerging areas of the law.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and ‘escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

Where the courts are faced with such an issue of first impression, reference can be made to the common law principles of tort law – in particular as developed by the UK courts – as the Indian courts look to international jurisprudence in emerging areas of the law.

1.5 Do any special regimes apply in specific areas?

There are no special regimes which apply in specific areas.

Notably, however, one formal legal provision acknowledges the use and deployment of AI in the healthcare ecosystem (telemedicine), to assist practitioners in decision making. This will be subject to any changes made to the intermediary guidelines.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

India has joined a league of leading economies – including the United States, the United Kingdom, the European Union, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, South Korea and Singapore – in launching the Global Partnership on Artificial Intelligence (GPAI). The GPAI is an international, multi-stakeholder initiative to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation and economic growth.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

In the absence of any specific laws in this regard, MEITY is the executive agency which oversees strategies relating to AI. The Supreme Court and the high courts have the constitutional responsibility of enforcing fundamental rights, and law enforcement agencies are bound to execute and enforce court orders. In this way, the judiciary plays an important role in enforcing the applicable provisions and developing the law in areas where the relevant statutes either do not exist or are silent.

1.8 What is the general regulatory approach to AI in your jurisdiction?

NITI Aayog has collaborated with several leading AI technology players to implement AI projects in critical areas such as education, agriculture and health. The Department of Telecommunications has also established an AI standardisation committee to develop various interface standards and build India's AI stack. The general regulatory approach towards AI is transparent and conducive to growth.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

Financial services, high tech and telecommunications are the leading sectors in AI adoption. The manufacturing sector was also one of the first movers in implementing advanced robotics at scale.

2.2 What AI-based products and services are primarily offered?

As recommended by the Artificial Intelligence Task Force in its 2018 report to the Indian government, the following areas afford the greatest potential to realise gains from AI-led developments:

  • healthcare;
  • financial services;
  • education;
  • consumer and retail;
  • public and utility services; and
  • agriculture.

India has seen growth in both the fintech and health sectors, with the development of technologies that can detect anomalies and provide low-cost alternatives to older, more expensive mechanisms.

2.3 How are AI companies generally structured?

AI is a recent advancement in India and many AI companies are start-ups. The preferred structure for start-ups is a private limited company, which can also receive foreign direct investment should a capital infusion be required.

2.4 How are AI companies generally financed?

Most AI companies in India are generally financed through international and domestic capital. International collaboration also takes place via technology transfer (from India to abroad and vice versa).

2.5 To what extent is the state involved in the uptake and development of AI?

The Indian state has played an active role in the application and proliferation of AI. Whether in relation to IT, defence, fintech or agriculture, the state is exploring new ways to pursue AI-based technology.

Independent sectoral regulators, including the Securities and Exchange Board of India, are acquiring capabilities to monitor and analyse social media posts to keep tabs on possible market manipulation, and AI capabilities are set to be deployed to detect such manipulation.

Similarly, the Reserve Bank of India introduced a framework for a regulatory sandbox,[1] with a well-defined perimeter and duration, which will allow the financial sector regulator to provide the requisite regulatory guidance in order to increase efficiency, manage risks and create new opportunities for consumers. The sandbox aims to facilitate the testing of innovative technologies such as AI, application programming interface services and blockchain technology.

The Karnataka state government has launched a Centre of Excellence for Data Science and AI[2] in collaboration with the National Association of Software and Service Companies. To be established with around INR 400 million, the centre will be a first-of-its-kind hub based on a public-private partnership model and will accelerate the ecosystem in Karnataka by providing the impetus for the development of data science and AI across the country.

1. https://www.rbi.org.in/Scripts/PublicationReportDetails.aspx?UrlPage=&ID=938

2. https://indiaai.gov.in/ministries/government-of-karnataka

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

The Indian healthcare system is heterogeneous and the National Digital Health Mission (NDHM) was launched with the aim of creating digitised records of all doctor-patient interactions. With the recent notification of new regulations on telemedicine, it has been acknowledged that AI may be used for the purpose of evidence-based decision making. The NDHM is leveraging machine learning and AI with the aim of digitising healthcare records and assisting in building evidence-based healthcare delivery tools.

(b) Security and defence

In February 2019, the Ministry of Defence established a high-level Defence AI Council[3] under the chairmanship of the minister of defence, tasked with providing strategic direction on the adoption of AI in defence. The government intends to interact with the private sector to evaluate all available options for the implementation of AI in the sector.

(c) Autonomous vehicles

The Motor Vehicles (Amendment) Act 2019 does not per se provide for autonomous vehicles (AVs). However, it introduced an exemption into the Motor Vehicles Act 1988 that may allow for the testing of AVs, as follows:

Notwithstanding anything contained in this Act and subject to such conditions as may be prescribed by the Central Government, in order to promote innovation and research and development in the fields of vehicular engineering, mechanically propelled vehicles and transportation in general, the Central Government may exempt certain types of mechanically propelled vehicles from the application of the provisions of this Act.

(d) Manufacturing

Indian manufacturing companies have made attempts to move towards factory automation solutions to improve product quality and design, reduce labour costs, shorten the manufacturing cycle and monitor the real-time condition of machines. However, new AI-based hardware and software are being adopted in a largely unregulated environment, without clear rules on workers' rights, liability for AI software, data privacy or cybersecurity.

(e) Agriculture

AI in agriculture is focused on the modernisation of agricultural activities. With more than 500 agritech start-ups now operating in India, momentum is gathering pace. Many of these start-ups are leveraging technologies such as AI and machine learning to improve efficiency and yields, speed up agricultural financing and achieve other advantages that should promote India's agricultural growth. Companies such as Satsure are using space technology to assess the risks associated with the agriculture industry, ranging from weather variability and frequent natural disasters to uncertainty in crop production and market prices, lack of effective rural infrastructure and market information asymmetries that reduce the efficacy of risk mitigation.

(f) Professional services

While solutions are available in the market in this sphere, there is little publicly available data to discuss.

(g) Public sector

INDIAai (the National AI Portal of India) – a joint venture of the Ministry of Electronics and Information Technology, the National E-Governance Division of the Department of Electronics and Information Technology and the National Association of Software and Service Companies – has been set up to prepare the nation for an AI future. It is the central knowledge hub on AI and allied fields for aspiring entrepreneurs, students, professionals, academics and other stakeholders. The portal focuses on creating and nurturing a unified AI ecosystem to drive excellence and leadership in India's AI journey, to foster economic growth and improve lives. The government is aligning with the private sector to use AI for social empowerment, inclusion and transformation in key areas such as healthcare, agriculture, education and smart mobility.

3. https://www.ddpmod.gov.in/sites/default/files/AI.pdf

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The main data protection legislation in India is the IT Act and the rules, regulations and guidelines issued thereunder – more specifically, the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011. On 11 December 2019, the minister of electronics and information technology introduced the Personal Data Protection Bill 2019 (PDP Bill), which has since been referred to a standing committee and is pending formal enactment. A draft Non-Personal Data Governance Framework has also been released for public comment on the regulation of non-personal data.

In line with the EU General Data Protection Regulation, the PDP Bill seeks to limit the effects of automated decisions – in particular, by allowing individuals to control their personal data and its use – and to introduce structural changes aimed at entities that use personal data. Notably, the bill provides individuals with a (limited) right to access, rectify and erase personal data, which extends to inferences drawn for the purpose of profiling.

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

There are no comprehensive regulations on cybersecurity. The prevailing legislation is the IT Act and the rules and regulations framed thereunder, and the expected amendments to this framework will further govern cybersecurity in India. The Information Technology (Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules 2013 define a ‘cybersecurity incident' as any real or suspected adverse event in relation to cybersecurity that violates an explicitly or implicitly applicable security policy, resulting in:

  • unauthorised access;
  • denial of service or disruption;
  • unauthorised use of a computer resource for processing or storage of information or changes to data; or
  • unauthorised access of information.

There are obligations to report such incidents to the Indian Computer Emergency Response Team, which is the national nodal agency responsible for responding to cybersecurity incidents.

5 Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

In terms of competition, AI may pose challenges such as:

  • market foreclosure and related exclusionary practices;
  • novel ways of collusion; and
  • new strategies for price discrimination.

AI may also raise concerns about technological sovereignty and wealth inequality. Due to the self-learning aspects of AI and machine learning tools, much of their functioning may occur without the knowledge of the coders or programmers. While these practices are being studied abroad, they have not yet been examined in India within the framework of the Competition Act 2002.

The Indian regulators have yet to analyse, study and release reports that discuss the use or potential abuse of data, which may lead to the distortion of competition.

6 Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

AI is yet to be widely adopted within the Indian workforce ecosystem; there is still a long way to go in terms of both implementation and the regulatory mechanisms that govern it. However, through various initiatives, NITI Aayog is collaborating with several universities in India to enable the workforce to adapt and learn using AI tools.

7 Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

Typically, as AI systems cannot themselves ascertain whether they are learning from reliable or unreliable data sources, there is a real risk of data manipulation. If a person intentionally feeds inaccurate data to an AI system, this may lead to manipulation of the outcomes and may also undermine the integrity of both the data sets and the outputs. The PDP Bill imposes stringent measures to maintain the accuracy of data.
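
As a purely illustrative sketch of the kind of technical safeguard that can complement such legal measures, the following Python snippet shows one way a data pipeline might check a training data set for tampering before it is used. The file name, column names, thresholds and recorded checksum are all hypothetical assumptions, not a prescribed method.

```python
# Illustrative sketch only: one way a data pipeline might guard against tampered
# or implausible training data before it reaches an AI system. The file name,
# column names, thresholds and recorded checksum are hypothetical.
import hashlib

import pandas as pd

# Hash recorded when the data set was last approved (placeholder value).
EXPECTED_SHA256 = "<recorded hash of the approved data set>"


def file_checksum(path: str) -> str:
    """Compute a SHA-256 checksum so silent edits to the raw file can be detected."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def integrity_issues(df: pd.DataFrame) -> list[str]:
    """Apply basic plausibility checks and return a list of detected issues."""
    issues = []
    if df["age"].lt(0).any() or df["age"].gt(120).any():
        issues.append("age values outside a plausible range")
    if df["income"].isna().mean() > 0.05:
        issues.append("more than 5% of income values are missing")
    if df.duplicated().mean() > 0.01:
        issues.append("unusually high share of duplicate rows")
    return issues


if __name__ == "__main__":
    path = "training_data.csv"  # hypothetical input file
    if file_checksum(path) != EXPECTED_SHA256:
        raise SystemExit("Checksum mismatch: the data set may have been altered")
    problems = integrity_issues(pd.read_csv(path))
    if problems:
        raise SystemExit("Integrity checks failed: " + "; ".join(problems))
```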

8 AI best practice

8.1 There is currently a surfeit of ‘best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

As the AI market is still in its infancy, it is experiencing a period of growth and development. Building further on the National Strategy on AI released in 2018, NITI Aayog is seeking to develop an approach that would realise the economic benefits of AI in a responsible manner, both for users and for broader society. This approach aims to establish broad principles for the design, development and deployment of AI in India – drawing on similar global initiatives, but grounded in the Indian legal and regulatory context. A draft document entitled "Working Document: Towards Responsible #AIforAll" was released for public consultation in July 2020.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

While the draft "Working Document: Towards Responsible #AIforAll" (see question 8.1) remains open for public consultation and may be further revised in response to comments received, it sets out the following principles to ensure that AI does not cause harm:

  • the principle of safety and reliability;
  • the principle of equality;
  • the principle of inclusivity and non-discrimination;
  • the principle of privacy and security;
  • the principle of transparency;
  • the principle of accountability; and
  • the principle of protection and reinforcement of positive human values.

The document addresses ethical concerns regarding the use of AI, but at the draft stage does not yet provide guidelines on its general usage.

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

  • In our experience of AI tools implemented across various industry verticals, it is best to adopt a human-centred design approach. Transparency protocols should be baked into the system to ensure that clarity and control are available to users. Early testing should be modelled to reveal any potential adverse feedback, which could then be subjected to specific live testing; the results can be used to inform further iterations before full-scale deployment.
  • It is also essential to employ multiple metrics which are appropriate to the context and the goals of the system. Surveys should enable the company to assess both short-term and long-term product health.
  • Assess the raw data periodically, to ensure its integrity and prevent data manipulation. This can be done in a privacy-compliant manner by assessing anonymised or aggregate data summaries (a minimal sketch of such a check appears after this list).
  • Identify the limitations of your data sets and models. An AI tool cannot simply be repurposed for a different goal without changing the underlying model; doing so may lead to inaccurate results.
  • Continue monitoring the system after deployment to factor in the feedback and bake it into the system.
  • Continue to learn from constant testing of the deployed AI tool or solution, and ensure that it is working as intended and can be trusted to work towards the desired goals.
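
By way of illustration only – and not as a prescribed method – the following Python sketch shows how anonymised, aggregate data summaries might be reviewed periodically and compared against a baseline to spot drift or manipulation. The column names, the example usage and the 10% drift threshold are assumptions made for illustration.

```python
# Minimal sketch, not a prescribed method: reviewing anonymised, aggregate
# summaries of the data feeding an AI system, rather than raw records, and
# comparing them against a baseline. Column names and the 10% drift threshold
# are assumptions for illustration.
import pandas as pd


def aggregate_summary(df: pd.DataFrame, numeric_cols: list[str]) -> pd.DataFrame:
    """Return only aggregate statistics (no row-level data) for periodic review."""
    return df[numeric_cols].agg(["count", "mean", "std", "min", "max"]).T


def drift_score(current: pd.DataFrame, baseline: pd.DataFrame) -> pd.Series:
    """Relative change in column means between the current batch and the baseline."""
    return (current["mean"] - baseline["mean"]).abs() / baseline["mean"].abs().clip(lower=1e-9)


# Hypothetical usage with a monthly data batch:
# baseline = aggregate_summary(reference_df, ["transaction_amount", "response_time"])
# current = aggregate_summary(monthly_df, ["transaction_amount", "response_time"])
# drift = drift_score(current, baseline)
# print(drift[drift > 0.10])  # columns whose mean has shifted by more than 10%
```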

9 Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

The use of AI for contracting purposes can bring a higher degree of reliability. Prediction tools also assist in conducting due diligence and may help to coordinate the actions of the contracting parties.

The use of AI in the contractual context should be evaluated properly and should be aligned with the breach conditions (material breach triggers) to determine the consequences should the AI tool fail. Further, there is little jurisprudence on the creation of intellectual property by AI, so this issue will turn on the applicable policies and governing law.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

The National Strategy on AI, released in 2018, addresses the issue of liability by drawing parallels with the airline industry, in which every accident is investigated to eliminate loopholes in security, thus making the industry safer and service providers more accountable. It also proposes the establishment of an ethics council at every centre of excellence (ie, the organisation proposed to be established to advance research in AI), to set standards with regard to the liability and accountability of developers and users of AI.

A report issued by the European Union in 2019 recommends a default rule of strict liability on producers and certain operators for defects in products or digital content incorporating emerging digital technology. Similar approaches may be adopted in India.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

Typically, prejudice in the data fed to the system may lead to potential bias and discrimination. The quality of the data being fed into the system determines the quality of the solutions produced by the software: if the data is erroneous, incorrect or biased, the system will produce biased results, as well-publicised incidents involving Amazon and other big data firms illustrate.

As the base data fed into AI systems, the exhaustive step-by-step instructions, the inherent perspectives and the assumptions are all controlled by humans, it is possible for a person to unintentionally build bias into the system.

To mitigate such bias, several metrics must be modelled on the basis of class labels (eg, class, race and sexual orientation). Once the possible parameters of discrimination or bias have been established, they will need to be constantly monitored and evaluated before actual deployment, to ensure that bias is not determining the output.

As the output cannot be entirely predicted, tight regulation of the quality of input data is vital. To minimise bias, outliers must be monitored by applying statistics and data exploration.
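
As a minimal, hedged sketch of such checks – not a mandated methodology – the following Python snippet computes outcome rates per protected group (a simple demographic parity-style comparison) and flags statistical outliers in a numeric feature. The column names and tolerance values are assumptions made for illustration.

```python
# Illustrative sketch only, using assumed column names: computing outcome rates
# per protected group (a simple demographic parity-style comparison) and flagging
# statistical outliers in a numeric feature before deployment.
import pandas as pd


def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes for each group defined by the class label."""
    return df.groupby(group_col)[outcome_col].mean()


def parity_gap(rates: pd.Series) -> float:
    """Difference between the most and least favoured groups (0 means parity)."""
    return float(rates.max() - rates.min())


def zscore_outliers(values: pd.Series, threshold: float = 3.0) -> pd.Series:
    """Flag values more than `threshold` standard deviations from the mean."""
    z = (values - values.mean()) / values.std(ddof=0)
    return values[z.abs() > threshold]


# Hypothetical usage on a scored data set with columns 'group', 'approved' and 'loan_amount':
# rates = selection_rates(scored_df, "group", "approved")
# if parity_gap(rates) > 0.2:  # assumed tolerance, to be set per context
#     print("Selection rates diverge across groups:", rates.to_dict())
# print(zscore_outliers(scored_df["loan_amount"]))
```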

10 Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

The AI market is still in its infancy and is being promoted by the Indian government as part of its ‘Digital India' initiative. NITI Aayog is playing a central role in formulating policy to create and protect innovation. It has also announced the AI Research, Analytics and Knowledge Assimilation Platform (AIRAWAT), a cloud computing platform for big data. The platform will focus on big data analytics and similar tasks, and will support multi-tenant, multi-user computing, with resource partitioning, a dynamic computing environment and other features.

Sectoral regulatory sandboxes are promoting innovation by providing innovators with limited data sets and restricted policy measures for implementation and governance, within a closed environment to determine the way forward in terms of regulation.

10.2 How is innovation in the AI space incentivised in your jurisdiction?

Regulatory sandboxes and ongoing dialogue with government agencies are the main measures that the government has adopted to incentivise innovation in the AI space. Additionally, the benefits which are available to start-ups are sector agnostic and will be available to all AI players.

11 Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

Employment in India is predominantly regulated by contractual arrangements, and AI companies must rely on this framework to supplement any functions that will be fulfilled by AI instead of human resources. In the face of the ongoing Covid-19 pandemic, several companies are devising AI-based solutions which could be deployed in manpower-reliant industries to ensure continuity of work or business.

11.2 How can AI companies attract specialist talent from overseas where necessary?

Employment law places no embargo on hiring foreign talent in the IT sector. Much like foreign direct investment, to which few limitations apply, AI companies can attract and engage specialist talent from overseas.

12 Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

While there are several proposals in the pipeline, there are currently no restrictive factors limiting the proliferation of AI in India. In the meantime, the impending privacy legislation and the framework for non-personal data will streamline the regulatory framework. Intermediary guidelines could also be issued within the next calendar year.

13 Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

Given the lack of legal restrictions, the proposed frameworks and the government's intention to make it easy to do business in India, AI companies seeking to enter the Indian market should consider the following:

  • No foreign direct investment restrictions apply in the IT/IT-enabled services industry verticals per se.
  • India's large population allows for diverse data sets and behavioural patterns which could be analysed to develop and create evidence-based templates for welfare sectors such as healthcare, agritech and social mobility.
  • The adoption of new approaches by health sector councils and government should enable AI companies to recruit locally.
  • In sectors which require bureaucratic intervention, AI companies could face certain issues. However, with the government's latest initiatives to overcome bureaucratic barriers, many processes are being streamlined and moving to entirely web-based/online portals, to make it easier to do business.
  • Companies should be proactive about cultural differences in the workplace.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.