In scientific terms, Artificial Intelligence has been described as the mechanical simulation of the process of collecting knowledge and information, processing it (collating and interpreting it), and disseminating it to those eligible in the form of actionable intelligence.1

The term was coined in 1956 by John McCarthy, a computer scientist. He defined AI as "making a machine behave in ways that would be called intelligent if a human were so behaving".2

Artificial intelligence (AI) is rapidly transforming many industries and professions, and arbitration is no exception. What holds good for human intelligence also applies to AI.3 Despite lawyers' traditional conservatism towards new technologies, technology is slowly making its way into legal practice and even international arbitration.4

In recent years, there has been a growing interest in the potential of AI to streamline and improve the arbitration process. This article explores the intersection of artificial intelligence and arbitration, looking at how AI is being used in this field and what the future may hold.

How is AI being used in arbitration?

There are several ways in which AI is being used in arbitration, including:

1. Document review and analysis

One of the most time-consuming and expensive parts of the arbitration process is document review and analysis. AI can be used to automate much of this process, allowing for faster and more efficient reviews of large volumes of documents. AI systems can be trained to recognize key terms, identify patterns, and flag relevant documents for review, saving time and reducing costs.
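To make this concrete, here is a minimal sketch of what an automated first-pass review might look like, assuming plain-text documents in a folder and a purely illustrative list of key terms (real e-discovery platforms use trained models rather than fixed term lists):

```python
# Minimal sketch: flagging documents that mention key terms for human review.
# The term list and threshold are illustrative assumptions, not a production
# e-discovery setup.
from pathlib import Path

KEY_TERMS = ["indemnity", "liquidated damages", "breach", "force majeure"]

def flag_documents(folder: str, threshold: int = 2) -> list[tuple[str, int]]:
    """Return (filename, hit count) pairs for documents worth human review."""
    flagged = []
    for doc in Path(folder).glob("*.txt"):
        text = doc.read_text(encoding="utf-8", errors="ignore").lower()
        hits = sum(text.count(term) for term in KEY_TERMS)
        if hits >= threshold:
            flagged.append((doc.name, hits))
    # Most term-dense documents first, so reviewers see them earliest.
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for name, hits in flag_documents("case_documents"):
        print(f"{name}: {hits} key-term hits")
```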

2. Predictive analytics

AI can be used to analyze data from past arbitration cases to identify patterns and make predictions about the outcome of future cases. This can be particularly useful in determining settlement offers and negotiating strategies.
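As a simplified illustration, the sketch below fits a basic outcome classifier to a handful of hypothetical past-case features; the features, data, and model choice are assumptions made for illustration only, not a description of any actual predictive tool:

```python
# Minimal sketch: predicting an arbitration outcome from past-case features.
# The features and training data are entirely hypothetical; real predictive
# analytics would require a large, carefully curated case database.
from sklearn.linear_model import LogisticRegression

# Each row: [claim amount (USD millions), number of claims, claimant's prior wins]
X_train = [[1.0, 2, 0], [5.0, 4, 1], [0.5, 1, 0], [8.0, 6, 2], [2.0, 3, 1]]
y_train = [0, 1, 0, 1, 1]  # 1 = award favoured the claimant in the past case

model = LogisticRegression().fit(X_train, y_train)

new_case = [[3.0, 2, 1]]
probability = model.predict_proba(new_case)[0][1]
print(f"Estimated probability of a claimant-favourable award: {probability:.0%}")
```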

3. Decision-making

AI can also be used to assist arbitrators in making decisions. For example, AI systems can be used to analyze evidence and provide recommendations to arbitrators based on patterns and trends in the data.

4. Language processing

AI can be used to analyze the language used in arbitration documents, such as contracts and agreements, to identify potential issues and ensure that the documents are clear and concise.
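A minimal sketch of this idea follows, assuming a fixed and purely illustrative list of vague phrases; commercial contract-review tools rely on trained language models rather than word lists:

```python
# Minimal sketch: flagging potentially ambiguous wording in a contract clause.
# The list of vague phrases is an illustrative assumption.
import re

VAGUE_PHRASES = ["reasonable efforts", "as soon as possible", "material", "substantially"]

def flag_ambiguity(clause: str) -> list[str]:
    """Return the vague phrases found in a clause, for human review."""
    found = []
    for phrase in VAGUE_PHRASES:
        if re.search(rf"\b{re.escape(phrase)}\b", clause, flags=re.IGNORECASE):
            found.append(phrase)
    return found

clause = ("The Supplier shall use reasonable efforts to remedy any material "
          "defect as soon as possible after notice.")
print(flag_ambiguity(clause))  # ['reasonable efforts', 'as soon as possible', 'material']
```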

Framework for Legislation

Although the government has been promoting artificial intelligence and its applications, there are currently no specific laws in India that address big data, machine learning, or artificial intelligence. However, the process of developing laws, rules, and policies focused on governing and overseeing AI is already underway.

Even the framework for online dispute resolution has undergone significant adjustments thanks to the judiciary's leadership. The eCourts Mission Mode Project has been the catalyst for several initiatives. The Lok Adalat has evolved into the e-Lok Adalat, which operates online. The Supreme Court has harnessed AI's potential by developing SUVAS (Supreme Court Vidhik Anuvaad Software), which translates judicial documents from English into nine regional languages.

The NITI Aayog committee, in its report,5 acknowledged the advantages of online dispute resolution (ODR) and the contribution of AI to its success. The report recognized the value of AI in the creation of India's ODR system: its use can bring a variety of advantages, such as the removal of human bias from the dispute-settlement process. According to the report, ODR's objective is to supplement the current model of dispute resolution, not to replace it completely. An illustration is the ODDRP (Online Dispute Diversification Resolution Platform) model from Zhejiang, which incorporates numerous ICT (Information and Communication Technology) tools, such as cloud computing and artificial intelligence. There, AI has been used in tandem with an effective offline docking system to facilitate litigation and dispute resolution.

The potential for integrating technology into the legal system is still enormous. For instance, blockchain-driven arbitration procedures could be built around smart contracts. Computer-coded smart contracts have the potential to automate enforceability through the transfer of rights and duties, facilitating the management of disputes through blockchain arbitration. The main legal frameworks supporting blockchain contracts are the UNCITRAL Model Law on Electronic Commerce (1996) and the UNCITRAL Convention on the Use of Electronic Communications in International Contracts (2005).

By providing for electronic data records and electronic transactions during the arbitration process, Articles 6 and 18 of the 2005 Convention clarify the position of on-chain arbitration. Questions of fairness and data security nonetheless arise in implementing this framework.
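As a rough illustration of the idea, the sketch below models in plain Python the escrow logic such a smart contract might encode, with the arbitrator's ruling triggering the transfer of funds; it is a conceptual model only, not contract code for any real blockchain:

```python
# Minimal sketch of the escrow logic a smart contract might encode for
# blockchain-based arbitration: funds stay locked until the designated
# arbitrator's ruling releases them. All names and amounts are hypothetical.
class ArbitrationEscrow:
    def __init__(self, buyer: str, seller: str, arbitrator: str, amount: int):
        self.buyer, self.seller, self.arbitrator = buyer, seller, arbitrator
        self.amount = amount          # funds locked on deployment
        self.released_to = None

    def rule(self, caller: str, winner: str) -> None:
        """Only the designated arbitrator can release the locked funds."""
        if caller != self.arbitrator:
            raise PermissionError("only the arbitrator may rule")
        if winner not in (self.buyer, self.seller):
            raise ValueError("winner must be a contracting party")
        self.released_to = winner     # the transfer follows automatically

escrow = ArbitrationEscrow("Buyer Co", "Seller Co", "Arbitrator X", 100_000)
escrow.rule("Arbitrator X", "Seller Co")
print(f"{escrow.amount} released to {escrow.released_to}")
```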

Legal issues:

Although the use of artificial intelligence (AI) in arbitration is a rapidly evolving area of the law, it also raises complex legal and ethical issues. While AI has the potential to improve the efficiency and accuracy of the arbitration process, concerns about confidentiality, bias, and decision-making must be addressed to ensure that it is used responsibly and fairly. Even default cases can be delicate and complex: many technical legal issues may arise, especially in an international setting, and such cases require particular attention from a judge or arbitrator, sometimes even sua sponte. In default cases, a judge or arbitrator must carefully consider the legal issues involved and ensure that the party seeking the default judgment has met all the legal requirements. This calls for a human judge or arbitrator with legal training and experience who can evaluate the evidence and apply the law correctly. While AI technology can help process large amounts of data and identify patterns, it cannot replace the judgment of a human judge or arbitrator. AI systems are programmed to analyze data and make predictions based on statistical models; they cannot interpret legal rules and principles or exercise discretion.

Moreover, AI systems are only as good as the data they are trained on and may produce biased or flawed results if the training data is incomplete or inaccurate. This can lead to errors and inconsistencies in legal decisions, which can have serious consequences for the parties involved. Therefore, even in simple cases, a human judge or arbitrator is essential to ensure that the legal requirements are met and that the parties receive a fair and just outcome. While AI technology can assist judges and arbitrators in processing and organizing data, it cannot replace the legal expertise and judgment of a human decision-maker.

This article underlines that a judge or arbitrator cannot be replaced by a robot, not even in simple cases.6

1. Confidentiality and Data Privacy

One of the key legal issues surrounding the use of AI in arbitration is confidentiality and data privacy. AI systems rely on large amounts of data to learn and make predictions. This data may include sensitive information, such as personal or financial data, that must be protected to ensure confidentiality and privacy. Parties to an arbitration may need to take steps to ensure that any data used by AI systems is anonymized or otherwise protected to prevent unauthorized access.
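By way of illustration, a minimal pseudonymization pass over a document might look like the sketch below; the patterns and replacement scheme are illustrative assumptions, not a complete anonymization solution:

```python
# Minimal sketch: pseudonymizing party names and e-mail addresses before any
# document is passed to an AI system. The patterns are illustrative only.
import re

def pseudonymize(text: str, party_names: list[str]) -> str:
    """Replace known party names and e-mail addresses with neutral tokens."""
    for i, name in enumerate(party_names, start=1):
        text = re.sub(re.escape(name), f"PARTY_{i}", text, flags=re.IGNORECASE)
    # Replace anything that looks like an e-mail address.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "EMAIL_REDACTED", text)
    return text

sample = "Acme Ltd (contact: jane.doe@acme.com) claims damages from Beta GmbH."
print(pseudonymize(sample, ["Acme Ltd", "Beta GmbH"]))
# -> "PARTY_1 (contact: EMAIL_REDACTED) claims damages from PARTY_2."
```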

As noted in a report by The Brookings Institution's Artificial Intelligence and Emerging Technology Initiative, "As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed."7

One example of conflict between AI and privacy law is the European Union's General Data Protection Regulation (GDPR), a comprehensive data privacy law that sets out rules for how the personal data of EU citizens may be collected, processed, and stored. The law aims to protect individuals' privacy and give them greater control over their personal data. However, AI systems often rely on vast amounts of data to function effectively, which can conflict with the GDPR's data privacy requirements. AI models may need to collect, store, and process personal data to learn and improve, which can create privacy concerns for individuals. Hence, it is important to ensure that AI systems used in arbitration comply with data protection laws and regulations such as the GDPR, which impose strict requirements on the processing of personal data.

Another potential conflict is the collection and processing of sensitive personal data. Data protection laws typically require explicit consent from individuals to collect and process sensitive personal data, such as health information or racial or ethnic data.

AI algorithms can require this type of data to train effectively, which can create difficulties for organizations looking to comply with data protection laws.

The right to access and rectify personal data might also be a point of concern. Data protection laws generally give individuals the right to access their personal data, request rectification or erasure, and object to its processing. However, AI algorithms can make decisions based on complex data sets, making it challenging for individuals to understand how their personal data is being used.

Finally, there is a conflict between data protection laws and AI when it comes to data security. Data protection laws require organizations to take measures to protect personal data from unauthorized access, disclosure, and destruction. However, AI algorithms rely on large data sets, making it difficult to keep them secure.

Hence, parties to arbitration must ensure that any AI systems used in the process comply with these laws and regulations, to protect the privacy and confidentiality of the individuals involved.

2. Bias in AI Systems

Another legal issue surrounding the use of AI in arbitration is the potential for bias in AI systems. The difficulty is that AI systems may be trained on biased data or built on biased algorithms, which can lead to unfair or unjust decisions in the arbitration process. NITI Aayog, in its 2020 report, noted: "While individual human decisions are not without bias, AI systems are of particular interest due to their potential to amplify its bias across a larger population due to large-scale deployment."8

Bias in training data can interfere with machine learning directly: if the training data is biased, the algorithm will simply reflect the existing bias by encoding and reproducing it.9 An oft-cited, real-life example involves Amazon. The U.S. tech giant's recruiting algorithm showed bias against women: it taught itself that male candidates were preferable and penalized women applicants, because most resumes submitted to the company came from men, reflecting the tech industry's male dominance.10

Data, therefore, can simply reflect current societal or historical imbalances stemming from race, gender, and ideology, producing outcomes that do not reflect true merit.11
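The mechanism can be shown in a few lines. In the sketch below, built on deliberately skewed synthetic data, a simple classifier trained on historical decisions that favoured one group reproduces that preference for two otherwise identical candidates:

```python
# Minimal sketch of how biased training data is encoded and reproduced,
# echoing the recruiting example above. The data is synthetic and deliberately
# skewed: past "hire" labels favour group A regardless of qualification.
from sklearn.linear_model import LogisticRegression

# Features: [qualification score, group indicator (1 = group A, 0 = group B)]
X_train = [[7, 1], [6, 1], [5, 1], [8, 0], [7, 0], [6, 0]]
y_train = [1, 1, 1, 0, 0, 0]  # historical decisions favoured group A

model = LogisticRegression().fit(X_train, y_train)

# Two equally qualified candidates who differ only in group membership:
print(model.predict([[7, 1], [7, 0]]))  # the model reproduces the skew: [1 0]
```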

Hence, it is important to implement appropriate measures to ensure that AI systems are free from bias or discrimination, for example by using algorithms designed to eliminate or reduce bias. It is equally important that any decisions made by AI systems are transparent and explainable; transparency and explainability are essential to a fair and impartial arbitration process. Parties to an arbitration may therefore need to ensure that decisions made by AI systems are subject to review and challenge by human arbitrators, and that the decision-making process is understandable.
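One concrete form such a review measure could take is a simple statistical check, such as the demographic parity gap sketched below; the predictions and group labels are illustrative assumptions:

```python
# Minimal sketch of one bias check that could support the review described
# above: comparing favourable-outcome rates across groups (demographic parity).
def demographic_parity_gap(predictions: list[int], groups: list[str]) -> float:
    """Difference in favourable-outcome rate between the groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

preds  = [1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical AI-assisted outcomes
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# 0.75 vs 0.25 -> gap of 0.50, a signal that the system needs human review
```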

3. Decision-Making in AI Systems

A third legal issue surrounding the use of AI in arbitration is the role of decision-making in AI systems. AI systems can be used to assist arbitrators in making decisions, but they cannot replace the judgment and expertise of human arbitrators. Parties to arbitration must ensure that any decisions made by AI systems are subject to review and oversight by human arbitrators, to ensure that the decisions are consistent with the applicable legal standards and principles.

It is important to ensure that AI systems used in arbitration are designed to complement, rather than replace, human decision-making.12 Parties to arbitration must ensure that any AI systems used in the process are designed and implemented responsibly and ethically, so that human judgment and expertise remain at the center of the process.

4. Licensing and Bar Requirements

The legal profession is one of the oldest and most respected professions in the world. Lawyers are trained to provide legal advice and representation to individuals, businesses, and government agencies. However, the practice of law is highly regulated, and in most countries, a license is required to practice law. A law license is a permit that authorizes a lawyer to practice law in a specific jurisdiction.

The primary reason for requiring a license to practice law is to protect the public. Legal matters can be complex and confusing, and lawyers are trained to help individuals navigate the legal system. Requiring a license also helps to maintain the integrity of the legal profession: practicing law requires specialized knowledge and training in legal principles and procedures, and is subject to ethical standards and professional obligations.

When it comes to AI, there are concerns that the lack of licensing and bar requirements may result in unqualified individuals or entities providing legal advice or representation using AI technology. This can have serious consequences for individuals or organizations that rely on this advice, as it may be inaccurate or misleading.

Additionally, AI technology is subject to biases and limitations that may affect its ability to provide accurate legal advice. For example, an AI system may not be able to account for the nuances of a particular legal case or the specific needs of a client.

Furthermore, the use of AI in legal practice raises questions about accountability and responsibility. If an AI system provides inaccurate or misleading legal advice, who is responsible for the consequences? Should the developer of the AI system be held liable, or the user who relied on the advice?

The case of DoNotPay:

Recently, DoNotPay, a US-based provider of legal services that developed an AI chatbot, was sued for allegedly giving the plaintiff "substandard" and "poor legal advice" on several occasions, including when drafting demand letters, independent contractor agreements, and small claims court filings. The plaintiff, Jonathan Faridian, thought he was purchasing legal documents and services that would be "fit for use from a lawyer that was competent to provide them". He says the robot's actions were "illegal", and he is suing for compensation.

To the dismay of its clients, DoNotPay is neither a robot, nor a lawyer, nor a law firm. Per court papers, DoNotPay lacks a law degree, is not barred in any jurisdiction, and is not managed by a lawyer. DoNotPay is simply a website with a collection of, regrettably, subpar legal documents that, at best, fill in the blanks of a legal ad-lib using data entered by customers.

From a legal perspective, this issue raises several interesting questions about the definition of "practicing law" and whether or not an artificial intelligence (AI) program can be considered a legal practitioner.

From an Indian legal background perspective, this issue raises several interesting questions about the use of technology in the legal profession and the regulations governing the practice of law in India. In India, the Advocates Act, 1961 regulates the legal profession and sets out the requirements for practicing law. The Act defines an advocate as a person who is enrolled with a State Bar Council and is entitled to practice law within the territory of India. It is a criminal offence for a person to practice law without being enrolled as an advocate under the Act.

The question of whether DoNotPay is practicing law without a license in India will depend on whether it meets the definition of practicing law under the Advocates Act, 1961. If the program is providing legal advice or services to users in India, it could potentially be considered practicing law without a license. However, it is important to note that the use of technology in the legal profession is not explicitly regulated by the Advocates Act, 1961.

In the United States, the practice of law is generally defined as providing legal advice or services to others. The specific definition of what constitutes the practice of law varies by state but typically includes activities such as representing clients in court, drafting legal documents, and providing legal opinions.

Hence, the question then arises as to whether a robot lawyer like DoNotPay, which is designed to provide legal advice and assistance to users, falls within the definition of practicing law. The answer to this question will likely depend on the specific facts of the case and the state in which the lawsuit is being brought.

Another important consideration is the role of legal ethics in the use of technology in the legal profession. In India, advocates are required to adhere to a strict code of ethics, which includes duties to clients, the court, and the profession. As technology continues to play an increasing role in the legal profession, it will be important to ensure that these ethical obligations are upheld, even in the context of AI-powered programs.

One argument in favour of the position that DoNotPay is practicing law is that the program is providing legal advice and assistance to users, which is one of the core functions of a licensed attorney. Additionally, some states have specific rules that prohibit non-lawyers from providing legal advice or assistance, which could potentially apply to DoNotPay.

On the other hand, it may be argued that DoNotPay is not practicing law but rather providing a form of legal information or self-help. Additionally, it may be argued that since DoNotPay is a computer program and not a human being, it cannot be considered a legal practitioner and is not subject to the same licensing requirements as human attorneys.

Ultimately, the outcome of this lawsuit will depend on the specific facts of the case and the legal arguments presented by both sides. It is an interesting issue that highlights the ongoing legal and ethical debates surrounding the use of artificial intelligence in the practice of law.

This is the reason every state in the country regulates the practice of law. More often than not, people who seek legal assistance do not completely comprehend the law, its implications, or the legal documents or procedures they are asking DoNotPay for assistance with. Therefore, it is important to establish licensing and bar requirements for individuals or entities that provide legal advice or representation using AI technology. Requiring a license to practice law is essential to protecting the public and maintaining the integrity of the legal profession. Licensing can help ensure that individuals who rely on such advice receive accurate and reliable information, and that legal professionals using AI technology are held to the same ethical standards and professional obligations as other legal practitioners.

Additionally, regulators and professional organizations may need to develop standards and guidelines for the use of AI in legal practice, to ensure that AI systems are developed and deployed responsibly and ethically.

Conclusion

The use of AI in arbitration is a rapidly evolving area of the law that raises complex legal and ethical issues. Parties to arbitration must take steps to ensure that any AI systems used in the process comply with data protection laws and regulations, are free from bias or discrimination, and are subject to review and oversight by human decision-makers. As noted by various authors, transparency, explainability, and human oversight are key to ensuring that the use of AI in arbitration is fair and impartial.

As AI technology continues to evolve, the legal and ethical issues surrounding its use in arbitration will likely continue to evolve as well. It is important for parties to an arbitration to stay up-to-date on the latest developments in this area of the law, and to take appropriate measures to ensure that any AI systems used in the process are designed and implemented responsibly and ethically.

Overall, the use of AI in arbitration has the potential to significantly improve the efficiency and accuracy of the arbitration process, while also raising important legal and ethical issues. By carefully considering these issues and implementing appropriate measures to address them, parties to an arbitration can help ensure that the use of AI in the process is fair, impartial, and in compliance with applicable laws and regulations.

Footnotes

1 Dalvinder Singh Grewal, 'A Critical Conceptual Analysis of Definitions of Artificial Intelligence as Applicable to Computer Engineering', IOSR Journal of Computer Engineering (IOSR-JCE), e-ISSN: 2278-0661, p-ISSN: 2278-8727, Vol. 16, Issue 2, Ver. I (Mar-Apr 2014), pp. 09-13, www.iosrjournals.org.

2 John McCarthy, 'What is AI? / Basic Questions', http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html.

3 See H.J. Snijders, Arbitrage en AI: Van arbitrage naar robotrage en van menselijke arbiter naar robotarbiter?, Tijdschrift voor Arbitrage 3 (2019).

4 Gabrielle Kaufmann-Kohler, Thomas Schultz, 'Online Dispute Resolution: Challenges for Contemporary Justice' (Kluwer Law International, 2004) p. 27.

5 NITI Aayog, 'Designing the Future of Dispute Resolution: The ODR Policy Plan for India' (2021), Report of the NITI Aayog Expert Committee on ODR, accessed at https://www.niti.gov.in/sites/default/files/2021-11/odr-report-29-11-2021.pdf.

6 Henk Snijders, 'Arbitration and AI, from Arbitration to "Robotration" and from Human Arbitrator to Robot' (2021) 87(2) Arbitration: The International Journal of Arbitration, Mediation and Dispute Management, pp. 223-242, https://kluwerlawonline.com/journalarticle/Arbitration:+The+International+Journal+of+Arbitration,+Mediation+and+Dispute+Management/87.2/AMDM2021017.

7 Cameron F. Kerry, 'Protecting Privacy in an AI-Driven World' (2020), Report of The Brookings Institution's Artificial Intelligence and Emerging Technology Initiative.

8 NITI Aayog, 'Approach Document for India Part 1 - Principles for Responsible AI', (2020): accessed at https://www.niti.gov.in/sites/default/files/2021-02/Responsible-AI-22022021.pdf

9 E.g., Jason R. Bent, Is Algorithmic Affirmative Action Legal?, 108 GEO. L.J. 803, 812 (2020); Bob Lambrechts, May It Please the Algorithm, 89 J. KAN. B. ASS'N, Jan 2020, at 36, 41; Cofone, supra note 88, at 1404; Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633, 680 (2017); Solon Barocas & Andrew D. Selbst, Big Data's Disparate Impact, 104 CAL. L. REV. 671, 680-81 (2016).

10 Jeffrey Dastin, Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women, REUTERS, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G (Oct. 9, 2018).

11 Gizem Halis Kasap, 'Can Artificial Intelligence ("AI") Replace Human Arbitrators? Technological Concerns and Legal Implications', 2021 J. Disp. Resol. (2021), available at https://scholarship.law.missouri.edu/jdr/vol2021/iss2/5.

12 Harry Surden, 'Machine Learning and Law' (2014) 89 Washington Law Review 87, 105.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.