The term "Artificial Intelligence" is not helpful to our public discourse. Artificial Intelligence is not intelligent. The term encompasses too much, is poorly defined, and therefore can't be discussed precisely.

But it is important for policy-makers to understand what they are encouraging or prohibiting. Passing a law to "restrict artificial intelligence" is a dangerous exercise under current definitions.

Different functions of artificial intelligence create different problems for law and society. Generative AI creates not only new text, code, audio and video, but also problems with deepfakes, plagiarism and falsehoods presented as convincing fact. AI that predicts whether a prisoner is likely to commit future crimes raises issues of bias, fairness and transparency. AI operating multi-ton vehicles on the road creates physical risks to human bodies. AI that masters the game of chess may not raise any societal issues at all. So why would politicians and courts treat them all the same?

They shouldn't, but if people don't understand the distinctions between functional types of artificial intelligence, they won't be able to make sensible rules. Our language is holding us back. We need to think differently about AI before we can treat it.

There are useful arguments for defining artificial intelligence by the process used to create it. Large language models and other models built by shoveling tons of data into the maw of a machine-learning algorithm may be the purest form of AI. But politicians don't understand the distinction between these models and other forms of functioning code, and they shouldn't need to. Politicians don't care how an AI model is built; they only care what it does to (or for) their constituents.

Some of what we think of as AI amounts to nothing more than complex versions of traditional computational algorithms. Standard big-data mining can seem miraculous, but no machine-learning models are needed to produce the desired results. And yet, when regulators discuss strapping restrictive rules onto AI, those rules would sweep in standard algorithms as well.

Science fiction writer Ted Chiang has called artificial intelligence "a poor choice of words in 1954," preferring to describe our current technologies as "applied statistics." He has also observed that humanized language for computer activities misleads our thinking about amazing but deeply limited tools, like effective weather predictors and art generators. There is sorting, excluding, selecting and predicting in these applied statistical processes, but no context, thinking or intelligence in the human sense.

So whether our problem is an understandable-but-unfortunate humanization of these models, imprecise thinking about which types of technology constitute AI, or the lumping together of disparate functionalities into a single unmanageable term, we are harming the discourse – and our ability to diagnose and treat dysfunction – by using the term "artificial intelligence" as we do now.

If we wish to police AI, our society needs to define and discuss AI precisely.

In the explosion of commentary surrounding generative AI, hand-wringing about singularities devolved into an oft-expressed desire to regulate and otherwise "build guardrails" for AI. Society's protectors, elected and otherwise, believe that we must stop AI before AI stops us, or at least before our use of AI foments foreseeable harm to populations of innocents.

What we casually call AI right now is a set of computerized and database-driven functionalities that should not be considered – and certainly should not be regulated – as a single unit under a single rule. AI consists of too many tools raising too many separate and unrelated societal problems. Instead, if we wish to legislate AI effectively, we should break the definition into functional categories, each raising similar issues for the people affected by the technology in that category.

I propose a modest organizational scheme below to help lawyers, judges, legislators and regulators 1) grasp the present state of AI and 2) design rules to regulate the functions of machine-learning models. Some of these lines blur, and certain technical or social problems are shared across classifications, but thinking of current AI solutions in legally significant functional categories will simplify effective rulemaking.

Each of these categories presents a unique set of problems. Legislators and regulators should be thinking of AI in terms of the following functions.

Decisioning AI: This category is defined not by technology but by the technology's effect on people. Algorithmic tools are used to limit or expand the options available to individuals. AI ranks resumes for human resources managers, highlights who should be interviewed for jobs, and evaluates the reactions of applicants in those interviews. For years, algorithms have made prison parole recommendations, sorted loan applicants, and denied suspicious credit transactions. These decisions are subject to various types of bias and raise issues of transparency, accuracy and reliability. The EU and some US states have already regulated this category of AI. The Chinese government has elevated these tools into a societal scoring system that can affect every aspect of a citizen's life and freedom.
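
For readers who want a concrete picture of how mundane such a tool can be, here is a deliberately simplified sketch. The feature names, weights and shortlist threshold are invented for illustration; no real hiring system is this simple, but the legal concerns (hidden weights, penalties that encode bias) are already visible.

```python
# Hypothetical sketch of a Decisioning AI tool: scoring and ranking job
# applicants with a fixed set of weighted features. All features, weights
# and the cutoff below are invented for illustration only.

APPLICANTS = [
    {"name": "A", "years_experience": 7, "gap_in_employment": 1, "degree": 1},
    {"name": "B", "years_experience": 3, "gap_in_employment": 0, "degree": 0},
    {"name": "C", "years_experience": 5, "gap_in_employment": 2, "degree": 1},
]

WEIGHTS = {"years_experience": 0.5, "gap_in_employment": -1.0, "degree": 2.0}

def score(applicant):
    # A linear score: each feature is multiplied by a weight chosen by the
    # tool's builder (or learned from historical hiring data, which is
    # where bias can creep in).
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

# Rank the applicants and "decide" who gets an interview.
ranked = sorted(APPLICANTS, key=score, reverse=True)
shortlist = [a["name"] for a in ranked if score(a) > 3.0]
print("Interview shortlist:", shortlist)
```

Even this toy scorer shows why transparency matters: the penalty for an employment gap is a value judgment buried in a number, invisible to the person it affects.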

Personal Identifying AI: Also defined by its effects on humans, certain AI is used to pick an individual out of a crowd and name them, to determine whose fingerprint was pulled from a crime scene, or to identify a person carrying a specific phone over specific geography. Most Personal Identifying AI uses biometric readings from the face, voice, gait or other traits tied to our bodies, but some is behavioral, drawing on geolocation patterns, handwriting and typing. This type of AI has been implicated in Constitutional search and seizure issues, criticized for findings biased against certain ethnic groups, and questioned for its trustworthiness.
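
The core mechanism is matching a fresh biometric reading against a database of enrolled people. The sketch below is a toy version with invented three-number "face vectors" and names; real systems use much larger learned representations, but the match threshold – where false identifications trade off against missed ones – is the same point of legal pressure.

```python
# Toy sketch of Personal Identifying AI: matching a biometric reading
# against enrolled records. The vectors, names and threshold are invented.
KNOWN_PEOPLE = {
    "Alice": (0.12, 0.80, 0.35),
    "Bob":   (0.90, 0.15, 0.60),
    "Carol": (0.40, 0.40, 0.95),
}

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(reading, threshold=0.2):
    # Find the closest enrolled person; refuse to answer if nobody is close
    # enough. Where the threshold sits determines how often the system
    # names the wrong person versus naming no one at all.
    name, vector = min(KNOWN_PEOPLE.items(),
                       key=lambda item: distance(item[1], reading))
    return name if distance(vector, reading) <= threshold else None

print(identify((0.13, 0.78, 0.36)))  # -> "Alice"
print(identify((0.50, 0.50, 0.50)))  # -> None (no confident match)
```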

Generative AI: By predicting the output requested by a series of prompts, certain AI tools build a word-by-word or pixel-by-pixel product that can mimic (or copy) human-looking creations. This can mean working software code and functioning websites, art in the vernacular of Jan van Eyck, papers discussing the use of symbolism in The Scarlet Letter, or legal arguments in a contract litigation. These products raise intellectual property and plagiarism questions, and the models behind them can be trained on improperly obtained material. The technology can produce deepfakes that are indistinguishable from actionable proof. When it works poorly, it can generate absolute nonsense presented as true fact.
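
The word-by-word prediction described above can be illustrated with a toy sketch. The tiny probability table below is invented; a real generative model learns probabilities over an enormous vocabulary from its training data. The point the sketch makes is the legally important one: the model predicts what is likely to come next, not what is true.

```python
import random

# Toy sketch of word-by-word generation. The probability table is invented
# and stands in for the learned weights of a real language model.
NEXT_WORD_PROBABILITIES = {
    "the":   {"court": 0.4, "contract": 0.3, "defendant": 0.3},
    "court": {"held": 0.6, "found": 0.4},
    "held":  {"that": 1.0},
    "that":  {"the": 1.0},
}

def generate(prompt_word, length=8):
    words = [prompt_word]
    for _ in range(length):
        choices = NEXT_WORD_PROBABILITIES.get(words[-1])
        if not choices:
            break
        # Sample the next word according to its probability. Fluency without
        # any check for truth is why confident nonsense is a built-in risk.
        next_word = random.choices(list(choices),
                                   weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```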

Physical Action AI: Driverless vehicles operate on the interplay between sensors and predictive algorithms, and so do many industrial and consumer technologies. These are systems that use AI to act in the physical world. This category can include running a single machine, like a taxi, or managing traffic systems for millions of vehicles or Internet of Things devices. Legal concerns involve not only safe product design and manufacture, but also tort law, insurance issues, and blame-shifting contracts that affect all activity where the laws of physics and moving bodies apply.
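
The "interplay between sensors and predictive algorithms" is, at bottom, a loop: sense, predict, act, repeat. The sketch below is a toy following-distance controller with invented numbers and an invented braking rule; it is nowhere near a real autonomous-driving stack, but it shows where a design defect (a bad prediction or a bad threshold) becomes a physical harm.

```python
# Toy sketch of Physical Action AI: a sense -> predict -> act control loop.
# The prediction rule, thresholds and sensor values are invented.

def predict_gap_in_2s(gap_m, own_speed_mps, lead_speed_mps):
    # Naive constant-speed prediction of the following gap two seconds ahead.
    return gap_m + (lead_speed_mps - own_speed_mps) * 2.0

def control_step(sensor_reading):
    predicted_gap = predict_gap_in_2s(sensor_reading["gap_m"],
                                      sensor_reading["own_speed_mps"],
                                      sensor_reading["lead_speed_mps"])
    # Act on the prediction: brake if the predicted gap is unsafe.
    if predicted_gap < 10.0:
        return "brake"
    if predicted_gap > 40.0:
        return "accelerate"
    return "hold speed"

reading = {"gap_m": 18.0, "own_speed_mps": 22.0, "lead_speed_mps": 14.0}
print(control_step(reading))  # -> "brake"
```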

Differentiating AI (Data Analytics): Some algorithms simply decide which items should be included in or excluded from a specific group. While this sounds like a simple task, sheer numbers and/or complexity can make the work impossible for humans. Differentiating AI can spot a growing cancer from a shadow on an X-ray more effectively than teams of trained radiologists. It can predict which cell in a storm may drop a tornado. It helps decide which picture shows a cat and which shows a dog. Both Decisioning AI and Personal Identifying AI are legally significant subgroups of Differentiating AI, but I propose that this classification exclude tools designed to identify people or make subjective decisions about people, covering instead factual groupings that may lead to real-world consequences.
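
At its simplest, differentiation is classification: compare a new item to labeled examples and assign it to the closest group. The sketch below uses two invented features and a handful of invented examples to sort "cat" from "dog"; real diagnostic or meteorological classifiers work on vastly more data, but the shape of the task is the same.

```python
# Toy sketch of Differentiating AI: a nearest-neighbor classifier that sorts
# items into groups. The features (ear length, snout length, arbitrary
# units) and labeled examples are invented for illustration.
LABELED_EXAMPLES = [
    ((3.0, 2.0), "cat"),
    ((2.5, 1.8), "cat"),
    ((6.0, 7.5), "dog"),
    ((5.5, 8.0), "dog"),
]

def classify(features):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    # Assign the label of the most similar known example.
    _, label = min(LABELED_EXAMPLES, key=lambda ex: distance(ex[0], features))
    return label

print(classify((2.8, 2.1)))  # -> "cat"
print(classify((5.9, 7.0)))  # -> "dog"
```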

Strategizing AI: We all know about predictive machine-learning programs that mastered human strategy games like chess by playing millions of games. Strategizing AI makes predictions by running simulations, and humans use it to choose effective strategies in the situations those simulations model. Strategizing AI raises concerns about the accuracy and reliability of its predictions.
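
The simulation-driven approach can be shown with a toy example. The "game" below is an invented number race, not chess, and the simulation counts are arbitrary; the point is the method: try each candidate move, simulate many possible futures, and pick the move that wins most often. The predictions are only as reliable as the simulations behind them.

```python
import random

# Toy sketch of Strategizing AI: pick a move by simulating many random
# games from each option. The game, moves and counts are invented.

def simulate_game(my_score, opponent_score):
    # Play random turns until someone reaches 20; return True if "we" win.
    while True:
        my_score += random.randint(1, 6)
        if my_score >= 20:
            return True
        opponent_score += random.randint(1, 6)
        if opponent_score >= 20:
            return False

def choose_move(current_score, opponent_score, moves=(1, 3, 5), simulations=2000):
    # Estimate each move's win rate by simulation, then pick the best one.
    def win_rate(move):
        wins = sum(simulate_game(current_score + move, opponent_score)
                   for _ in range(simulations))
        return wins / simulations
    return max(moves, key=win_rate)

print(choose_move(current_score=10, opponent_score=12))
```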

Military AI: This category is an amalgam of the other functional types listed here: the military builds its own strategizing models, decisioning models and physical action AI. These tools raise some of the same issues as their civilian versions, but the overlay of special purposes and the law of war creates a unique category of considerations for military AI. As with most military tools and strategy, the rules for military AI will appear in international treaties and informal agreements between nation-states. Civilian authorities are unlikely to develop effective limitations on the military's use of AI and algorithmic tools.

Automating AI: Certain AI models automate processes within business or government without making decisions about the opportunities available to specific people. Automating AI may streamline an accounting system, research the case law on burglary, or provide a system of forms for running a company. It can replace human workers in some tasks, and therefore has a social impact.

AI exists in too many forms and functionalities for one rule to govern, so attempting to regulate the entire set of technologies would be overreaching and likely ineffective. The categories above provide a safer place to start if we wish to regulate a vast and shifting technology. Adopting this way of thinking makes AI management less daunting and more effective.

First published in Business Law Today 9/15/2023.
