This article was not written by ChatGPT. Will all articles have to start with a statement like this? And will any statement like this be true?

ChatGPT uses artificial intelligence, or AI, to develop written work product. While this application of AI has grabbed headlines, there are many other exciting applications of AI, including in the domain of life sciences.

In this article, we start by defining AI in the context of data, algorithms and AI systems. Next, we touch on leading regulatory efforts in the U.S. and abroad, followed by a brief overview of some key issues in compliance. After that, we assess the intersection of AI and intellectual property law. And finally, we mention some of the applications of AI in life sciences.

Artificial Intelligence

AI starts with big data: large data sets, often drawn from multiple sources, that include a substantial number of entries, or rows, each with many attributes, or columns.

All of this data is analyzed in models that are used to explain, predict or influence behavior. Generally, models become more accurate when developed using more data, although the relationship between model accuracy and the amount of data is often nonlinear.
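
To illustrate, the short sketch below (assuming Python with scikit-learn, with synthetic data standing in for a real big data set) trains the same model on progressively larger slices of data; accuracy typically improves, but with diminishing returns.

```python
# Minimal sketch: model accuracy vs. training-set size (synthetic data).
# Assumes scikit-learn is installed; the data and model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "big data" set: 20,000 rows (entries), 20 columns (attributes).
X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger slices; accuracy gains typically flatten out.
for n in [100, 500, 2_000, 10_000]:
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(n, round(accuracy_score(y_test, model.predict(X_test)), 3))
```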

The Organization for Economic Cooperation and Development defines an AI system as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments."

AI systems are designed to operate with varying levels of autonomy: they may perform human-like tasks without significant oversight, or they may learn from experience and improve performance as they are exposed to more data sets.

Frequently, an AI algorithm produces a model from a big data set over time, and that model can be used as a standalone predictive device. Naturally, the output of AI will be only as good as the input data sets.

Machine learning is a subset of AI. Machine learning is an iterative process of modifying algorithms (step-by-step instructions) to better perform complex tasks over time.

In other words, machine learning applies an algorithm to improve an original algorithm's performance, often checking the output of an analysis in the real world and using the output to iteratively refine the analysis for future inputs. Effectively, machine learning evolves the original algorithm based on analysis of additional inputs.
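
As a rough illustration of this iterative loop, the following sketch (assuming Python with scikit-learn; the data is synthetic) updates a model one batch at a time and checks its performance on held-out "future" inputs after each refinement.

```python
# Minimal sketch of iterative refinement: the model is updated batch by batch
# as new inputs (with observed outcomes) arrive. Illustrative data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_new, y_new = X[4_000:], y[4_000:]          # held-out "future inputs"
model = SGDClassifier(random_state=0)
classes = np.unique(y)

# Each pass refines the model on another batch, then checks real-world-style
# performance on data the model has not yet seen.
for start in range(0, 4_000, 1_000):
    batch = slice(start, start + 1_000)
    model.partial_fit(X[batch], y[batch], classes=classes)
    print(round(accuracy_score(y_new, model.predict(X_new)), 3))
```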

The AI Regulatory Landscape

AI systems analyze large data sets and produce predictions and recommendations that often have real-world impact in areas as varied as hiring, fraud prevention and drug discovery. This breadth of applications has attracted significant attention from policymakers and regulators, and as a result the AI-focused legal and regulatory landscape is changing quickly.

At the state level, bills or resolutions relating to AI have been introduced in at least 17 states in 2022. However, only a few states enacted laws in 2022 — just Colorado, Illinois, Vermont and Washington — and each was focused on a narrow application of AI.

While there is currently no horizontal federal regulation of AI, many generally applicable laws and regulations apply to AI, including in many life sciences contexts. These include the Health Insurance Portability and Accountability Act, which protects personal health data; Federal Trade Commission regulations against unfair or deceptive trade practices; and the Genetic Information Nondiscrimination Act, which prevents requesting genetic information in some cases.

Federal regulatory efforts on AI are focused on sector-specific regulations, voluntary standards and enforcement.

As an example of sector-specific regulations, the U.S. Food and Drug Administration has rules regarding medical devices that incorporate AI software, which are intended to ensure the safety of those devices.

As an example of voluntary standards, the National Institute of Standards and Technology is finalizing a framework to better manage risks to individuals, organizations and society associated with AI. The NIST risk management framework represents the U.S. government's leading effort to provide guidance for the use of AI across the private sector.

The FTC has indicated an interest in pursuing enforcement action based on algorithmic bias and other AI-related concerns, including models that reflect existing racial bias in health care delivery. Relatedly, the White House Office of Science and Technology Policy has created a blueprint for an AI Bill of Rights, citing health as a key area of concern for AI systems oversight.

Outside the U.S., the AI regulatory landscape is also developing rapidly.

For example, the European Union is finalizing the Artificial Intelligence Act, which would regulate AI horizontally — across all sectors — and is likely to have a significant global impact, much like what occurred with privacy laws.

The EU approach focuses on high-risk applications of AI, which may include applications in life sciences and related fields. Further, the U.S. and EU, through the U.S.-EU Trade and Technology Council, have developed a road map that aims to guide approaches to AI risk management and trustworthiness based on a shared dedication to democratic values and human rights.

Key Issues in AI Compliance

AI raises a number of key issues for compliance, including: (1) transparency and accountability, or human in the loop; (2) fairness and bias; (3) explainability and interpretability; (4) safety, security and resiliency; (5) reliability, accuracy and validity; and (6) privacy.

We will briefly discuss the first three of these issues in this article. Human in the loop refers to a human playing a role after AI makes a recommendation but before that recommendation is carried out in the real world.

In life sciences, it is critical to include humans in the process regardless of the regulatory requirements. For example, humans review AI drug discovery output and test that output in a wet laboratory to evaluate the AI output and improve AI's predictions.

Bias in AI means unwanted, unintended or unfair assumptions or prejudices built into AI systems, often deriving from algorithms or data. Developers of AI systems should understand and evaluate for bias because bias limits AI's accuracy and efficacy and creates compliance and reputational challenges. Because data may not be neutral, bias may result from data collection practices.

For example, Winterlight Labs, the developer of an Alzheimer's detection model that used speech recordings, later discovered that its technology was accurate only for English speakers of a specific Canadian dialect as a result of the training data that it used. Bias in the data may result in bias in the AI.
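
A minimal version of such a bias check is sketched below with entirely hypothetical data and subgroup labels (assuming Python with scikit-learn and NumPy): it simply compares model accuracy across subgroups, where a large gap is a warning sign that the training data may not represent all groups.

```python
# Minimal sketch of a bias check: compare model accuracy across subgroups
# (e.g., speaker dialect). Data, labels and group names are hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                # observed outcomes
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])                # model predictions
group = np.array(["dialect_a"] * 4 + ["dialect_b"] * 4)    # subgroup labels

# A large accuracy gap between subgroups signals possible training-data bias.
for g in np.unique(group):
    mask = group == g
    print(g, round(accuracy_score(y_true[mask], y_pred[mask]), 3))
```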

Explainability in AI means the ability to evaluate what output the AI system produces and to identify at least some of the reasons for that output. Developers should be able to explain why certain data was or was not used, and how a model predicts outputs from the inputs.
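
One common explainability technique, shown in the sketch below, is permutation importance: shuffling one input attribute at a time and measuring how much the model's performance degrades. This is an illustrative approach (assuming Python with scikit-learn and synthetic data), not a universal standard.

```python
# Minimal sketch of explainability: permutation importance scores how much
# each input attribute drives the model's predictions. Illustrative data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling an important feature hurts accuracy; an unimportant one does not.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```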

Intellectual Property Rights in AI

The major categories of intellectual property are patents, trademarks, trade secrets and copyrights.

A patent protects novel inventions by giving the patentee exclusivity for that invention. A trademark protects branding by ensuring that only the owner can use a mark for a particular field. A trade secret is information that has independent economic value by not being generally known. A copyright protects original works of authorship such as books and music, as well as software.

Algorithms and models will often be protected as trade secrets. The algorithm of most major search engines is generally protected as a trade secret. If a company sought to protect the search engine algorithm with a patent, the algorithm would have to be published in the patent application. This would allow the public to see the algorithm described in detail and would enable copying.

But because the source code of a search engine cannot easily be reviewed, a patentee would struggle to determine whether a competitor used the patented search algorithm without permission.

AI concepts may be eligible for patent protection, but the U.S. Supreme Court's 2014 decision in Alice Corp. v. CLS Bank International requires something more than abstract ideas when seeking a software patent.

Therefore, a pure software algorithm will be hard to patent, but an application of AI with a physical-world impact may be patentable. The output of AI, such as discovery of a novel drug, should be patentable if the output otherwise qualifies as patentable.

However, courts have been skeptical of naming an AI system as the inventor of a patent; indeed, the U.S. Court of Appeals for the Federal Circuit has confirmed that an AI system cannot be the sole inventor of a patent.

Similarly, the U.S. Copyright Office has determined that creative works authored by AI are not eligible for copyright protection. While authorities in the U.S. and EU have rejected patent applications citing AI as the sole inventor, South Africa and Australia have ruled that AI can be considered an inventor on patent applications.

In business transactions involving AI, ownership and rights to use the AI system are generally divided among the parties in a few ways. Many software or service providers want the right to use data to improve their services or to improve their AI.

Three generic models in AI business transactions are (1) a service model; (2) a model rights approach; and (3) an algorithm rights approach. A service model refers to the AI provider running AI as a service while the customer provides input. The AI provider provides the output and some rights to use output to the customer. A model rights approach means the customer provides input into the AI, while the AI provider develops and refines the model.

Once the model is complete, the customer gets rights to use the model, but not the underlying AI or algorithm. The algorithm rights approach allows the AI provider to retain ownership of the underlying AI, while the customer receives some rights to the algorithm.

In the life sciences context, most AI providers will propose a service model, where the AI provider delivers output and rights to use the output. AI providers in life sciences aim to apply their AI neutrally, to all potential customers, and to refine their AI system using varied inputs.

Life Sciences Applications of AI

AI in Drug Discovery

Currently, it is possible to obtain massive data sets of small molecule interactions with target proteins, for example, using DNA-encoded libraries. Eventually, this might be possible with peptides as well.

Applying AI to these massive data sets of interactions makes possible, for example, the discovery of novel molecules.

Recently, ZebiAI Therapeutics Inc. applied machine learning to data sets that were the output of DNA-encoded library screens. The AI output could be used to predict novel small molecule targets. Human beings still play an important role, including wet lab testing to confirm and refine results from AI-based analysis.
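
The sketch below is a heavily simplified stand-in for this kind of workflow, not ZebiAI's actual method: it trains a classifier on randomly generated, fingerprint-like features to predict screen hits, assuming Python with scikit-learn and NumPy.

```python
# Minimal sketch, not ZebiAI's actual method: train a classifier on
# fingerprint-like binary features to predict whether a small molecule
# binds a target protein. All data here is randomly generated stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2_000, 512))   # stand-in molecular fingerprints
y = rng.integers(0, 2, size=2_000)          # stand-in screen hit/no-hit labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(model, X, y, cv=3).mean())  # ~0.5 on random labels

# In practice, top-scoring predictions would go to wet lab confirmation,
# and confirmed results would feed back to refine the model.
```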

AI in Clinical Trials

Later stage (Phase II or Phase III) clinical trials have substantial data sets with data at the individual level. AI can assess historical data to predict outcomes such as (1) whether there are subpopulations of patients with better outcomes, (2) how adverse events are distributed, and (3) which subject characteristics are associated with better outcomes.
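
One simple way to look for such subpopulations, sketched below with hypothetical stand-in data (assuming Python with scikit-learn and NumPy), is to cluster subjects on baseline characteristics and then compare outcome rates across clusters.

```python
# Minimal sketch: cluster trial subjects on baseline characteristics, then
# compare outcomes across clusters to flag candidate subpopulations.
# Data and column meanings are hypothetical stand-ins for real trial data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 4))        # e.g., age, biomarker levels
outcome = rng.integers(0, 2, size=300)      # 1 = positive outcome

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))

# Clusters with markedly higher outcome rates merit closer statistical review.
for k in range(3):
    print(k, round(outcome[labels == k].mean(), 3))
```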

AI in Genomics

AI has improved our understanding of patterns in a genome, i.e., an organism's complete set of DNA. Next-generation sequencing can efficiently determine the order of the basic structural units of DNA, gathering genetic data rapidly from individuals and driving the price of whole genome sequencing down to as little as $2,000, with further price declines expected.

Applying AI to the massive genomic data sets that become increasingly available as the price of sequencing drops may improve predictions about who may develop a disease or whether certain actions may reduce the risk of disease.
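
As a purely illustrative sketch of this idea (assuming Python with scikit-learn and NumPy, with randomly generated stand-in data), a simple model can map per-person variant counts to a disease-risk estimate.

```python
# Minimal sketch: logistic regression on genotype counts (0, 1 or 2 copies
# of a variant allele per site) to predict disease risk. Hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(1_000, 50))   # 1,000 people, 50 variants
disease = rng.integers(0, 2, size=1_000)           # stand-in disease status

X_train, X_test, y_train, y_test = train_test_split(
    genotypes, disease, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# predict_proba yields a per-person risk estimate between 0 and 1.
print(model.predict_proba(X_test[:3])[:, 1].round(3))
```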

Conclusion

While ChatGPT has grabbed the headlines by being able to write short essays, AI has many other applications, including making a real difference in the life sciences industry.

The opportunities are enormous. While AI innovation has outpaced regulation, development and use of AI systems are not without challenges, including compliance and reputational challenges.

Companies should focus compliance and due diligence on managing the features and risks of AI. Companies must also stay abreast of regulatory developments and prepare for how new laws and policies will have a direct impact on their development and use of AI-based technologies.

Ariel Soiffer is a partner and Elijah Soko is an associate at WilmerHale.

Paul Lekas is the head of global public policy at Software & Information Industry Association.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.