On January 9, Georgia State Representative Mandisha Thomas introduced HB 887 (the Bill), which would amend certain provisions of the Official Code of Georgia Annotated to regulate the use of artificial intelligence (AI) and other automated decision-making tools in determining outcomes related to healthcare, insurance coverage, and public assistance.

The Bill defines “Artificial Intelligence” broadly as any “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing a real or virtual environment.” While this definition would likely capture the large language models that have been the focal point of media attention on AI, such as ChatGPT, it would also appear to cover more basic algorithms designed to determine outputs based on the characteristics of specific inputs.

With respect to healthcare companies, the Bill requires that any decision concerning healthcare and resulting from the use of AI be meaningfully reviewed by an individual with the authority to override the decision, with “healthcare” broadly defined as “any care, treatment, service, or procedure to maintain, diagnose, treat, or provide for an individual's physical or mental health or personal care.” It is not clear what factors determine whether a review is “meaningful.” The Bill imposes similar requirements, and raises similar concerns, for decisions regarding insurance coverage and the award and payment of public assistance.

The Bill also grants the Georgia Composite Medical Board the authority to promulgate rules to implement the Bill, including provisions governing disciplinary measures for physicians who fail to comply with its terms.

In effect, the Bill would severely limit the ability of healthcare and insurance companies to use AI to streamline coverage and medical determinations, since any such determination, other than an insurance determination in favor of the insured, would require meaningful human review.

It is expected that other states will follow suit with similar legislation, and federal agencies have continued to issue guidance relating to the use of AI (see the FAQ issued by CMS on February 6, 2024, regarding the use of algorithms or AI to make Medicare Advantage coverage decisions). As such, healthcare companies and insurers using AI should be prepared to implement procedures to ensure that human review of AI-based determinations is part of the final decision-making process.

Insurance companies, in particular, should already be exercising caution when using AI to deny claims without additional human review. Multiple insurers are currently facing class action lawsuits over such use, and because case law regarding AI is just beginning to develop, it is not clear how courts will weigh in on this issue, or, perhaps more importantly, what damages and liability insurance companies utilizing AI may face.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.