"While AI service providers may not be directly responsible for the content created using their technology, they could potentially face legal consequences if they fail to implement safeguards or knowingly facilitate harmful activities."

Voice cloning, a technology that uses artificial intelligence (AI) to replicate a human voice, presents both exciting possibilities and legal challenges. Recent machine-learning advances have made it possible to imitate a person's voice with only a few seconds of recorded speech as training data.

It's a development that brings exciting possibilities for personalized and immersive experiences, such as creating realistic voiceovers for content, lifelike personal assistants and even preserving the voices of loved ones for future generations. But it's also rife with potential for abuse, as it could easily be used to commit fraud, spread misinformation and generate fake audio evidence.

The speed at which AI – and especially voice cloning – is advancing has therefore made some technology experts uncomfortable. For example, an open letter signed by more than 30,000 people, including Elon Musk and Apple co-founder Steve Wozniak, called on AI labs to pause work on advanced AI systems while shared safety protocols are jointly developed.

What is clear is that greater legal clarity is needed to protect intellectual property rights in individuals' voices. Here's how those issues are playing out and what could be on the horizon.

Existing Laws and Potential Legislation

While existing laws that protect privacy, prevent fraud and regulate consent may apply to voice cloning, new safety protocols may also require drafting laws that address challenges unique to the technology.

Existing invasion of privacy tort laws regulate the collection, use and dissemination of personal information, including voice data. For example, defamation and false light causes of action may be applicable to instances of voice cloning; unlike defamation, false light does not require the victim to prove reputational damages. However, the false light privacy tort is not currently recognized in all jurisdictions, including Florida and New York, and such a cause of action may only be actionable by the party whose voice is misappropriated, even if that party is not the victim of the fraud.

Further, federal copyright law has not been extended to cover proprietary ownership of one's voice, since the sound of a voice is not "fixed" in a tangible medium, as the statute requires. That means that, without amending the Copyright Act, it would be difficult to use existing copyright law to protect an individual's voice against cloning.

Nevertheless, lawmakers are beginning to focus on the risks associated with AI. For example, on July 6, the U.S. Senate Committee on Banking, Housing and Urban Affairs, led by Chair Sherrod Brown, D-Ohio, sent a letter to the Consumer Financial Protection Bureau urging the agency to "take action regarding the governance of artificial intelligence and machine learning in consumer financial products, especially as it relates to protecting consumers from fraud and scams." The senators also expressed concern that not only consumers but also financial institutions using voice authentication services may be vulnerable to breaches.

Further, the Senate Human Rights Subcommittee, chaired by Sen. Jon Ossoff, D-Ga., held a hearing on June 13 to hear from witnesses impacted by AI. During the hearing, Jennifer DeStefano of Scottsdale, Ariz., testified about receiving a phone call that appeared to be from her daughter saying she had been kidnapped, only to discover later that her daughter was safe at home and that she had been listening to an AI-generated "deepfake" of her daughter's voice. "I will never be able to shake that voice and the desperate cry for help from my mind," DeStefano said. "There's no limit to the evil AI can bring. If left uncontrolled and unregulated, it will rewrite the concept of what is real and what is not." While Sen. Ossoff responded that "this conduct should be criminal and severely punished," the perpetrator was not prosecuted in this case: because DeStefano discovered the scam before any ransom was paid to the fake kidnappers, no crime had been committed.

Warning Shots From the FTC

The Federal Trade Commission (FTC) issued a warning on March 20, titled "Scammers use AI to enhance their family emergency schemes," revealing that scammers are now cloning people's voices to make phone scams sound more convincing. Although no voice cloning enforcement actions have been announced, the FTC's prohibition on deceptive or unfair conduct can apply to a party that makes, sells or uses a tool that is effectively designed to deceive – even if that is not its intended or sole purpose.

The FTC has previously sued businesses that sold harmful technologies without taking reasonable measures to prevent consumer injury. Therefore, while AI service providers may not be directly responsible for the content created using their technology, they could potentially face legal consequences if they fail to implement safeguards or knowingly facilitate harmful activities. Individuals may report instances of such fraud to the FTC.

What's Next?

While lawmakers and federal agencies have shown recent interest in addressing voice cloning, a push for greater protection is needed to meet the emerging challenges it presents. This could involve more specific statutory provisions on voice cloning, expanding copyright applicability and instituting protections against false light in jurisdictions that do not currently recognize the cause of action.

Originally published by IP Watchdog, August 9, 2023.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.