Thanks to Marianna Riedo for collaborating on this article

In its recent ruling in the case of Thaler v. Perlmutter,1 the United States District Court for the District of Columbia affirmed that artwork generated solely by artificial intelligence (AI) does not qualify for protection under the Copyright Act.

The court dismissed the plaintiff's argument that copyright law's adaptability to emerging technologies encompasses AI authorship. Judge Beryl A. Howell reasoned that although copyright law is designed to evolve with the changing times and embrace novel forms of expression, the essential requirement of human authorship—particularly human creativity—remains an unchanged and fundamental aspect of copyrightability.

This ruling is fully aligned with the long-established position of the U.S. Copyright Office ("USCO") that a copyrightable work must involve human authorship. It is, however, a first-of-its-kind federal court decision, setting a precedent for those who wish to establish ownership and copyright protection for content generated by AI.

BACKGROUND

The case revolves around a copyright application filed by inventor Stephen Thaler in 2019 for a visual artwork titled "A Recent Entrance to Paradise." Thaler allegedly created the artwork simply by asking his generator (called the "Creativity Machine") to produce an image. He then applied for copyright registration listing the computer system as the artwork's creator, arguing that the copyright should be issued and then transferred to him as the machine's owner.

Thaler did not claim to have played any role in the creation of the image beyond owning the Creativity Machine, and he sought to register the computer-generated work as a work-for-hire.

The USCO denied his application on the ground that the work "lacked the human authorship necessary to support a copyright claim." In a request for reconsideration, Thaler challenged the human authorship requirement as "unconstitutional and unsupported by either statute or case law."

Upon reconsideration, the USCO confirmed its previous decision that the Copyright Act provides protection only to human authors and categorized AI-generated work as similar to other works created by non-human authors that were denied registration in the past. The USCO also dismissed Thaler's argument regarding work-for-hire, explaining that although this doctrine permits a human author to transfer ownership of a copyright to a corporate, non-human employer through a contract, it does not imply that the employer itself created the copyrighted work. Furthermore, the USCO clarified that AI lacks the legal status to enter into such contracts or to be classified as an employee. The court agreed, stating that "human creativity is the sine qua non at the core of copyrightability, even as that human creativity is channeled through new tools or into new media."

In short, by relying on the "plain text of the Copyright Act" and ruling that, because the work lacked human authorship, no copyright ever existed in the first place, the court confined itself to the narrower question of whether any work is eligible for copyright "[i]n the absence of... human involvement in the creation of the work." It abstained from delving into other challenging questions, such as how much human involvement is necessary to allow copyright registration of non-human creations.

THE CRUX OF THE MATTER: LEGAL SUBJECTIVITY OF AI

As mentioned above, this ruling comes as no surprise within the context of the ongoing conflict between AI creations and traditional copyright law. The dilemma of whether copyright may be granted to "non-human" creations has been discussed since well before the widespread deployment of generative AI (one pertinent example is the famous "Naruto selfie" decision of 2018, concerning a photograph physically taken by a macaque using a camera owned by photographer David Slater, who had purposely left the camera with the animal).

The crux of the debate remains essentially whether a non-human entity—be it a macaque or an AI system—can be recognized as an autonomous legal subject and therefore a potential holder of rights (and liabilities).

Aside from the sensationalistic case of the robot Sophia, which was granted Saudi Arabian citizenship in 2017, the countries that have so far grappled with this technology are virtually unanimous, and their answer is a definite no.

Even the European Union, which is pioneering an innovative legal framework for liability for damages caused by AI through the proposed AI Liability Directive and the revised Product Liability Directive, seems determined to treat AI as a mere product whose use is attributable to its developer and/or deployer. The current proposals even envisage a form of "enhanced" liability for high-risk AI systems, under which a causal link may be presumed between the damage caused and the conduct of a developer or deployer who failed to fulfill the duty of care.

In short, there is no recognition of legal subjectivity for AI on the horizon, yet such recognition would be essential for the results of an AI system's work to be credited to it.

How this position will affect the evolution of the creative industry, however, remains an open question, one that will be left to lively debate and careful scrutiny among industry experts in the coming months and years.

Footnote

1. Case No. 1:22-cv-01564 (D.D.C. 2022), available at https://ecf.dcd.uscourts.gov/cgi-bin/show_public_doc?2022cv1564-24.
