Artificial intelligence (AI), meaning intelligence exhibited by machines, is experiencing rapid growth, as evidenced by last year's record levels of investment in the industry. AI has already taken centre stage this year by dominating the Consumer Electronics Show in Las Vegas, where it featured in everything from driverless cars to electronic toothbrushes. The excitement surrounding the increasing ubiquity of autonomous machines may, however, have overshadowed some of the challenges associated with the technology. Recent reports, including one from the U.S. government, outline how artificial intelligence will disrupt labour markets and cause deep structural changes in the economy.1 From a legal and regulatory perspective, a number of issues arise, including the attribution of liability, the ownership of algorithm-generated works and agreement on a working definition of AI.2

The weight and complexity of the considerations above vary across industries, but the prevailing theme in all sectors is likely to be that AI systems will continue to reach and exceed human performance in more and more areas. This is certainly the case in the entertainment sector. AI deployed in smart home speakers and interfaces, such as Amazon Echo, Google Home and DingDong, is already transforming the home-entertainment experience by helping users choose songs and content. As AI technology gets smarter, it will further revolutionise content-recommendation processes, with personalised playlists and algorithmic discovery reshaping consumption patterns. By selecting your next item of content, AI software will effectively dictate which creators go from 100 to 100,000 consumers and which never get their claim to fame. This in turn may shape content owners' revenues and their respective market shares.
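By way of illustration only, the core of such a recommendation step can be reduced to scoring each track against a listener's taste profile and surfacing the closest matches. The sketch below is a deliberately simplified, hypothetical example in Python: the track names, feature vectors and similarity measure are invented for the purpose and do not reflect how any of the services named above actually work.

```python
# Illustrative only: a toy content-recommendation scorer. The feature vectors,
# track names and weighting are hypothetical, not drawn from any real service.
import numpy as np

def recommend(user_vector, catalogue, top_n=3):
    """Rank catalogue items by cosine similarity to a listener's taste vector."""
    scores = {}
    user = np.asarray(user_vector, dtype=float)
    for title, features in catalogue.items():
        track = np.asarray(features, dtype=float)
        scores[title] = float(user @ track / (np.linalg.norm(user) * np.linalg.norm(track)))
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical per-track features (e.g. tempo, acousticness, valence), scaled 0-1.
catalogue = {
    "Track A": [0.9, 0.1, 0.7],
    "Track B": [0.2, 0.8, 0.4],
    "Track C": [0.8, 0.2, 0.9],
}
print(recommend([0.85, 0.15, 0.8], catalogue, top_n=2))
```

Even in this toy form, the example shows why it is the software, rather than the listener, that effectively decides which creators are surfaced next.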

Currently, machine-learning-based technology is being used to drive fan engagement via music-focused chatbots. It assists in discovery by enhancing streaming services' cataloguing capabilities, helps to generate background tracks for presentations and supports talent-spotting by trawling vast amounts of data to identify up-and-coming online artists. The music industry is also slowly embracing algorithm-generated compositions, a case in point being Sony's Flow Machines project, which has successfully created two entire pop songs. Sony's first fully AI-written pop album is expected to be released later this year.
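Purely as an illustration of the underlying idea, algorithmic composition in its most basic form involves learning patterns from existing material and then sampling new sequences from those patterns. The hypothetical sketch below uses a first-order Markov chain over an invented note sequence; it bears no relation to the techniques actually used by Flow Machines or any other commercial system.

```python
# Illustrative only: a first-order Markov chain 'composer'. The training
# sequence is invented and the method is far simpler than real systems;
# it merely shows the learn-transitions-then-sample idea.
import random
from collections import defaultdict

def train(notes):
    """Record which note tends to follow which in the training sequence."""
    transitions = defaultdict(list)
    for current, following in zip(notes, notes[1:]):
        transitions[current].append(following)
    return transitions

def compose(transitions, start, length=8, seed=0):
    """Sample a new melody by repeatedly choosing a plausible next note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:
            break
        melody.append(rng.choice(options))
    return melody

training = ["C", "E", "G", "E", "C", "F", "A", "F", "C", "E", "G", "C"]
print(compose(train(training), start="C"))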

Further deployment of AI technology will bring the cost of producing background music for online videos, games, apps and public spaces close to nil. There is a concern that the adoption of cost-effective AI composers will displace human input. This concern, however, overlooks the fact that music is almost as much about personality and backstory as it is about the end product. Nonetheless, the technology will prove invaluable as it aids creative processes and enables human composers to experiment with new ideas more quickly. Our view is that, in the foreseeable future, AI will be used as a subordinate tool rather than constitute an 'autonomous author' of works. Human input is still required to set the parameters, prompt the style or polish the machine's final product. However, as the technology advances, it is conceivable that computers, once switched on, could go on creating their 'own' content indefinitely.

As a transformative technology, AI is challenging a number of legal assumptions. We have seen this in the context of autonomous transportation, where authorities have been forced to start working on safety standards and conditions for the operation of driverless vehicles on public roads. As AI technology matures and conquers the musical mainstream, the assumptions underpinning current copyright regimes will come under strain. In the UK, under the Copyright, Designs and Patents Act 1988, works may be computer-generated. In those situations, the author is taken to be the "person by whom the arrangements necessary for the creation of the work are undertaken" (section 9(3)). There is consensus that, for computer-generated works, it is the programmer who devises the rules and logic used to create such works who owns the copyright. That said, cases considering this subject matter are scarce and outdated.3 This proposition also has the potential to obscure the nature of such works and the process by which they are created. Arguably, a fully autonomous artificially intelligent device will create works without human input and could, in theory, be the true 'author' of creative works as a matter of fact.

The U.S. Copyright Act of 1976, in its current form, contains no provision which would attribute the ownership of a computer-generated work to the programmer who wrote the code that created the work. Section 201 of the Act simply states that copyright vests in the author or authors of the work. At the EU level, a recent EU-funded study on emerging robotic technologies concluded only that further research is needed to assess whether the current application of intellectual property rights "sufficiently meets the needs of the robotic industry and society at large".4

Equally interesting is the question of liability, i.e., the extent to which people can be held legally liable for the actions of increasingly autonomous agents. There is a margin of assurance that the technology cannot go too far while a human is wielding the controls. Looking forward, though, a fully autonomous system could be unpredictable. While AI creators could be held strictly liable, they may not necessarily have foresight of the actions that these systems may take.5 Depending on the functionality, the burden of liability may shift to the end-user who deployed the AI in a way that wreaked havoc. One thing is certain: in an age of increasingly powerful AI, the chain of causation may be difficult to follow, since neither the user nor the creator may have complete oversight of a machine's actions. The difficulty is further compounded by the fact that AI systems learn from environmental data in ways that their creators may struggle to understand. For the content industries, this means that the adoption of algorithm-generated content may give rise to a number of copyright infringement cases: AI software, unaware of copyright laws, may reproduce protected works without its creators' knowledge.

Existing legislative landscapes were not designed with AI in mind, and while the law captures AI at a high level, the statutory coverage is not ideal. A dearth of precedent means that any deals involving AI technology will require attentive drafting which considers all stakeholders, including the programmers, the original creators (from whom the AI draws its inspiration), the intended owners and any end-users. From our experience of advising in this area, it is vital to consider how AI-generated work is defined and managed in any governing agreement. Provisions on ownership, licensing, the assignment of rights and warranties of non-infringement of third-party rights should be drafted carefully. As AI's analytical abilities also raise novel privacy and data protection issues, the importance of clauses governing the handling of user data should not be underestimated either.

Footnotes

1. See: Executive Office of the President, "Artificial Intelligence, Automation and the Economy" (December 2016).

2. See: Scherer M., "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies", Harvard Journal of Law and Technology, Volume 29(2) (2016).

3. See: Express Newspapers v. Liverpool Daily Post & Echo [1985] 1 W.L.R. 1089 and Nova Productions Ltd v. Mazooma Games Ltd [2006] EWHC 24.

4. Palmerini et al., "Regulating Emerging Robotic Technologies in Europe", p. 19 (22 September 2014).

5. Asaro, P., "The Liability Problem for Autonomous Artificial Agents", Proceedings of the AAAI Symposium on Ethical and Moral Considerations in Non-Human Agents, Stanford University, Stanford, CA (21-23 March 2016).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.