Artificial intelligence (AI) is evolving at a phenomenal speed. Early adopters across industries, whether in retail, healthcare or financial services, will have the first opportunities to harness its potential benefits, whilst businesses that do not embrace this pioneering technology risk being left behind. Despite the magnitude of the benefits that may be realised through the adoption of generative AI, the implications of incorporating generative AI tools into your business must be carefully considered.

This article highlights high-level issues you ought to consider when planning your generative AI strategy. In particular, we have reviewed the terms and conditions published by selected generative AI platforms and tools and summarised the contractual considerations that should be taken into account when engaging with a generative AI platform. The terms referred to in this article reflect the versions we reviewed and may have been updated since. Always seek legal advice with respect to the specific terms into which you propose to enter.

What is generative AI and how can businesses use it?

Broadly speaking, generative AI is a form of machine learning that uses algorithms trained on large data sets to generate content from prompts written in natural language. Businesses can use generative AI to enhance creativity and innovation across multiple functions: assisting content creators such as writers, graphic designers and musicians with their creative endeavours; boosting operational efficiency in repetitive tasks by synthesising and analysing large data sets to create automated reports and documents; or, in the form of virtual assistants or chatbots, helping businesses communicate and engage with customers by generating personalised responses and recommendations based on their needs.
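
By way of illustration only, the following is a minimal sketch of how a business application might send a natural-language prompt to a generative AI platform. It assumes the OpenAI Python SDK (v1.x) and uses an illustrative model name; other platforms expose broadly similar interfaces:

```python
# Illustrative sketch only: assumes the OpenAI Python SDK (v1.x) is installed
# and an API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A natural-language prompt, as a customer-facing assistant might send one.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; check the provider's current offerings
    messages=[
        {"role": "system", "content": "You are a helpful customer-service assistant."},
        {"role": "user", "content": "Draft a short reply to a customer asking about our returns policy."},
    ],
)

print(response.choices[0].message.content)
```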

So, what are the issues and risks that businesses should consider when embracing a generative AI strategy? This article provides a high-level checklist of issues to consider, although the list is by no means exhaustive, as the technology continues to advance at a rapid pace.

1. IP ownership

The Terms of Use published by OpenAI, the company behind ChatGPT, provide that any "outputs" created by its services based on "inputs" provided by a user will be assigned to that user, provided that the user complies with OpenAI's Terms of Use. The user may therefore use such outputs for any purpose, including commercial purposes such as sale or publication. OpenAI may use the outputs as required to provide and maintain its services to the user, comply with applicable law, and enforce its policies.

Similarly, another AI platform provider, Grammarly, which offers a digital writing assistance tool based on AI and natural language processing, does not claim ownership of any output created when a user uses any of its generative AI features.

Under the Microsoft Terms of Use relating to Microsoft Azure, Microsoft likewise does not claim ownership of the materials provided or inputted to Microsoft. However, by inputting or providing material to Microsoft, the user grants Microsoft permission and licence rights to use the material, including by copying, distributing and reproducing it.

Even though the AI platforms do not claim ownership of the outputs, it does not follow that the user will automatically own the intellectual property in outputs created by the AI platform in response to the user's prompts. As elucidated in our earlier article about IP trends in 2024, it is uncertain whether such outputs will have the necessary qualities to be afforded intellectual property protection. The current state of play in Australia is that an AI machine may not be considered an "inventor" within the meaning of that term under the Patents Act 1990 (Cth) (Thaler), and the Copyright Act 1968 (Cth) provides that a "qualified person" capable of holding copyright must be "an Australian citizen or a person ... resident in Australia or a body corporate incorporated under a law of the Commonwealth or of a State". Whether an AI machine can be treated as an "author" under our copyright legislation will inevitably become an issue in Australia. Whilst this position may not confer a strong IP ownership position, at a minimum the user may be confident that exploitation of the outputs will not infringe the IP rights of the AI platform provider. However, to the extent that the AI platform generates content based on third party data, text, images or code, the question of whether exploitation of that output will infringe the IP of others will need to be explored.

2. IP infringement

AI systems trained on data, text, images or code protected by third party IP rights may infringe those rights, even if the materials are publicly available, and both AI platform developers and users may be held liable for any IP infringement committed by the system. Multiple proceedings have been brought in the US by original content creators, including writers and artists, against AI developers for copyright infringement, alleging, amongst other things, that copyright is infringed when developers use the works to train AI models without licence or permission, create derivative works from them, and then publish and distribute those works. Whilst the legal system is being asked to consider the bounds of derivative works, and AI developers may be taking steps to respect third party IP rights when using content to train their AI tools, it remains for the user to consider its own risks when deploying AI systems to create content.

The business terms of OpenAI provide an indemnity to users for "any damages finally awarded by a court of competent jurisdiction and any settlement amounts payable to a third party arising out of a third party claim alleging that the Services (including training data ...[used] to train a model that powers the Services) infringe any third party intellectual property right", subject to some exceptions. Importantly, the liability cap does not apply to this indemnity. No such indemnity is provided for individual users, and one of the exceptions is that the indemnity does not apply to any input or training data provided to OpenAI by the user. In this regard, the user is responsible for its input, including ensuring that it does not violate any applicable law or the terms of use. Under the Terms of Use, the user represents and warrants that they have all rights, licences and permissions needed to provide input to OpenAI's services.

To the extent a user uses AI services provided by Microsoft, the Microsoft Terms of Use provide that the user is solely responsible for any third-party claims regarding their use of the AI services in compliance with applicable laws, including copyright infringement and other claims relating to content output. Microsoft may have different arrangements with business customers; however, the terms of those customer agreements are not publicly available.

3. Inaccuracies, misrepresentations and AI hallucinations

There are inherent risks associated with the use of generative AI to produce output. It is well documented that AI models may generate false, inaccurate or misleading information, a phenomenon known as AI hallucinations. For example, OpenAI's Terms of Use include a disclaimer regarding the accuracy of its AI-generated outputs, as follows:

"Output may not always be accurate. You should not rely on Output from our Services as a sole source of truth or factual information, or as a substitute for professional advice. You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing Output from the Services."

In addition, users of OpenAI platforms will need to "accept and agree that any use of outputs from [the] service is at [the user's] sole risk and [the user] will not rely on output as a sole source of truth or factual information, or as a substitute for professional advice". OpenAI also prohibits the user from representing that the output was human-generated when it was not.

The bottom line is that users of generative AI cannot rely on AI platforms to generate accurate responses all the time. The consequences of reckless use of AI-created content may extend beyond the spread of misinformation, and AI tools should never be used as a replacement for professional judgement and advice.

Where a service provider uses AI tools in the course of providing services, we are aware that some corporate customers impose on the service provider an express contractual duty to verify the accuracy and reliability of any content generated by AI models, including a "thorough review of the generated content and cross-checking against reliable sources". In addition, any service provider using AI solutions will need to disclose that use when providing work products.
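
As a purely illustrative sketch, one way a service provider might operationalise such a verification duty is to gate AI-generated content behind an explicit human sign-off before release. The record structure and workflow below are hypothetical, not drawn from any provider's terms:

```python
# Illustrative sketch only (Python 3.10+): a simple human-review gate for
# AI-generated content. The fields and workflow are hypothetical examples.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDraft:
    content: str
    sources_checked: list[str] = field(default_factory=list)  # reliable sources cross-checked
    reviewer: str | None = None
    approved_at: datetime | None = None

    def approve(self, reviewer: str, sources_checked: list[str]) -> None:
        """Record the human reviewer and the sources used to verify the content."""
        if not sources_checked:
            raise ValueError("Cross-check against at least one reliable source before approval.")
        self.reviewer = reviewer
        self.sources_checked = sources_checked
        self.approved_at = datetime.now(timezone.utc)

    def publish(self) -> str:
        """Only release content that has passed human review."""
        if self.approved_at is None:
            raise RuntimeError("AI-generated content must be human-reviewed before use.")
        return self.content

draft = AIDraft(content="<AI-generated summary>")
draft.approve(reviewer="j.smith", sources_checked=["internal knowledge base"])
print(draft.publish())
```

A record of who reviewed the content, when, and against which sources also provides evidence of compliance if the verification duty is ever tested.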

As such, liability for claims arising from the consequences of AI hallucinations and other inaccuracies will likely remain with the organisation that elects to use the AI platform.

4. Privacy

In this digital age, privacy law has taken on a new dimension: it is no longer concerned only with keeping personal information confidential, but governs how personal information is collected, used, controlled and shared. If your business is deploying an AI tool, it is important that both the AI platform provider and your business comply with privacy regulations.

Generally, the privacy policies of the AI platform providers we surveyed contain similar terms and conditions, including as to how they collect, store and share personal information. However, these policies generally apply only to personal information that your business discloses to the AI platform itself, such as when you create an account.

The more significant issue is this: when your business deploys an AI solution, how do you ensure that the solution complies with the relevant privacy legislative regimes? Naturally, data collection by the AI tool is a primary concern, as AI platforms depend on substantially large data sets for training. How does your organisation determine an acceptable use policy for your AI tool that complies with privacy legislation and respects the privacy of your customers? In addition, collected personal information may be used in ways that extend beyond the purpose for which it was originally, and knowingly, disclosed by an individual. For example, staff in Perth's South Metropolitan Health Service (SMHS) were found to be using AI platforms such as ChatGPT to write medical notes for patient records, and the SMHS had to direct its staff not to use AI bot technology due to concerns about the disclosure of sensitive health service information. This highlights the importance of every organisation providing guidelines to its personnel on AI usage.

None of the terms of use we surveyed addressed this issue. If your business intends to deploy an AI tool that will collect personal information, it is essential that you work with your AI developer to agree on a robust plan for acceptable collection and use of personal information, in compliance with relevant privacy law, and enter into an appropriate agreement reflecting that position.
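
By way of illustration only, the following is a crude sketch of the kind of screening an acceptable use policy might require before prompts leave the organisation. The patterns are simplistic placeholders; production systems would use a purpose-built PII-detection service:

```python
# Illustrative sketch only: a naive pre-submission filter that redacts
# obvious personal information from prompts before they are sent to an
# external AI platform. The regexes are simplistic placeholders.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),  # rough Australian format
    "MEDICARE": re.compile(r"\b\d{4}[ -]?\d{5}[ -]?\d\b"),          # rough Medicare-number shape
}

def redact_pii(prompt: str) -> str:
    """Replace likely personal identifiers with labelled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Patient follow-up for jane.doe@example.com, contact 0412 345 678."
print(redact_pii(raw))
# -> "Patient follow-up for [EMAIL REDACTED], contact [PHONE REDACTED]."
```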

5. Confidentiality and Data Protection

Your business may wish to engage an AI developer to develop an AI model trained on proprietary information or data sets owned by your business. For example, a research organisation may provide archives of laboratory books to train a chatbot that acts as an interactive interface for its personnel. In these circumstances, it is important that all such information be kept strictly confidential.

In such a scenario, your organisation will likely enter into a specific contractual arrangement with the AI developer, and it is paramount that the contract contain stringent confidentiality provisions. If your organisation permits personnel to use publicly available AI tools, it is important to require that their prompts do not include confidential information.

Another important consideration is how the AI developer secures its data. The AI providers surveyed generally have security measures that include encrypting data at rest and in transit. Microsoft Azure utilises data segregation and, in the case of a cyberattack or physical damage to a datacentre, offers customers the option to replicate data within a selected geographic area for redundancy. Grammarly secures its infrastructure through a cloud platform, hosting data with Amazon Web Services in US-based data centres; similar to Microsoft Azure, Grammarly's cloud platform includes isolation of each user's data and the protection of a web application firewall. In the event of a security incident or data breach involving customer data, the AI providers will notify the customer, and OpenAI also maintains a year-round on-call rotation for potential security incidents.

Despite these measures, we note previous security incidents such as the ChatGPT data breach in March 2023, the Microsoft security breach in July 2023, and research from Salt Security, an API security company, highlighting sign-in vulnerabilities in Grammarly. It is essential for your business to understand the security risks of providing data to AI providers, and to have its own security measures in place.
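
As an illustrative sketch of one complementary measure an organisation can take on its own side, data can be encrypted at rest before it is handed to any external provider. This example assumes the widely used third-party Python "cryptography" package; key management is deliberately out of scope:

```python
# Illustrative sketch only: client-side symmetric encryption of data at rest
# before it is stored with, or transferred to, an external provider.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, store this in a managed key vault or HSM
cipher = Fernet(key)

sensitive = b"Proprietary lab-book data used to train an internal chatbot"
token = cipher.encrypt(sensitive)   # ciphertext is safe to store or transmit
restored = cipher.decrypt(token)    # only possible with the key your business holds

assert restored == sensitive
```

The design point is that the provider only ever handles ciphertext, so your exposure in a provider-side breach is reduced, provided the keys remain under your control.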

***

This article highlights some of the issues that your business should consider before any proposed generative AI use. It is important to assess the risks associated with that use and ensure that your agreement with your AI developer addresses them. It is also recommended that your business develop an acceptable use policy for generative AI and a plan to improve data security.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.