Introduction

The ubiquity of artificial intelligence (AI) was once a phenomenon found only in science fiction novels, but the rapid evolution of AI-based technologies in recent years and the enthusiasm generated by the impressive capabilities of ChatGPT suggest that this fiction may very soon become reality. AI is an exciting technology that is revolutionizing many industries, but it entails unique risks and challenges that governments and lawyers must adapt to and address quickly.

Although AI is already embedded in many products and services used by consumers and businesses (e.g., industrial robots, virtual customer service assistants, social media algorithms and self-driving vehicles), there are still no specific civil liability rules addressing AI systems in Québec. This said, the federal government has taken the lead in tabling legislation to regulate AI development and marketing activities in Canada, namely Bill C-27, the Artificial Intelligence and Data Act (AIDA), which would regulate private sector design, development and use of AI systems in interprovincial and international trade, primarily in order to mitigate the risks of harm associated with "high impact" systems.1

While the AIDA would impose penal sanctions on persons acting recklessly in the development, marketing or operation of AI systems, and the violation of an AIDA standard could possibly constitute a civil fault, courts dealing with AI-related civil litigation in Québec will have to rely on the general principles of civil liability arising from provincial legislation and case law when adjudicating these disputes. This flows from the fact that section 92(13) of the Constitution Act, 1867 gives the provincial legislature exclusive authority to legislate on matters relating to property and civil rights in the province, even though the federal government may enact laws affecting commerce in the private domain under its own heads of jurisdiction, including trade and commerce, national defence, copyright and criminal law.

To date, the only regulation adopted by the Québec legislature concerning AI that contains any rules regarding civil liability is the Autonomous Bus and Minibus Pilot Project,2 which sets out certain obligations for manufacturers, distributors or operators of autonomous buses and minibuses participating in the pilot project set up by the Government of Québec. Evidently, its application is extremely specific and limited.

As such, and until laws and regulations are enacted to specifically regulate the development and marketing of AI systems, courts hearing civil liability cases involving AI systems will continue to rely on the existing legal framework. In Québec, such actions could be based on the product liability rules provided by the Civil Code of Québec (the "CCQ") and the Consumer Protection Act (the "CPA"),3 even though AI raises certain issues that do not mesh easily with existing principles, which were for the most part established even before the advent of the Internet.

While it is to be expected that companies involved in marketing AI systems will include limitation of liability clauses, warranty exclusions and warnings about the risks associated with using their systems, it is particularly important to also understand the unique product liability principles that may apply, many of which are provisions of public order that cannot be waived or excluded by contractual clauses.

Without claiming to have a crystal ball allowing us to predict the future, we will discuss what we see as some of the specific AI issues as they relate to Québec product liability laws with a view to anticipating and hopefully minimizing the risk of litigation.

Can the commercialization of an AI system be considered the sale of a "good" (property) or a "product"?

Since the possible uses of AI are countless, the answer to the question of whether an AI system can be considered a "good", "property" or a "product" within the meaning of Québec civil law will necessarily depend on the facts of each case. In most cases, it will be important to distinguish between AI systems embedded in physical goods (e.g., a self-driving car) and AI systems provided as software.

If we take the example of a self-driving car, the autonomous driving system, as impressive and sophisticated as it may be, will likely be considered just one more feature in a vehicle that contains a number of other automated systems (emissions management systems, antilock braking systems, etc.). Thus, the sale of a physical good with a built-in AI system will be subject to the rules set out in the CCQ and the CPA, just as the sale of any other physical good would be.

On the other hand, where the AI system is used in software provided to a customer who is free to use it under certain terms and conditions in exchange for periodic payments, it will normally be provided under a licence agreement.4 This type of agreement differs from a sales contract in that the party granting the licence does not typically transfer its intellectual property rights in the software to the user, but rather grants the user the right to use it subject to certain conditions. In this sense, the licence agreement is more like a rental agreement than a sales agreement. Since this type of agreement is considered an innominate contract under Québec civil law, it is not subject to the rules specific to contracts of sale, including the all-important legal warranties of quality provided for in the CCQ and the CPA.

In those cases where the AI system is only used as a tool to facilitate the performance of services provided to customers (virtual customer service assistants, social media, etc.), it will generally be provided under a service contract. Similarly, the liability of a developer that provides consulting and integration services for AI-based software custom-developed for its client's business will normally have to be assessed in light of the obligations arising from its service contract and, in cases where the developer reserves the intellectual property rights to the solution, from the licence agreement.

This being said, we note that a recent Superior Court judgment authorizing a class action against the developers of the video game Fortnite (the "Fortnite decision"), which was based in part on the plaintiffs' claim that the defendants had designed an "addictive" game, suggests that the principles of product liability under the CCQ and the CPA could apply to a video game since it would be "property" within the meaning of articles 899 and 907 of the CCQ.5 This is the first time that a Québec court has suggested that software or video game developers may be subject to the same obligations and responsibilities as manufacturers of physical goods with respect to latent defects or safety defects, including the failure to adequately warn about certain dangers associated with their products.6 We note, however, that this judgment is not precedent-setting, since the Court has not yet ruled on the issue at the merits stage.

Surprisingly, the concept of "property" is not defined in the CCQ or the CPA, and the courts have so far had little opportunity to establish the concept's limits in the digital realm. Incorporeal property includes all the personal rights that may form part of a person's patrimony, such as rights of claim and intellectual property rights.7 Property must be "separate" and capable of appropriation.8 It would thus appear seldom possible to hold software developers and sellers liable to their customers from a product liability perspective, particularly where there is no assignment of intellectual property rights in the software and where active support from the developer is required to keep the software running and to perform updates.

Moreover, and even if software were to be considered "property", it is not clear that a software or video game developer can be considered a "manufacturer" within the meaning of the CCQ and the CPA. The CPA defines a manufacturer as "a person in the business of assembling, producing or processing goods".9 This definition has been used in certain decisions involving product liability under the CCQ.10 In particular, the courts take into account the defendant's level of involvement in the product's design in determining whether the defendant can be considered a manufacturer.11

Unsurprisingly, the analysis used to characterize a company that sold or performed work on goods as a manufacturer, distributor, contractor or service provider has generally been developed with physical goods in mind. The application of these concepts to software, video games or other computer systems has not been the subject of any detailed analysis or jurisprudence and thus remains unclear. This said, in Ferme avicole Héva inc. v. Coopérative fédérée de Québec (portion assurée),12 the Court of Appeal held that, for the purposes of the presumption of knowledge of a latent defect, an agricultural producer must be considered a manufacturer even though its activities do not fit the conventional definition of the concept, owing to its level of control over production activities, the impact of its decisions on the quality of the product, and the leading role it plays in the area of food safety. It is conceivable that the same reasoning could be applied to a software or video game developer in order to characterize it as a manufacturer or distributor.

With this said, and despite the many unknowns and the difficult application of general principles to the new and evolving AI systems discussed above, developers, sellers and users of AI systems may nonetheless be liable if they breach obligations arising from a contract,13 fail in their general duty not to cause harm to others,14 or violate fundamental rights under the Québec Charter of Human Rights and Freedoms (Charter).15 Despite all the legal nuances discussed above regarding the characterization of the role played by a developer of an AI system, it is highly likely that individuals who believe themselves to be victims of an AI system's malfunction will attempt to hold its developers liable for damages. In defending against such lawsuits, developers will increase their chances of success if they can prove that they complied with applicable regulatory standards and advised users of the limitations, risks and hazards of using their system.

Liability for safety defects and the obligation to disclose the risks of using the product

As set out in the previous section, manufacturers and all other parties involved in the chain of distribution of goods with a built-in AI system will be bound to make reparation for injury caused to a third person by reason of a safety defect.16 Pursuant to the CCQ,17 a thing has a safety defect where, having regard to all the circumstances, it does not afford the safety which a person is normally entitled to expect, particularly by reason of a defect in design or manufacture, poor preservation or presentation, or the lack of sufficient indications as to the risks and dangers it involves or as to the means to avoid them. Despite all the precautions that may be taken by developers to reduce the risk of accidents, physical goods that use AI will necessarily involve risks that must be properly identified and clearly disclosed to users.

In light of the requirements of the CCQ, the courts will likely take the view that an AI system embedded in a physical product must include certain safety mechanisms to protect users and the public from risks associated with the malfunction or misuse of the system.

Since there are no specific safety standards for AI systems, and since even developers are still thinking about the safety mechanisms that should be incorporated into their systems to prevent physical harm, false information or a violation of fundamental rights, it is very difficult to predict how the courts will address these issues. Because of the novelty and complexity of these systems, it is conceivable that the courts will be reluctant to recognize the existence of a safety defect in an AI system in the absence of expert evidence.

Without binding safety standards, courts and experts may be inclined to refer to standards adopted by governments, self-regulatory organizations or alliances of large AI companies to determine the scope of the general duty of care of developers, sellers and users of AI systems in civil liability claims. For example, courts may draw inspiration from the Montréal Declaration for a Responsible Development of Artificial Intelligence18 (the "Montreal Declaration"), which sets out several ethical principles which, according to its authors, should govern the development and use of AI systems. To date, over 200 organizations and businesses have signed the Montreal Declaration, including several private businesses in the AI industry. Conceivably, where a person has voluntarily adhered to a policy statement such as the Montreal Declaration, its failure to abide by those principles could be taken into account in determining whether its conduct constitutes a fault triggering civil liability for the resulting damages.

While it is plausible that approval processes and binding safety standards will be implemented for AI systems before they are allowed to be marketed in sectors that are already regulated (motor vehicles, health care, etc.), the advent of such standards will not eliminate the uncertainty surrounding the liability of manufacturers or distributors of such goods for safety defects. As reiterated by the Court of Appeal in Imperial Tobacco Canada ltd. v. Conseil québécois sur le tabac et la santé,19 a manufacturer which complies with standards is not thereby relieved of its duty to inform or deemed to have complied with it, nor is it relieved of the liability that it may incur in the event that the information provided, even if it conforms to said standards, does not accurately, comprehensibly and completely reveal the hazard inherent in the product.

Consequently, manufacturers of products with built-in AI systems, like any other manufacturer, will need to take great care in preparing the notices in contractual documents, user manuals and labelling to ensure that the risks associated with the use of their products and the means to guard against them are fully and adequately disclosed.

We note that the AIDA would require developers of AI systems sold to the public to publish on a publicly accessible website a description of the system, which would include the following:

  1. the use made of it;
  2. the content it generates or the predictions, recommendations or decisions it makes;
  3. mitigation measures put in place in accordance with section 8 of the AIDA to identify, assess and mitigate the risks of harm or biased output that could result from the use of the AI system;
  4. any other information prescribed by regulation.

While not an exhaustive guide to the information that should be disclosed to users of AI systems based on the requirements of Québec civil law, these proposed rules may provide a preview of the scope of the duty to inform that manufacturers of goods with built-in AI systems will face.

Footnotes

1. We invite you to read a recent bulletin by our colleagues Christopher Ferguson, Justin P'ng, Heather Whiteside and Kassandra McAdams-Roy that discusses the ins and outs of this bill and possible comparisons with the European Commission's proposed AI legislation: https://www.fasken.com/en/knowledge/2022/10/18-the-regulation-of-artificial-intelligence-in-canada-and-abroad.

2. CQLR c. C-24.2, r. 37.01.

3. For those interested in understanding the basic principles of product liability in Québec, we invite you to consult our bulletin entitled "What Lawyers, Manufacturers and Sellers Need to Know about Product Liability Laws in the Province of Québec": https://www.fasken.com/en/knowledge/2020/09/15-what-you-need-to-know-about-product-liability-laws-quebec.

4. ADP Canada Co. v. 9187-5674 Québec inc., 2011 QCCS 1388 (CanLII), paras. 56-57; Melric ltée v. Pépinière Lanctôt & Frères inc., 2015 QCCS 4205 (CanLII), paras. 40-41.

5. F.N. v. Epic Games Canada, 2022 QCCS 4551 (CanLII).

6. For more on the Fortnite decision, please see our bulletin entitled "Fortnite Addiction Class Action Approved by Quebec Superior Court": https://www.fasken.com/en/knowledge/2023/01/23-fortnite-addiction-class-action-approved-by-quebec-superior-court.

7. Caisse populaire Desjardins de Val-Brillant v. Blouin, 2003 SCC 31 (CanLII), [2003] 1 SCR 666; Groupe Commerce compagnie d'assurances v. H. Prud'homme & Associés inc., 1998 CanLII 19396 (QC CA).

8. Gu v. Chen, 2018 QCCS 4264 (CanLII).

9. CPA, s. 1(g).

10. Horecki v. Beaver Lumber Co., [1991] R.R.A. 234 (C.S.), pp. 41-42; Ferme avicole Héva inc. v. Coopérative fédérée de Québec (portion assurée), 2008 QCCA 1053.

11. Groupe Royal inc. v. Crewcut Investments Inc., 2019 QCCA 1839, paras. 64-74; Royal & Sun Alliance du Canada société d'assurance v. Progaz DMN inc., 2022 QCCS 1006 (CanLII).

12. 2008 QCCA 1053.

13. CCQ, a. 1458.

14. CCQ, a. 1457.

15. CQLR, c. C-12.

16. Article 1468 CCQ.

17. Article 1469 CCQ.

18. https://www.montrealdeclaration-responsibleai.com/.

19. Imperial Tobacco Canada ltd v. Conseil québécois sur le tabac et la santé, 2019 QCCA 358 (CanLII), para. 491.
