On December 7, the federal Office of the Privacy Commissioner of Canada (OPC), jointly with all Canadian provincial and territorial privacy regulators, released new guidance entitled "Principles for responsible, trustworthy and privacy-protective generative AI technologies" (Principles). This new guidance interprets existing Canadian privacy legislation and principles in the context of generative AI, and it applies to businesses that develop, provide, and use generative AI systems.

While the Principles do not bind the regulators, its content is likely to influence future regulatory decisions, investigations, and policy statements.

What you need to know

  • While there are some distinct requirements for developers/vendors of generative AI systems compared to organizations that make use of these systems, both groups must generally comply with privacy law principles with respect to data governance, consent, and transparency.
  • The Principles emphasize the protection of vulnerable groups when developing, providing, or using generative AI, particularly ensuring that no discriminatory output is generated.
  • The Principles outline a number of concrete practices that help document generative AI compliance with privacy laws. They focus on preventing inappropriate uses of AI and ensuring that end users are provided with both sufficient information about systems they interact with and mechanisms to enforce their privacy rights.

Summary of principles for the responsible use of generative AI

The Principles are applicable to both public and private organizations, and they relate to the application of both public- and private-sector Canadian privacy laws. The Principles apply many existing requirements of Canadian privacy legislation to the use of generative AI.

Below, we have mapped the Principles to central privacy law requirements (including as reflected in Schedule 1 of PIPEDA) to demonstrate how the Principles will influence the application of privacy laws in novel contexts related to generative AI.

Key privacy law requirements

Corresponding recommended course(s) of action in the Principles

Requirement 1: Accountability

An organization is responsible for personal information under its control and shall designate an individual or individuals who are accountable for the organization's compliance with the following principles.

Accountability

Foundational accountability measures are required:

  • policies, practices and complaint mechanisms;
  • independent auditing for developers and providers to assess the system and mitigate privacy risks; and
  • vulnerability testing for AI training data that includes personal information.

A more operationally onerous requirement for developers and providers: ensure that generative AI outputs are "traceable and explainable," meaning that organizations or individuals using the system should know how it works and should be able to access a rationale for how it arrived at a particular output.

Requirement 2: Identifying Purposes

The purposes for which personal information is collected shall be identified by the organization at or before the time the information is collected.

Appropriate Purposes

Rather than identifying purposes, the Principles aim to ensure that the collection, use or disclosure of personal information associated with a generative AI system is for purposes that are appropriate to the circumstances. Organizations should:

  • monitor for potential inappropriate uses or biased outcomes;
  • take steps to mitigate risks when any such uses are identified; and
  • establish technical measures to prevent inappropriate uses.

Importantly, anticipated "no-go zones" include (but aren't limited to):

  • creating content for malicious purposes, including to generate intimate images of someone without their consent;
  • using chatbots to deliberately manipulate individuals into divulging personal information;
  • profiling that could lead to unfair, unethical, or discriminatory treatment;
  • generating and publishing defamatory material about an individual; and
  • any other collection/use/disclosure of personal information that could cause significant harm or threaten fundamental rights and freedoms.

Requirement 3: Consent

The knowledge and consent of the individual are required for the collection, use, or disclosure of personal information, except where inappropriate.

Legal Authority and Consent

Outputs about an individual from a generative AI system are still considered personal information (they are considered inferences about identifiable individuals). This means generating an output will be considered a collection of personal information for which consent or another legal authority would be required.

There can be generative AI contexts where information is so sensitive that consent, even if provided, is not adequate: in these contexts (e.g., healthcare), a separate review process with independent oversight that takes into account ethical and privacy considerations of using the information for generative AI systems should be implemented.

Requirements 4 and 5: Limiting Collection, Use, Disclosure and Retention

The collection of personal information shall be limited to that which is necessary for the purposes identified by the organization. Information shall be collected by fair and lawful means.

Personal information shall not be used or disclosed for purposes other than those for which it was collected, except with the consent of the individual or as required by law. Personal information shall be retained only as long as necessary for the fulfilment of those purposes.

Limiting Collection, Use and Disclosure

Public accessibility of data does not necessarily mean that it is "publicly available" for privacy law purposes or that it can be indiscriminately used.

Developers and providers should filter out personal information from data sets where possible and should ensure that AI outputs do not disclose unnecessary personal information.

User organizations should limit use of personal information in prompts and should not enter prompts with sensitive personal information without authorization.

Necessity and Proportionality

Principles additionally emphasize the importance of using generative AI only where necessary and proportionate to the needs of the organization.

Organizations should use anonymized data, de-identified data, or non-personal data rather than personal information to achieve their intended purposes with generative AI wherever possible.

Generally, organizations should be using the most privacy-protective technologies possible to achieve their stated purposes.

Requirement 6: Accuracy

Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.

Accuracy

User organizations have the responsibility to ensure that generative AI outputs are as accurate as necessary for the purpose for which they are being used, especially if they:

  • are used for decision-making about an individual;
  • are used in high-risk contexts (see "Vulnerable groups" below); or
  • will be released publicly.

Requirement 7: Safeguards

Personal information shall be protected by security safeguards appropriate to the sensitivity of the information.

Safeguards

All organizations should maintain safeguards to protect personal information throughout the lifecycle of a generative AI system.

The Principles identify the following as key data security threats for generative AI that should be protected against:

  • prompt injection attacks, in which specifically worded prompts allow users to bypass filters or use the system in unintended ways;
  • model inversion attacks, in which users can expose and therefore improperly access personal information contained in the system's training data; and
  • jailbreaking, in which privacy or security controls in the system are overridden by the user.

Requirement 8: Openness

An organization shall make readily available to individuals specific information about its policies and practices relating to the management of personal information.

Openness

Organizations are required to ensure that generative AI outputs that could have a significant impact on a person or group are "meaningfully identified" as being created by generative AI.

Organizations should inform individuals what, how, when and why personal information is handled at every stage of the generative AI's lifecycle.

User organizations should also clearly communicate when generative AI is used as part of a decision-making process, consistent with proposed federal law (Bill C-27) and existing Québec law (Law 25) on general automated decision-making.

Requirement 9: Individual Access

Upon request, an individual shall be informed of the existence, use, and disclosure of their personal information and shall be given access to that information. An individual shall be able to challenge the accuracy and completeness of the information and have it amended as appropriate.

Individual Access

Organizations that develop and provide AI systems must ensure that individuals can access or correct personal information contained within an AI model. Developers should consider whether they can operationalize this requirement, given the practical difficulties of altering or removing specific data from an AI model, or whether this will effectively prohibit the inclusion of personal information in training sets.

Also similar to proposed federal law (Bill C-27) and existing Québec law (Law 25) about automated decision-making, organizations that use generative AI for decision-making should maintain records to be able to fulfill related requests for access to information.

Requirement 10: Challenging Compliance

An individual shall be able to address a challenge concerning compliance with the above principles to the designated individual or individuals accountable for the organization's compliance.

This requirement, represented in PIPEDA as one of 10 key privacy principles, is not specifically addressed in the Principles, though the applications of the openness and access principles as outlined above require that individuals be given mechanisms to gain more information about decisions made about them using generative AI systems.

Vulnerable groups remain a special consideration

Guidance and best practices regarding the use of AI have consistently emphasized the importance of human rights and non-discrimination considerations in the development and deployment of AI, as we discussed in detail in our Guide to AI regulation in Canada. In this guidance, the regulators have made it clear that organizations have a responsibility to identify and prevent risks to vulnerable groups by ensuring the fairness of generative AI systems, especially for "highly impactful contexts" such as health care, employment, education, policing, immigration, criminal justice, housing or access to finance. Children and young people are identified to be at particularly high risk of significant negative impacts by generative AI.

Practical considerations for businesses that develop, provide, or use generative AI

Additional practical steps recommended by the Principles suggest that organizations should take the following steps.

  • Use adversarial or red team testing to identify potential inappropriate or "no-go zone" uses of generative AI systems
  • Implement appropriate use policies to which individuals or organizations using the generative AI system must agree in advance
  • Publish documentation about the datasets used to develop or train the generative AI system, including sources and the legal authority for its collection and use (for developers and providers)
  • Meaningfully identify generative AI outputs that could have a significant impact on a person or group as being created by generative AI
  • Conduct privacy impact assessments (and/or algorithmic impact assessments for government entities) to mitigate against potential or known privacy impacts
  • Disclose accuracy issues and limitations to users (for developers and providers); evaluate the impacts of accuracy issues or other limitations disclosed by the provider or developer of a generative AI system on whether the system should be used (for user organizations)
  • Allow individuals to access or correct personal information contained within an AI model
  • Ensure that a group is adequately and accurately represented in the system's training data if the system is going to be used in relation to that specific group
  • Implement safeguards that protect against novel data security threats for generative AI
  • Evaluate the data used to develop and train generative AI systems to ensure that the systems do not replicate or amplify "historical or present" biases in the data, or introduce new biases, to reduce the risk of discriminatory outcomes for marginalized groups based on race, gender, or other characteristics
  • Establish oversight and review of the outputs of the AI systems, or enhanced monitoring for potential discriminatory or other adverse effects

While these recommendations are not strict legal requirements themselves, they do align with AI best practices. Incorporating these practices as appropriate can help reduce risk associated with existing legal requirements and can facilitate future compliance in the dynamic AI regulatory environment. While dynamic in nature, the AI regulatory environment is starting to coalesce around certain core tenets, as evidenced by Canada's Bill C-27 (including recently proposed amendments) and the content of the recent agreement to the AI Act in the EU in December 2023. This coalescence makes the early adoption of best practices a more attractive option for many organizations despite uncertainty the specifics.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.