The Office of the Privacy Commissioner of Canada ("OPC") has released a Consultation Paper on artificial intelligence ("AI"), stating its view that "responsible innovation involving AI systems must take place in a regulatory environment that respects fundamental rights and creates the conditions for trust in the digital economy to flourish."

The OPC intends to examine AI in the context of its policy analysis on legislative reform, specifically as it relates to the Personal Information Protection and Electronic Documents Act ("PIPEDA"). The OPC is clear that it has concerns about AI, stating:

"In our view, AI presents fundamental challenges to all PIPEDA principles and we have identified several areas where the Act could be enhanced."

Comments are due March 13, 2020, leaving stakeholders very little time to provide input on some very complex issues.

The proposals

The Consultation Paper, called Consultation on the OPC's Proposals for ensuring appropriate regulation of artificial intelligence, contains eleven proposals for consideration and asks a number of specific questions related to each proposal. In sum, the proposals amount to a call by the OPC to treat privacy as a human right, to require that AI systems incorporate certain protections as a result, and to create special rules for AI that go beyond the current privacy framework.

Some of the more business-friendly proposals include establishing alternative grounds for the processing of personal information, and establishing rules that allow for flexibility in using personal information that has been rendered non-identifiable.

Proposal 1: Incorporate a definition of AI within the law that would serve to clarify which legal rules would apply only to it, while other rules would apply to all processing, including AI.

  • Should AI be governed by the same rules as other forms of processing, potentially enhanced as recommended in this paper (which means there would be no need for a definition and the principles of technological neutrality would be preserved) or should certain rules be limited to AI due to its specific risks to privacy and, consequently, to other human rights?
  • If certain rules should apply to AI only, how should AI be defined in the law to help clarify the application of such rules?

Proposal 2: Adopt a rights-based approach in the law, whereby data protection principles are implemented as a means to protect a broader right to privacy—recognized as a fundamental human right and as foundational to the exercise of other human rights

  • What challenges, if any, would be created for organizations if the law were amended to more clearly require that any development of AI systems must first be checked against privacy, human rights and the basic tenets of constitutional democracy?

Proposal 3: Create a right in the law to object to automated decision-making and not to be subject to decisions based solely on automated processing, subject to certain exceptions

  • Should PIPEDA include a right to object as framed in this proposal?
  • If so, what should be the relevant parameters and conditions for its application?

Proposal 4: Provide individuals with a right to explanation and increased transparency when they interact with, or are subject to, automated processing

  • What should the right to an explanation entail?
  • Would enhanced transparency measures significantly improve privacy protection, or would more traditional measures suffice, such as audits and other enforcement actions of regulators?

Proposal 5: Require the application of Privacy by Design and Human Rights by Design in all phases of processing, including data collection

  • Should Privacy by Design be a legal requirement under PIPEDA?
  • Would it be feasible or desirable to create an obligation for manufacturers to test AI products and procedures for privacy and human rights impacts as a precondition of access to the market?

Proposal 6: Make compliance with purpose specification and data minimization principles in the AI context both realistic and effective

  • Can the legal principles of purpose specification and data minimization work in an AI context and be designed for at the outset?
  • If yes, would doing so limit potential societal benefits to be gained from use of AI?
  • If no, what are the alternatives or safeguards to consider?

Proposal 7: Include in the law alternative grounds for processing and solutions to protect privacy when obtaining meaningful consent is not practicable

  • If a new law were to add grounds for processing beyond consent, with privacy protective conditions, should it require organizations to seek to obtain consent in the first place, including through innovative models, before turning to other grounds?
  • Is it fair to consumers to create a system where, through the consent model, they would share the burden of authorizing AI versus one where the law would accept that consent is often not practical and other forms of protection must be found?
  • Requiring consent implies organizations are able to define purposes for which they intend to use data with sufficient precision for the consent to be meaningful. Are the various purposes inherent in AI processing sufficiently knowable so that they can be clearly explained to an individual at the time of collection in order for meaningful consent to be obtained?
  • Should consent be reserved for situations where purposes are clear and directly relevant to a service, leaving certain situations to be governed by other grounds? In your view, what are the situations that should be governed by other grounds?
  • How should any new grounds for processing in PIPEDA be framed: as socially beneficial purposes (where the public interest clearly outweighs privacy incursions) or more broadly, such as the GDPR's legitimate interests (which includes legitimate commercial interests)?
  • What are your views on adopting incentives that would encourage meaningful consent models for use of personal information for business innovation?

Proposal 8: Establish rules that allow for flexibility in using information that has been rendered non-identifiable, while ensuring there are enhanced measures to protect against re-identification

  • What could be the role of de-identification or other comparable state-of-the-art techniques (synthetic data, differential privacy, etc.) in achieving both legitimate commercial interests and protection of privacy?
  • Which PIPEDA principles would be subject to exceptions or relaxation?
  • What could be enhanced measures under a reformed Act to prevent re-identification?

Proposal 9: Require organizations to ensure data and algorithmic traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle

  • Is data traceability necessary, in an AI context, to ensure compliance with principles of data accuracy, transparency, access and correction, and accountability, or are there other effective ways to achieve meaningful compliance with these principles?

Proposal 10: Mandate demonstrable accountability for the development and implementation of AI processing

  • Would enhanced measures such as those we propose (record-keeping, third party audits, proactive inspections by the OPC) be effective means to ensure demonstrable accountability on the part of organizations?
  • What are the implementation considerations for the various measures identified?
  • What additional measures should be put in place to ensure that humans remain accountable for AI decisions?

Proposal 11: Empower the OPC to issue binding orders and financial penalties to organizations for non-compliance with the law

  • Do you agree that in order for AI to be implemented with respect for privacy and human rights, organizations need to be subject to enforceable penalties for non-compliance with the law?
  • Are there additional or alternative measures that could achieve the same objectives?

The Notice of Consultation and the Consultation Paper can be found on the OPC's website.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances. Specific questions relating to this article should be addressed directly to the author.