AI relies heavily on vast amounts of data, often acquired through large-scale data scraping. This data can contain sensitive information, creating the possibility that an individual's private data is exposed in AI-generated outputs. Privacy laws have emerged as a primary mechanism for mitigating the risks associated with AI's role in making decisions that carry legal and societal implications.

Emerging laws seek to ensure the responsible use of personal information by AI, with an emphasis on empowering individuals to control their own data when it is used in automated decision-making. These developing legal frameworks also impose new responsibilities on organizations, requiring assessment and compliance. Companies integrating AI should understand the specific requirements of the laws that apply to their operations.

The following sections give an overview of existing privacy regulations and their intersection with emerging AI regulation in two prominent jurisdictions: the European Union and the United States. First, the EU's risk-based methodology, as delineated in the AI Act and the General Data Protection Regulation (GDPR), is explored. Second, U.S. state-level regulation is examined with respect to California, Colorado, and Texas law. The section concludes with a discussion of the bipartisan American Data Privacy and Protection Act (ADPPA).

The EU's risk-based approach: AI and privacy by design

In April 2021, the European Commission proposed the first regulatory framework for AI: the AI Act. Negotiations have started, with the objective of reaching consensus by the end of this year. The AI Act would require companies to analyze the risks associated with their systems and to develop protections commensurate with each risk level. This may reduce uncertainties that might otherwise unnecessarily obstruct the advancement of AI applications. However, the AI Act provides limited technical guidance and only briefly addresses data privacy implications.

The GDPR emphasizes "privacy by design and by default" to proactively prevent misuse through technical and organizational measures. Though it does not explicitly address AI, the GDPR contains many provisions relevant to AI, some of which are challenged by the new ways AI enables processing of personal data. Traditional data protection principles, such as purpose limitation, data minimization, sensitive data handling, and restrictions on automated decisions, are in tension with the full computing potential of AI and big data.

The following outlines the GDPR provisions relevant to the AI Act's risk-based approach. Together, the two laws provide guidance for privacy compliance for AI applications.

Personal data: identification, identifiability, re-identification

The definition of personal data in the GDPR outlines the scope of the law. Article 4(1) defines personal data as "any information relating to an identified or identifiable natural person," where an identifiable natural person is one who can be identified, directly or indirectly, by reference to an identifier. Thus, the GDPR covers personal data related to an identified or identifiable natural person, excluding non-specific or effectively anonymized information. If the potential exists to identify the person in question, the information qualifies as personal data, even when not explicitly connected to an individual.

Identifiability depends on the availability of "means reasonably likely to be used" for successful re-identification. AI introduces two critical concerns: (1) the re-personalization of anonymized data and (2) the inference of additional personal information from existing data. Because AI tools synthesize enormous amounts of information, there is a risk that algorithms will be able to re-identify the individuals from whom the anonymized personal data was collected. There is also a risk that AI tools will be able to predict, or infer, additional sensitive information from anonymized data based on captured trends. Controllers must recognize the risk that anonymized data can be reconnected to individuals and assess whether data can truly be anonymized in this context.
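
To make the re-identification risk concrete, the following minimal Python sketch (all column names and records are hypothetical) shows how a linkage attack can re-attach names to an "anonymized" dataset by joining on quasi-identifiers such as ZIP code, birth date, and gender:

```python
import pandas as pd

# Hypothetical "anonymized" dataset: direct identifiers removed,
# but quasi-identifiers (zip, birth_date, gender) remain.
anonymized = pd.DataFrame({
    "zip": ["60601", "60601", "94105"],
    "birth_date": ["1980-03-14", "1975-07-02", "1990-11-23"],
    "gender": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],  # sensitive attribute
})

# Hypothetical auxiliary dataset an attacker could plausibly obtain
# (e.g., a public voter roll) that still carries names.
auxiliary = pd.DataFrame({
    "name": ["Alice Example", "Bob Example"],
    "zip": ["60601", "60601"],
    "birth_date": ["1980-03-14", "1975-07-02"],
    "gender": ["F", "M"],
})

# A simple join on the quasi-identifiers re-attaches names to records
# that were assumed to be anonymous.
reidentified = anonymized.merge(auxiliary, on=["zip", "birth_date", "gender"])
print(reidentified[["name", "diagnosis"]])
```

When a combination of quasi-identifiers is unique, a single join suffices, which is why identifiability turns on the "means reasonably likely to be used" rather than only on the data a controller itself holds.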

Responsibility of the controller

Article 24 of the GDPR mandates that the persons or bodies that determine the purposes and means of the processing of personal data, i.e., controllers, establish and demonstrate data processing compliance through appropriate technical and organizational measures. In the context of AI, these measures involve ensuring the adequacy and completeness of training data, assessing the validity of inferences, and identifying causes of bias and unfairness.

Data protection by design and by default

Article 25 of the GDPR requires appropriate technical and organizational measures both at the time the means of processing are determined and during the processing itself. By default, AI tools should process only the minimum personal data necessary for each specific purpose; this obligation extends to the quantity of data collected, the extent of processing, the storage duration, and accessibility.
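
As an illustration only (the GDPR prescribes no particular implementation), a data-protection-by-default pattern might map each declared processing purpose to an allowlist of fields and a retention period, discarding everything else at the point of collection. The purpose names and fields below are hypothetical:

```python
from datetime import timedelta

# Hypothetical purpose registry: each purpose declares the minimum
# fields it needs and a maximum retention period.
PURPOSES = {
    "order_fulfillment": {
        "fields": {"name", "email", "shipping_address"},
        "retention": timedelta(days=90),
    },
    "model_training": {
        # Pseudonymous ID only; no direct identifiers by default.
        "fields": {"user_pseudonym", "interaction_events"},
        "retention": timedelta(days=30),
    },
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields the declared purpose needs; drop the rest."""
    allowed = PURPOSES[purpose]["fields"]
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "shipping_address": "1 Main St",
    "browser_fingerprint": "f3a9...",  # never needed, so never stored
    "user_pseudonym": "u_81c2",
}
print(minimize(raw, "order_fulfillment"))
# Only name, email, and shipping_address survive collection.
```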

Data protection impact assessment

Articles 35 and 36 are particularly important for ensuring GDPR-compliant AI applications. The GDPR requires a data protection impact assessment (DPIA) for processing likely to pose a high risk to individuals' rights and freedoms, specifically when the processing involves systematic and extensive automated profiling. These potential risks become particularly relevant when AI contributes to automated decision-making about individuals.

Codes of conduct and certification

The codes of conduct and certification requirements of Articles 40-43 are highly relevant to AI, given its associated risks and the limited legal guidance. Adhering to codes of conduct and certification can support a demonstration of compliance with controller obligations and the requirements for privacy by design. These mechanisms could address both algorithms and their application context.

However, certification requirements and codes of conduct — in combination with the requirement to demonstrate compliance — may lead to formalistic practices, rather than genuine data subject protection. Much of the development of code of conduct requirements will depend on the extent to which data protection authorities oversee these flexible legal instruments.

The United States: a call for federal regulation in the face of state-level law

In January 2023, the National Institute of Standards and Technology issued the AI Risk Management Framework (AI RMF) to provide guidance for using, designing, or deploying AI systems. The framework is voluntary, with no penalties for non-compliance.

Explicit privacy considerations are minimal: The AI RMF advises that broad privacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment. It emphasizes that AI systems can introduce new privacy risks by enabling the identification of individuals and uncovering previously private information.

Despite the absence of comprehensive federal privacy regulation, new state privacy laws continue to emerge. The following is a summary of key takeaways for companies currently employing or intending to implement AI in their operations:

1. California Privacy Rights Act

The California Privacy Rights Act (CPRA), effective Jan. 1, 2023, amends and expands the California Consumer Privacy Act (CCPA) with provisions impacting AI, including stricter limits on data retention, sharing, and handling of sensitive personal data. The CPRA introduces a new "profiling" definition, giving consumers opt-out rights regarding "automated decision-making technology" used by businesses.

The California Privacy Protection Agency (CPPA) is tasked with issuing regulations governing access and opt-out rights related to this technology, including providing transparent information about decision logic and likely outcomes. Importantly, the CPPA's mandate is wide-ranging and not limited solely to automated decisions or those with legal effects. As of now, the CPPA has sought public comment but has not issued rules regarding automated decision-making.

In the initial phase of its regulatory efforts, the CPPA is focusing on formulating a definition of "automated decision-making technology" (ADMT), aiming to define the term as broadly as feasible. Under the current proposal:

"Automated Decisionmaking Technology means any system, software, or process — including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques — that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision making. ADMT includes profiling."

2. Colorado Privacy Act

The Colorado Privacy Act (CPA) permits consumers to opt out of profiling for automated decisions. It also mandates a data protection assessment for activities with a "heightened risk of harm," including targeted advertising and specific profiling methods.

3. Texas Data Privacy & Security Act

The Texas Data Privacy & Security Act (TDPSA) allows consumers to opt out of targeted advertising and profiling. It also mandates data protection assessments for specific controllers. For controllers holding de-identified data (data from which explicit identifiers of an individual have been removed), the TDPSA requires reasonable efforts to prevent re-identification (the reattribution of data to an individual), a public commitment to responsible use, and contractual obligations for data recipients to comply with TDPSA provisions.
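
One way, among many and not mandated by the TDPSA, to operationalize "reasonable efforts" against re-identification is to measure the k-anonymity of a dataset before release: every combination of quasi-identifier values should be shared by at least k records. A minimal Python sketch with hypothetical fields and an assumed internal threshold:

```python
import pandas as pd

def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    """Return the size of the smallest group of records sharing the same
    quasi-identifier values; the dataset is k-anonymous for this k."""
    return int(df.groupby(quasi_identifiers).size().min())

deidentified = pd.DataFrame({
    "zip3": ["606", "606", "606", "941"],  # generalized ZIP prefix
    "age_band": ["30-39", "30-39", "30-39", "20-29"],
    "diagnosis": ["diabetes", "asthma", "flu", "hypertension"],
})

k = k_anonymity(deidentified, ["zip3", "age_band"])
if k < 5:  # hypothetical internal threshold
    print(f"k={k}: generalize or suppress rare groups before release")
```

Here the single record in the "941"/"20-29" group makes the dataset only 1-anonymous, signaling that further generalization or suppression is needed before the data could reasonably be treated as de-identified.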

4. Risk assessments

Similar to GDPR Articles 35 and 36, the CPA, the TDPSA, and the CPRA amendments require data controllers to conduct DPIAs for processing activities posing a "heightened risk of harm to a consumer." Much like the AI Act's risk-based approach, these high-risk activities typically encompass selling personal information, targeted advertising using personal data, and processing personal data for profiling that involves foreseeable risks such as unfair treatment; financial, physical, or reputational harm; intrusion on privacy; or substantial injury to the consumer.

These impact assessments must identify and evaluate the risks and benefits of the processing for consumers, the controller, other stakeholders, and the public. These assessments are not public but must be made available to the state attorney general upon request, pursuant to an investigative civil demand.
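
To illustrate how a compliance team might encode these statutory triggers (the statutes specify no implementation, and the categories below are simplified and hypothetical), a processing activity could be screened against the heightened-risk list before launch:

```python
from dataclasses import dataclass, field

# Simplified, non-exhaustive triggers drawn from the "heightened risk
# of harm" categories discussed above.
HIGH_RISK_TRIGGERS = {
    "sale_of_personal_data",
    "targeted_advertising",
    "profiling_with_foreseeable_harm",
}

@dataclass
class ProcessingActivity:
    name: str
    characteristics: set = field(default_factory=set)

    def requires_dpia(self) -> bool:
        """A DPIA is required if any heightened-risk trigger applies."""
        return bool(self.characteristics & HIGH_RISK_TRIGGERS)

activity = ProcessingActivity(
    name="recommendation_model",
    characteristics={"targeted_advertising", "first_party_analytics"},
)
assert activity.requires_dpia()  # targeted advertising triggers an assessment
```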

5. Federal-level movement: the American Data Privacy & Protection Act

Recent federal privacy legislation, including the bipartisan American Data Privacy and Protection Act (ADPPA), contains provisions addressing algorithmic accountability and fairness. The ADPPA, reported by the House Energy & Commerce Committee in July 2022 and set for reintroduction this year, features AI and privacy-related provisions.

(1) The bill imposes limits on personal information collection, use, and sharing, mandating that such actions be "necessary and proportionate" for providing products, services, or other specified purposes.

(2) The ADPPA bolsters civil rights protections in the digital realm by prohibiting discriminatory use of personal information, echoing the White House's AI Bill of Rights' concerns about bias in algorithms perpetuating historical inequities.

(3) The civil rights section also includes provisions for algorithmic assessment and accountability, mirroring state-level initiatives to enhance transparency and accountability for AI systems.

On March 1, 2023, the House Subcommittee on Innovation, Data, and Commerce held a hearing addressing the bill's reintroduction.

On Oct. 30, 2023, President Biden signed a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order calls on Congress to pass bipartisan data privacy legislation to protect all Americans, including prioritizing the use of privacy-preserving techniques and strengthening privacy guidance for federal agencies.

Conclusion

Many existing privacy regulations impact data processing within AI systems, while some emerging AI regulations and guidance emphasize adherence to existing privacy regulation. The potential synergy between AI and privacy regulations, and their future implications for businesses, provides an analytic framework through which to approach responsible AI governance.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.