On March 31, 2023, Italy's privacy regulator (the "Garante") announced an immediate temporary limitation on the processing of data of individuals residing in Italy by OpenAI's ChatGPT, the popular artificial intelligence (AI)-powered chatbot. The Garante expressed concerns that OpenAI had unlawfully processed people's data, lacked a system to prevent minors from accessing the technology, and, because of a software bug, had allowed some users to see titles from other active users' chat histories, among other issues.

The Garante's order identified several areas of concern with ChatGPT, including allegations of the following: the lack of information provided to users whose data is collected by and processed through ChatGPT; the lack of an appropriate legal basis for collecting and processing personal data to train ChatGPT's underlying algorithms; the lack of an age verification mechanism for ChatGPT users; and the inaccuracy of some of the information ChatGPT provides. The Garante asserts that ChatGPT violates Articles 5, 6, 8, 13, and 25 of the GDPR. OpenAI has 20 days to inform the Garante of the measures it is taking to address the alleged violations, or it may face a fine of up to four percent of its annual global revenue. In response, OpenAI CEO Sam Altman announced on Twitter that OpenAI had stopped offering ChatGPT in Italy for the time being, while maintaining that the company believes it complies with all privacy laws.

This action by the Garante follows other recent public expressions of concern about ChatGPT in both the United States and the European Union (EU). On March 30, 2023, the Center for Artificial Intelligence and Digital Policy filed a complaint with the U.S. Federal Trade Commission (FTC), asking the FTC to stop OpenAI from issuing new commercial releases of GPT-4, OpenAI's most advanced system. Shortly thereafter, the European Consumer Organisation (BEUC) called on EU and national authorities to launch an investigation into ChatGPT and similar chatbots. Earlier in March, the FTC published several blog posts warning companies not to exaggerate claims about their use of AI and advising them to consider the deceptive or unfair conduct that can arise from AI tools that generate synthetic media.

Separately, on March 29, 2023, more than 1,000 technologists and researchers signed an open letter urging AI labs to pause development of advanced AI systems, warning that AI tools present "profound risks to society and humanity."

Regulatory activity such as the Garante's is one of many legal and business risks associated with chatbots and generative AI. As companies experiment with chatbots and integrate them into their business functions and services, they should recognize these attendant risks and take steps to mitigate them.