OpenAI Eases Procedure To Opt Out Of Inputs Being Used For Training Purposes

A quick update on a new development with OpenAI's ChatGPT. One concern raised by ChatGPT users is OpenAI's ability to use queries to train the GPT model, which could potentially expose confidential information to third parties. In our prior post on ChatGPT risks and the need for corporate policies, we advised that an organization concerned about this confidentiality issue could use ChatGPT's opt-out form to exclude its inputs from the training process.

On April 25, 2023, OpenAI made the opt-out process easier, announcing a new settings option that lets users disable both the display of chats in the history sidebar and the use of chats to improve ChatGPT via model training. The announcement noted, however, that even when this option is selected, OpenAI will still retain conversations for thirty days and "review them only when needed to monitor for abuse, before permanently deleting." Users can find the toggle in the Settings menu under "Data Controls." In addition, a new "Export" option in Settings allows users to export their ChatGPT data and receive a copy via email.

Prior to this development, users could already elect to exclude their inputs from model training, but some found the process of submitting the opt-out form cumbersome. The new toggle simplifies the process considerably and should allow more users to take advantage of this confidentiality feature.
