What You Need to Know

Key takeaway #1

The USPTO has taken an approach focused on human governance and technical mitigations.

Key takeaway #2

The new guidance reminds practitioners of the relevant ethical considerations and Patent Office rules that are implicated by the use of AI.

On April 11, 2024, the USPTO published its "Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office" in the Federal Register (the "Guidance"). As the title suggests, the document provides the USPTO's views and guidance on the use of AI tools in prosecuting patent and trademark applications before the Office.

While the Guidance specifically recognizes the potential for AI tools to bring great benefits to society, it also acknowledges the need to "cabin the risks from the use of AI in practice" through "human governance" and "technical mitigations." Those two checks on AI serve as the overall theme of the Guidance.

The document first outlines the particular rules and policies that are implicated by the use of AI tools in preparing and prosecuting applications, including:

(1) The Duty of Candor and Good Faith

(2) Signature Requirement and Corresponding Certifications

(3) Confidentiality of Information

(4) Foreign Filing License and Export Regulations

(5) USPTO Electronic Systems' Policies

(6) Duties Owed to Clients, including competent and diligent representation of a client

The Guidance then discusses the use of AI for (A) Preparing documents for filing with the USPTO, (B) Filing documents with the USPTO, (C) Accessing USPTO IT Systems, (D) Confidentiality and National Security considerations, and (E) Fraud and Misconduct.

Human Governance

To maintain their ethical obligations when using AI, patent practitioners must oversee and sign off on the output of these tools. This theme is clear across the first two areas of AI use that the Guidance covers: preparing and filing documents with the USPTO.

Papers filed with the USPTO must be signed by the party or parties presenting the paper. If an AI system was used to draft or edit a document, the practitioner signing the document remains responsible for its contents and must ensure that all statements "are true to their own knowledge and made based on information that is believed to be true." The Guidance makes clear that practitioners must review the outputs of the AI to confirm that all of the information is accurate. For example, if AI assists with finding prior art and preparing an information disclosure statement (IDS), the practitioner must review the IDS and remove "clearly irrelevant and marginally pertinent cumulative information."

The Guidance also states that a practitioner must ensure that there are no important omissions from the papers. While the USPTO does not require a practitioner to disclose that a paper was prepared with assistance from AI, there may be instances where disclosure of that use is required. One example used repeatedly in the Guidance is the use of an AI tool to draft claims: "[I]f an AI system is used to draft patent claims that are submitted for examination, but an individual listed in 37 CFR 1.56(c) has knowledge that one or more of the claims did not have a significant contribution by a human inventor, that information must be disclosed to the USPTO."

Technical Mitigations

The Guidance also discusses the importance of placing technical safeguards around these new tools to prevent unauthorized access and breaches of confidentiality.

The Guidance makes clear that the user of an AI tool bears responsibility for any negative consequences of its use, even those arising from the more technical aspects of the tool. Users must ensure that the tool does not retain confidential information in a way that could be used to train the AI model or disclosed to third parties. Additionally, if an AI tool uses servers located outside the United States, its use could implicate national security, export control, and foreign filing license issues.

By ensuring that the proper technical mitigations are in place, practitioners can reap the benefits of these tools without implicating any of the issues outlined above.

Conclusion

It is encouraging to see the agency whose mission is to drive innovation embrace the use of one of the most innovative – and powerful – technologies of the current age. The Guidance attempts to strike a balance between the risks and rewards of this new technology not by creating new rules, but rather by clarifying how the use of AI is impacted by the rules and policies that are already in place.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.