In response to the rapid growth of generative artificial intelligence (generative AI or GAI), several federal government agencies have announced initiatives related to the use of artificial intelligence (AI) and automated systems and efforts to minimize the potential threats stemming from the misuse of this powerful technology. As the use of AI becomes integrated into our daily lives and employee work routines, and companies begin to leverage such technology in their solutions provided to the government, it is important to understand the developing federal government compliance infrastructure and the potential risks stemming from the misuse of AI and automated systems.

This BRIEFING PAPER will cover some of these agency initiatives and some of the broader issues raised by the use of GAI. Some of these issues are specific to doing business with the government; others apply to all companies. Because many employees are experimenting with AI in connection with their work, it is important for companies to set guardrails to avoid unwanted legal issues. Many companies are developing a corporate policy on employee use of AI. This PAPER will discuss why companies need one and what it should include.

Federal Government Initiatives

Federal government agencies have announced initiatives that seek to leverage their collective authorities to monitor the development and use of AI and automated systems. On April 21, 2023, the Secretary of Homeland Security, Alejandro N. Mayorkas, announced a new initiative that seeks to combat evolving threats, including the revolution created by GAI.1 The Secretary announced the first-ever Department of Homeland Security (DHS) AI Task Force, which will drive specific applications of AI to advance critical homeland security missions including:

  • Integrating AI to enhance the integrity of supply chains and the broader trade environment, such as deploying AI to improve screening of cargo and identifying the importation of goods produced with forced labor;
  • Leveraging AI to counter the flow of fentanyl into the United States by improving detection of fentanyl shipments, identifying and interdicting the flow of precursor chemicals worldwide, and targeting for disruption key nodes in the criminal networks;
  • Applying AI to digital forensic tools to improve identification, location, and rescue of victims of online child sexual exploitation and apprehend the perpetrators of this heinous crime; and
  • Collaborating with government, industry, and academia partners to assess the impact of AI on DHS's ability to secure critical infrastructure.2

Additionally, on April 25, 2023, officials from the Federal Trade Commission (FTC), the Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the U.S. Equal Employment Opportunity Commission (EEOC) released a joint statement on "Enforcement Efforts Against Discrimination and Bias in Automated Systems."3 The joint statement outlines each agency's commitment to enforcing its respective legal and regulatory authorities to ensure responsible innovation in the AI space.4 As described below, the agencies take a broad view of the term "automated systems" for purposes of their respective efforts:

Today, the use of automated systems, including those sometimes marketed as "artificial intelligence" or "AI," is becoming increasingly common in our daily lives. We use the term "automated systems" broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions. Private and public entities use these systems to make critical decisions that impact individuals' rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services. These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.5


Footnotes

1. Department of Homeland Security, Secretary Mayorkas Announces New Measures to Tackle A.I., PRC Challenges at First State of Homeland Security Address (Apr. 21, 2023), https://www.dhs.gov/news/2023/04/21/secretary-mayorkas-announces-new-measures-tackle-ai-prc-challenges-first-state.

2. Id.

3. Federal Trade Commission, Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (Apr. 25, 2023), EEOC-CRT-FTC-CFPB-AI-JointStatement(final).pdf.

4. Id. at 2–3.

5. Id. at 1.

Originally Published by Thomson Reuters
