As breakthroughs in artificial intelligence and related technologies gallop forward at what feels like a frenzied pace, lawmakers in Congress and officials from a wide range of federal regulatory agencies are increasingly focused on enacting policies governing these disruptive innovations – seeking to seize the initiative while often being forced to play catch-up.

President Trump announced the American AI Initiative, “a concerted effort to promote and protect national AI technology and innovation,” and in February of this year signed an executive order, “Maintaining American Leadership in Artificial Intelligence.” As federal agencies develop policies to address the growing role of AI and related technologies in the issues and programs under their jurisdiction, the Administration has projected nearly $1 billion in additional non-defense R&D investment in networking and IT, according to a supplement to the president’s FY 2020 budget request issued in September, although it is unclear how much of this supplemental investment is new spending.

The same February executive order tasked the National Institute of Standards and Technology with developing “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” In doing so, NIST has sought to establish US leadership by setting standards that will shape the development of AI technology worldwide.

At the same time, the White House, through its Office of Information and Regulatory Affairs, is expected soon to release guidance to federal agencies about regulating specific applications of AI. In September, the Chief Technology Officer of the United States, Michael Kratsios, proclaimed, “This will be the first document that has legal force around the way that agencies should be looking at regulating artificial intelligence technologies. I think it will set the tone globally on the way that we can be pro-innovation while also protecting American safety.”

Bipartisan AI caucuses have been launched in the US Senate and House to better coordinate policymaking on Capitol Hill, even as lawmakers scramble, sometimes on an ad hoc basis, to draft legislation that keeps pace with the degree to which AI has already permeated virtually every facet of 21st-century life.

As the annual appropriations process to fund government agencies moves along, it seems that practically every spending measure contains at least some passing reference to AI or machine learning (ML). The promise that facial and voice recognition technologies offer for effective law enforcement and public safety, and their potential ethical perils and privacy implications, are now well established as Congressional priorities and preoccupations. AI’s role in how we live and work is an increasingly frequent subject of informational and oversight hearings as members of Congress seek to educate the public, and themselves, about these new realities.

In addition to staying on top of rapid advances in science and engineering within industry, senior US officials also worry that the US needs to keep pace with other countries, both friends and adversaries, that are investing in R&D and training their workforces to use AI-enabled devices.

In every issue of AI Outlook, we will review the latest developments around AI in Washington and discuss what these bills and trends mean for your business. Please enjoy.

Tony Samp

Policy Advisor

Steven R. Phillips

Co-Chair, Federal Law and Policy

THE LEGISLATIVE OUTLOOK IN CONGRESS

In the first six months of the 116th Congress, more than 20 separate pieces of legislation were introduced related to AI, machine learning and facial recognition. Concerns expressed by some in the private sector, coupled with congressional hearings on regulating or outright banning certain AI applications such as facial recognition and autonomous vehicles, pose a potential challenge both to the technology as a whole and to policymakers’ ability to support further AI development.

Reflecting elected officials’ recognition of the need for a comprehensive and coordinated AI strategy, Senator Martin Heinrich (D-NM), founding co-chair of the Senate AI Caucus, has introduced the Artificial Intelligence Initiative Act (AI-IA, S 1558). Senator Rob Portman (R-OH), co-chair of the AI Caucus, is the lead co-sponsor of the bipartisan initiative. The legislation would authorize a combined federal investment of $2.2 billion “to accelerate research and development on artificial intelligence for the economic and national security of the United States,” as stated in the bill’s title.

Under the proposed legislation, now pending before the Senate Committee on Commerce, Science, and Transportation, the President would be required to establish the National AI R&D Initiative to coordinate between the federal government, the private sector, NGOs and institutions of higher education. The Office of Science and Technology Policy would be mandated to establish both an Interagency Committee on AI, charged with developing five-year strategic plans, and a National AI Coordination Office. The bill calls on the National Science Foundation to create a National AI Advisory Committee to provide defense and non-defense AI expertise to the Coordination Office, establish a research and education program on AI and engineering, and award grants for the establishment of up to five multidisciplinary research and education centers. Responsibility for developing measurements and standards would fall to the National Institute of Standards and Technology (NIST), while the Department of Energy (DOE) would conduct an AI R&D program and provide grants to establish up to five AI research centers.

The specific investments in the proposed legislation include the following (the arithmetic is traced in the short sketch after the list):

  • $2.2 billion total over five years
    • $1.5 billion – Department of Energy selects up to five institutions of higher learning and national laboratories to serve as AI R&D Centers (these centers could partner with private industry)
      • $60 million per year per AI R&D Center ($60 million x 5 centers = up to $300 million annually; $300 million x 5 years = $1.5 billion over five years)
      • Purpose: ensure AI researchers and educators in academia and the private sector have access to state-of-the-art computing resources to make scientific discoveries, conduct advanced research and support technology transfer.
    • $500 million – National Science Foundation selects up to five institutions of higher learning to serve as AI Education and Research Centers (these centers could partner with private industry; at least one center must have K-12 education as its primary focus; one center must be a minority-serving institution)
      • $20 million per year per AI Education and Research Center ($20 million x 5 centers = up to $100 million annually; $100 million x 5 years = $500 million over five years)
      • Purpose: study algorithm accountability, explainability, data bias and privacy, as well as the societal and ethical implications of AI.
    • $200 million – National Institute of Standards and Technology (NIST)
      • $40 million per year ($40 million x 5 years = $200 million over five years) to work with stakeholders to develop standards and metrics for AI cybersecurity, algorithm accountability, explainability and trustworthiness.
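For readers who want to verify how those figures roll up, the following is a minimal sketch in Python using only the amounts stated in the bill summary above (the dictionary labels are our shorthand, not terms from the legislation):

    # Funding in the Artificial Intelligence Initiative Act (S 1558), as
    # summarized above. Values are annual amounts over a five-year horizon.
    YEARS = 5
    annual_funding = {
        "DOE AI R&D Centers": 60_000_000 * 5,  # $60M/yr x up to 5 centers
        "NSF AI Education and Research Centers": 20_000_000 * 5,  # $20M/yr x up to 5 centers
        "NIST standards and metrics": 40_000_000,  # $40M/yr
    }

    for program, annual in annual_funding.items():
        print(f"{program}: ${annual * YEARS / 1e9:.1f} billion over {YEARS} years")

    total = sum(annual * YEARS for annual in annual_funding.values())
    print(f"Total: ${total / 1e9:.1f} billion")  # matches the bill's $2.2 billion figure

Running the sketch confirms that the line items sum to the bill’s $2.2 billion top line.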

A bill introduced in the House by Representative Daniel Lipinski (D-IL), the Growing Artificial Intelligence Through Research (GrAITR) Act (HR 2202), is closely related, though not identical, to the Senate proposal. It would create a strategic plan to invest $1.6 billion over ten years in US research, development, and application of AI across the private sector, academia and government agencies, including NIST, the National Science Foundation (NSF), and DOE. While praising the high degree of expertise and creativity among US scientists and engineers in this emerging field, Lipinski lamented in an op-ed for The Hill shortly after the president’s executive order that “We’ve got a great team, but we don’t yet have a plan.” The legislation is pending before the House Committee on Science, Space, and Technology, of which Lipinski is a senior member.

While those more comprehensive proposals await consideration, the Senate has taken action on legislation specifically targeting one of the darker implications of AI/ML: deepfakes, the AI-based technique that can manipulate video images in often malicious and deceptive ways. The Deepfake Report Act of 2019 (S 2065), sponsored by Senators Portman, Heinrich and seven other senators from both sides of the aisle, passed the Senate on October 25 by unanimous consent. The legislation requires the Department of Homeland Security to assess the technology used to generate deepfakes, their uses by foreign and domestic entities, and available countermeasures, to help policymakers and the public better understand the threats deepfakes pose to election security and national security. The bill now heads to the US House of Representatives, where very similar legislation with a slightly different title, the Deepfakes Report Act of 2019 (HR 3600), has been introduced by Representative Derek Kilmer (D-WA) and is awaiting consideration by the House Energy and Commerce Committee.

A number of other legislative approaches to meet the challenge of implementing effective public policy in the AI space have been introduced in the current session of Congress, including the following:

The Algorithmic Accountability Act (S 1108, HR 2231), sponsored by Senator Ron Wyden (D-OR) and Representative Yvette Clarke (D-NY), respectively, would require companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions. In announcing the legislation, Sen. Wyden said that “instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color. Our bill requires companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find.”

In an effort to remedy inaccurate or unfair AI systems, the legislation would:

  • Authorize the Federal Trade Commission (FTC) to create regulations requiring companies under its jurisdiction to conduct impact assessments of highly sensitive automated decision systems. This requirement would apply both to new and existing systems.
  • Require companies to assess their use of automated decision systems, including training data, for impacts on accuracy, fairness, bias, discrimination, privacy and security.
  • Require companies to evaluate how their information systems protect the privacy and security of consumers’ personal information.
  • Require companies to correct any issues they discover during the impact assessments.

The sponsors noted that the obligations in the bill apply only to companies that are already regulated by the FTC and generate more than $50 million per year in revenue. Data brokers and companies that hold data on more than 1 million consumers or consumer devices are covered, however, regardless of their revenue. The legislation has been endorsed by Data for Black Lives, the Center on Privacy and Technology at Georgetown Law, and the National Hispanic Media Coalition.

The potential impact of this legislation on the tech industry is significant. Additionally, companies in a wide range of sectors that use AI systems, from banks and insurance companies to retailers and other consumer businesses, are likely to be affected should the measure become law. Besides concerns about the compliance burdens the proposal would create, critics say the measure includes overly broad definitions of high-risk automated decision systems. “The Algorithmic Accountability Act, if implemented as written, would create overreaching regulations that would still not protect consumers against many potential algorithmic harms while also inhibiting benign and beneficial applications of algorithms,” the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy, wrote in a September 23 article titled “How to Fix the Algorithmic Accountability Act.”

While the measure has thus far attracted only Democratic co-sponsors in the House and Senate and may not become law in its current form, policymakers can be expected to continue to focus on the potential for AI bias and to look for public policy solutions. Companies and other organizations that develop or use AI systems are well advised to be vigilant about the potential for biased decision-making and to take whatever steps they can to prevent these problems or to promptly address them if they arise.

The AI in Government Act (S 1363, HR 2575), sponsored respectively by Senator Brian Schatz (D-HI) and Representative Jerry McNerney (D-CA), co-chair of the House AI Caucus, would improve the use of AI across the federal government by providing resources and directing federal agencies to include AI in data-related planning. Both bills have bipartisan co-sponsorship in their respective chambers.

The Future Defense Artificial Intelligence Technology Assessment (Future DATA) Act (HR 2432), sponsored by Representative Neal Dunn (R-FL), mandates the issuance of a report to Congress by the Secretary of Defense, in consultation with the Joint Artificial Intelligence Center, on the Pentagon’s AI strategy. A provision incorporating this legislation was included in the version of the National Defense Authorization Act (NDAA) that passed the House earlier this year.

The Artificial Intelligence Job Opportunities Act of 2019 (AI JOBS Act) (HR 827), sponsored by Representative Darren Soto (D-FL) with bipartisan co-sponsorship, would require the Secretary of Labor, in collaboration with a broad range of stakeholders in education, industry, the service sector, national laboratories and other federal agencies, to submit to Congress a report on the impact of AI on the workforce.

Representative Brenda Lawrence (D-MI) has introduced a resolution in the House, Supporting the development of guidelines for ethical development of artificial intelligence (H Res 153), which calls for guidelines for the ethical development of AI. The resolution has been endorsed by the Future of Life Institute, BSA | The Software Alliance, IBM, Facebook and Adobe.

The Commercial Facial Recognition Privacy Act (S 847), sponsored by Senator Roy Blunt (R-MO), would strengthen consumer protections by prohibiting commercial users of facial recognition technology from collecting and re-sharing data for identifying or tracking consumers without their consent. Microsoft and the Center for Democracy & Technology have expressed support for the goals of the bipartisan measure.

The Armed Forces Digital Advantage Act (S 1471), sponsored by Senators Heinrich and Portman, would require the Under Secretary of Defense for Personnel and Readiness to develop and implement a policy to promote and maintain digital competencies such as AI and ML. Most of the provisions of the bill have been incorporated into the Senate-passed version of the NDAA.

The foregoing is not an exhaustive list. As noted above, AI-related provisions are being attached to a broad range of legislative proposals. While the individual bills cited above have not yet advanced in committee, the AI issue is clearly gaining traction in Congress, and provisions from some of these initiatives could find their way into larger appropriations or authorization legislation. We will continue to closely monitor developments in the field of AI public policymaking and provide regular updates on what decision makers in the nation’s capital are doing to address the challenges and opportunities of the AI revolution.


The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.