Highlights

  • On Jan. 25, 2024, the Federal Trade Commission (FTC) held a virtual FTC Tech Summit focused on artificial intelligence (AI) and its impact on competition and consumers.
  • The Tech Summit began with the announcement that the FTC launched a market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.
  • The FTC wants businesses to appreciate what's at stake when deploying AI technologies and recognize that the failure to understand underlying AI algorithms is not an excuse to escape liability.

On Jan. 25, 2024, the Federal Trade Commission (FTC) held a virtual FTC Tech Summit focused on artificial intelligence (AI) and its impact on competition and consumers. The half-day event featured a diverse set of speakers from the FTC, including Chair Lina Khan, Bureau of Competition Director Henry Liu, Bureau of Consumer Protection Director Sam Levine and Commissioners Rebecca Slaughter and Alvaro Bedoya. The Summit also included speakers from the Consumer Financial Protection Bureau (CFPB), journalists, entrepreneurs and academics. Organized by the FTC's Office of Technology, which was created less than a year ago, the Tech Summit began with introductory remarks from Chair Khan announcing the FTC's launch of a market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.

FTC Inquiry into Generative AI Investments and Partnerships

The FTC's inquiry seeks to "scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape." Because generative AI (GenAI) is rapidly evolving and has the potential to revolutionize numerous markets, the FTC is concerned that unfair competition could skew the rate and direction of that innovation. This initiative is not an isolated action; it is part of the FTC's recent efforts to establish legal benchmarks for competition and consumer protection in AI markets.

The FTC seeks to prevent the potential consolidation of GenAI market power into the hands of a few dominant companies. In the FTC's view, open and competitive markets pave the way for emerging technologies, like generative AI, to realize their maximum potential. However, the FTC is concerned that some large companies, particularly those with a stake in, or control over, crucial GenAI inputs, may attempt to monopolize critical processes and restrict competitors' access to other essential products. Dominant players may be more likely to acquire emerging rivals than to outperform them by developing superior products or services. Such scenarios, in the FTC's view, would allow these dominant players to leverage their market control to manipulate and restrain competition in the GenAI market.

Notably, the FTC is already investigating a number of companies over allegations that their products violated consumer protection laws by placing personal data at risk. However, the FTC's inquiry makes clear that it is not only the application of GenAI that requires investigation, but also that market power consolidation by a handful of AI leaders deserves government scrutiny.

As issues surrounding GenAI continue to develop, the FTC is poised to use several tools to identify and address unfair methods of competition. Presently, the FTC is applying its investigative power under Section 6(b) of the FTC Act, which allows the FTC to request information from a company without a specific law-enforcement purpose.

The FTC is seeking information from five companies regarding:

  • specific investments or partnerships, including the rationale for the investment or partnership
  • the practical implications of the partnership or investment
  • analysis of the competitive impact, including information related to market share, competition, potential for sales growth or expansion into product or geographic markets
  • competition for AI resources, including the competitive dynamics regarding key products and services needed for GenAI

The companies have 45 days to reply to the FTC's inquiry.

FTC Tech Summit

The FTC's market inquiry announcement is representative of the discussions that took place throughout the Tech Summit, which consisted of three panels, each moderated by the FTC. The Tech Summit focused on how market concentration impacts consumers and competition. The panel discussions provided insight into the key areas of concern for the FTC and illuminated issues that the industry should watch closely, namely 1) the lack of competition among chip developers and cloud service providers, 2) deceptive and ambiguous marketing, particularly as it relates to "AI safety" and "AI privacy," and 3) a possible move away from copyright law toward data and privacy law to regulate AI.

Panel 1: AI & Chips and Cloud Infrastructure

Alex Gaynor, the deputy chief technologist for the FTC, moderated the first panel, which focused on the lack of competition among chip developers and cloud service providers and the resulting impact on innovation and consumers. There was particular emphasis on the barriers to entry for chip startups. The panel discussed how startups must compete with established chip developers as well as major cloud providers that have started developing their own chips. Panelists shared their concern that this concentrated power will allow cloud service providers to preference their own vertically integrated business lines, increase prices and decrease quality. It was also noted that when providers develop their own chips, they gain unprecedented surveillance access because they obtain the ability to analyze the chip's memory.

Panelists explained that the scarcity of chip developers is driven by market forces: the startup costs involved in chip production require outside investors, and investors are hesitant to back startups that must compete with some of the world's largest companies. The panel also discussed the barriers customers face in switching providers and the difficulty of splitting their business among multiple providers. The discussion returned often to the way dominant firms control the market, which may allow them to charge excessive prices, impose extortionate terms and hamper innovation.

Panel 2: AI & Data and Models

The second panel was moderated by Krisha Cerilli, deputy assistant director of the FTC's Technology Enforcement Division. This panel similarly focused on competition, the ability of startups to enter the AI market and ways to protect consumers. The panel discussed how early-stage investors look for startups with "talent" that can be brought to market quickly and how this creates a funding gap for more research-oriented startups. The panel also discussed the increased restrictions on publicly available data and how investors are hesitant to fund startups that directly compete with established AI companies. The panel warned that the dominant firms' advantage from the last decade of commercial surveillance, coupled with their ability to make these data sets more robust, may create new forms of power asymmetries.

There was also a focus on how best to protect consumers. The panel suggested that copyright law is insufficient to protect consumers because the consumer interests at issue go beyond property rights. Some panelists argued that copyright leaves consumers without redress because dominant firms have the capital to lawfully purchase the relevant copyrights. These panelists instead suggested turning to data and privacy law to protect consumers. It was noted that a coalition exists in support of a new federal privacy law that extends beyond AI, and it was suggested that any such law should include a private right of action. The panel emphasized the need for data minimization and for regulatory solutions that integrate consumer protection and competition concerns. Overall, the panel emphasized that data collection has consumer protection and privacy implications.

Panel 3: AI & Consumer Applications

The third panel was moderated by Andy Hasty, an FTC attorney, and focused on consumer trust and deceptive marketing, the breadth and magnitude of potential harms, and the need for more upfront risk mitigation. As to consumer trust, the panel noted that consumers are excited about AI and that there is demand to integrate AI tools into everyday life, for example, through educational activities for children. However, the panel also emphasized the risks GenAI poses to consumers, particularly the lack of transparency. The panel expressed concern with deceptive and ambiguous marketing and obfuscation surrounding AI, and it was particularly troubled by use of the terms "safety" and "privacy." For example, the term "AI safety" originally referred to techniques for preventing rogue AI; to the general public, however, "safety" means that a product is unlikely to cause harm. Similarly, the panel noted a labeling problem with the term "AI privacy." Within the AI community, privacy refers to understanding the source of the data, where it is stored and what companies can do with it; in practice, consumers have trouble discerning where and what data is stored and how their data is used.

The discussion also delineated between present and existential harms. Several panelists urged industry and government to focus on the known and concrete risks of AI, such as discriminatory decision-making, arguing that a focus on abstract and uncertain harms is a distraction that prevents simple solutions to known problems. To that end, the panel discussed more upfront risk mitigation to prevent harm. There was also discussion of shifting the transparency burden from the consumer, who currently must investigate on their own, to the company, which should make its policies clear. Panelist Atur Desai, the CFPB's deputy chief technologist for law and strategy, noted that breaking the law should never be to a company's competitive advantage. Desai emphasized that existing financial laws apply to AI and that if a company cannot comply with federal consumer protection laws, then the company should not be using that AI technology.

Takeaways

The key takeaway from the FTC's AI Tech Summit and its launch of a market inquiry into generative AI investments and partnerships is that the FTC will be highly active in regulating AI. In the Tech Summit's opening remarks, Chair Khan noted that the trajectory of AI is not inevitable; rather, its future will be a direct result of the policy choices made today. Chair Khan spoke at length about the FTC's missteps in the Web 2.0 era and emphasized that policymakers across the government are eager to learn from those missteps as they navigate the challenges of AI.

The AI Tech Summit made clear that the FTC will not "wait and see" and is prepared to use the full scope of its authority to regulate AI and ensure that dominant firms do not exert undue influence or gain privileged access in ways that may undermine fair competition. The FTC has historically used its Section 6(b) investigative power to inform future agency positions and enforcement priorities. The FTC expressed that it wants to address the current market consolidation before the problem becomes too entrenched and noted the ways that limited access to key inputs, such as computing power and chips, hampers AI competition. Finally, the Commission expressed that it wants businesses to appreciate what's at stake when deploying AI technologies and to recognize that the failure to understand underlying AI algorithms is not an excuse to escape liability. The burden is on businesses to ask hard questions about how their AI systems work and to ensure that those systems are fair.

Best Practices

Companies should ensure that their use of AI technologies complies with all financial and consumer protection laws. Moreover, companies should heed the FTC's warning that an inadequate understanding of AI technology is not a shield from liability. To stay ahead of this issue, businesses must ensure that they have a full understanding of their AI partnerships and investments, including the practical implications and competitive impact. As a proactive measure, businesses should:

  • ensure that their use of AI technology complies with all financial and consumer protection laws
  • ensure that they understand how the AI technology they employ works
  • initiate an internal accounting of their current AI practices (one way to structure such an inventory is sketched below)
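
For businesses beginning that internal accounting, the following is a minimal illustrative sketch, in Python, of one way to structure such an inventory. The AISystemRecord fields, the needs_attention check and the example entry are assumptions chosen for illustration, not FTC-prescribed requirements, and any actual inventory should be tailored with counsel to the company's AI deployments.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AISystemRecord:
        """One entry in an internal accounting of a company's AI practices (illustrative)."""
        name: str                            # internal name of the AI tool or model
        vendor_or_partner: str               # developer, cloud provider or partner involved
        business_use: str                    # what the system is used for
        data_sources: List[str]              # categories of data the system ingests
        how_it_works: str                    # the team's documented understanding of the system
        legal_review_complete: bool = False  # reviewed against financial/consumer protection laws
        open_questions: List[str] = field(default_factory=list)

    def needs_attention(record: AISystemRecord) -> bool:
        """Flag systems lacking legal review or with unresolved questions."""
        return (not record.legal_review_complete) or bool(record.open_questions)

    # Hypothetical example entry
    inventory = [
        AISystemRecord(
            name="customer-support-chatbot",
            vendor_or_partner="Example GenAI Vendor",
            business_use="automated responses to customer billing questions",
            data_sources=["customer account data", "chat transcripts"],
            how_it_works="vendor-hosted model; internal testing of outputs only",
            open_questions=["How long does the vendor retain customer data?"],
        ),
    ]

    for record in inventory:
        if needs_attention(record):
            print(f"Review needed: {record.name}")

The point of the sketch is simply that each AI system, partnership and data flow is recorded in one place and flagged until the company can both explain how the system works and confirm that its legal review is complete.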
