Citing the proliferation of online bots used to deceive consumers and influence voters, the California legislature recently passed the nation's first law directly regulating online bots. Enacted on September 28, 2018, SB 1001 prohibits the use of online bots in a deceptive or misleading manner for certain commercial or political purposes. The law will become operative on July 1, 2019, and could be emulated by legislators in other jurisdictions who have voiced concerns about online bots in the wake of the 2016 election.

Overview

The law defines a "bot" as "an automated online account where all or substantially all of the actions or posts of that account are not the result of a person."

The law makes it unlawful "to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication" if the communication aims to:

  • "incentivize a purchase or sale of goods or services in a commercial transaction" or
  • "influence a vote in an election."

While the law does not require disclosure that a bot is not human, an organization that uses a bot can avoid liability under the law altogether if it makes such a disclosure and the disclosure is "clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot."

Giving this disclosure, however, is not a safe harbor from liability under other laws, as SB 1001 provides that it is "cumulative with any other duties or obligation imposed by any other law." For example, an organization that gives the disclosure may escape liability under the anti-bot law, but still face liability under California's Unfair Competition Law (UCL) if the conduct nonetheless constitutes an unlawful, fraudulent or unfair business practice.

The law does not specify its own internal enforcement mechanism, but California's Attorney General (and potentially California district attorneys and local prosecutors) can seek civil penalties of up to $2,500 per violation of the law under the UCL. In addition, private plaintiffs may attempt to use violations of the law as a predicate for private lawsuits authorized under the UCL for unlawful, fraudulent or unfair business practices.

Our Take

SB 1001 will give organizations that use bots in a wide variety of commercial and sociopolitical contexts a strong incentive to be transparent that the bots are not human.

Even if a bot's primary purpose is not to sell a product or service or influence a voter, organizations will still need to consider whether the bot's secondary effects might trigger the law. For instance, a bot primarily designed for customer service that even gently encourages up-sells or subscription renewals could be covered by the law. Similarly, activity that may influence a vote in an election is not limited to direct advocacy for a candidate or ballot measure. For example, the Russians charged by the Department of Justice with attempting to influence the 2016 U.S. election used bots to post on social media not only about candidates but also about race, immigration and other controversial social issues – an event cited by SB 1001's sponsors as part of the impetus for the law. Thus, organizations will need to consider whether their bot activity could have even an indirect influence on purchase or voting decisions, and weigh the risk of alleged SB 1001 violations against any drawbacks of disclosing the bot's artificial identity.

The bar to establishing a violation may seem high, given that the organization must not only have the "intent to mislead" the individual about the bot's "artificial identity" but must also do so for the purpose of "knowingly deceiving" the individual about the nature of the content. However, the ability of bots to persuasively impersonate humans is now well established and is precisely why they have become so popular. As a result, establishing intent to impersonate a human may be as simple as providing evidence that a company intentionally used a bot that reasonably could be mistaken for a human without disclosing that it was not human. Thus, for many organizations using bots, the question will become whether to rely on the position that the bot's communications are not knowingly deceptive or simply to disclose that the bot is not human.

Next Steps

Organizations should first review the types of communications that their bot makes – or is likely to make – and assess the risk of the bot delivering misleading or deceptive communications, especially in contexts where they could be characterized as influencing (even indirectly) a decision to vote or to purchase a product or service. Organizations should consider implementing technical and organizational controls (e.g., algorithmic controls, testing procedures, periodic formal reviews and risk assessments) to prevent their bots from making such communications. Taking these steps, and documenting them, can be useful in defending against allegations of knowing deception, and is even more essential if the organization elects not to disclose that the bot is not human. Finally, organizations should consider giving a clear and conspicuous notice disclosing the bot's artificial identity where feasible, as this is the most straightforward way to avoid liability under the law.