This article took way too long to write. The researching, the outlining, the drafting, the editing. The brooding. For months I toiled and wondered whether this was a good idea. And while I stewed, I worried that someone else would beat me to it.

Writing can be exhausting, but writing legal briefs can be downright painful. As the late Justice Scalia once confessed, "I don't enjoy writing, but I enjoy having written."1 My words exactly. So in the truest American tradition, I dream of the day when I can take the most painful parts of writing and automate them.

But is that even possible? Could a machine ever learn the art of brief-writing? And if so, isn't that kind of artificial intelligence generations away?

What if the basic technology existed today? What if artificially drafted briefs were already on the horizon?

I bring news.

In 2016, director Oscar Sharp released a nine-minute science-fiction film called Sunspring.2 The opening credits — glitchy machine text sputtering in and out over an ominous industrial hum — identify the screenwriter: a computer.

Sunspring was an experiment inspired by the technology you carry in your pocket every day. Smartphones have been helping us compose text for years by reading the words we type, predicting what our next word will be, and making suggestions. Mr. Sharp and his crew used similar technology to create Sunspring's screenplay.

Here's how it went. First, the team "trained" a software program named Benjamin.3 They fed Benjamin about 150 science-fiction screenplays, which it digested and analyzed. Then they gave Benjamin a starting point by providing a title, some sample dialog, and a few action words.

And then they pushed the button. Out came the script in all its glory:

H
In a future with mass unemployment, young people are forced to sell blood. That's the first thing I can do.

H2
You should see the boys and shut up. I was the one who was going to be a hundred years old.

H
I saw him again. The way you were sent to me... that was a big honest idea. I am not a bright light.

C
Well, I have to go to the skull. I don't know.4

This magnificent nonsense persists to the end.

Make no mistake, Sunspring is enjoyable — but definitely not because of its gimmicky screenwriting. Ultimately, its entertainment value is the product of old-fashioned human sweat. The way the actors deliver their lines, their facial expressions, the sound effects, the cinematography — they all lift you above the verbal fog. You may not know why, but you can just tell that something heavy is unfolding. It's the way a foreign film grabs you even though you don't understand a word. The cast and crew took gibberish and made it work.

The algorithm Benjamin used — known as a "long short-term memory recurrent neural network" — doesn't write colloquy. It just crunches data. To generate Sunspring, Benjamin dissected the 150 screenplays and determined how often individual words appeared in proximity to one another. Then, starting with the seed words Mr. Sharp's team provided, Benjamin began assembling the dialog. Every time it chose a word, it asked itself a question: "Given everything I've seen so far, what is the next word most likely to be?" It then picked the next word and repeated the loop. And out came a script.
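
Benjamin's actual engine was an LSTM neural network, but the basic loop can be sketched in a few lines. Here is a minimal toy version in Python, assuming a simple word-frequency (Markov-chain) model rather than Benjamin's LSTM; the tiny inline corpus stands in for the 150 screenplays:

```python
import random
from collections import defaultdict, Counter

def train(text, order=2):
    """Count how often each word follows each two-word prefix."""
    words = text.split()
    model = defaultdict(Counter)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])][words[i + order]] += 1
    return model

def generate(model, seed, length=30):
    """Repeatedly ask: given the last two words, what is likely next?"""
    out = list(seed)
    for _ in range(length):
        choices = model.get(tuple(out[-2:]))
        if not choices:
            break  # no observed continuation; a real model would back off
        words, counts = zip(*choices.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

# A few phrases stand in for the 150 screenplays Benjamin digested.
corpus = ("I saw him again . I saw the light . the light was a big "
          "honest idea . I saw the way you were sent .")
model = train(corpus)
print(generate(model, seed=("I", "saw")))
```

Run it a few times and you get the flavor of Sunspring: phrases that are locally plausible and globally meaningless.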

Kubrick this is not. There is no underlying idea being conveyed, no point to the dialog at all. It's really an exercise in statistics — slightly better than a random walk through the dictionary. There's no direction, no subtlety, no appreciation for ambiguity or context, and no intentional humor. This is math. But a good cast and crew made something of it.

So is this the best that artificial intelligence can do? Will AI's work product always require heroic intervention just to make sense? Most writers would scoff at the idea that computers will ever compete with human writers — and this goes double for us brief-writers. As we see it, our work puts heavy demands on all facets of the human intellect. It requires subject-matter knowledge, logic, persuasion, brevity, style, empathy, and many other traits that seem all too elusive. Plenty of lawyers can't pull this off. Surely no soulless machine could ever do it.

Well. As it turns out, computerized writing has developed much further than we lawyers seem to realize. And while it's often difficult to separate reality from hype, there's no doubt that AI's former limitations in written composition and persuasion are steadily disappearing. This article (which again took far too long to write) is my attempt to identify some common beliefs about AI-based writing and to show how technology is disproving them.

"AI can't generate decent prose." As enjoyable as Sunspring is, its dialog is a baffling word salad. You might conclude from this that machines, not being sentient, are simply incapable of writing coherent text consistently. But that conclusion would be wrong. AI is now generating prose from scratch — and it's not only coherent, it's virtually indistinguishable from human writing. In fact, the odds are that you've read some and don't even know it.

Consider Narrative Science, a Chicago company that generates online news stories for clients like Forbes.com. Writing for Wired.com, Steven Levy interviewed the folks at Narrative Science several years ago and got some insights into how their system works.5 First, the machine ingests structured information — say, baseball statistics or financial data. It then applies a set of rules and templates to get some idea of what this data means. For example, does a particular baseball play enhance the odds of victory (thus increasing its importance)? Does a sudden rise in stock price foreshadow a business breakthrough? Once the system identifies this core idea, it generates a story in plain English.

As with Sunspring, this plain-English narrative derives from an existing body of written work. But unlike Sunspring, it makes sense. Here's an example:

Analysts expect higher profit for DTE Energy when the company reports its second quarter results on Friday, July 24, 2015. The consensus estimate is calling for profit of 84 cents a share, reflecting a rise from 73 cents per share a year ago.

The consensus estimate hasn't changed over the past month, but it's down from three months ago when it was $1.10. For the fiscal year, analysts are projecting earnings of $4.61 per share. Analysts project revenue to fall 7% year-over-year to $2.52 billion for the quarter, after being $2.70 billion a year ago. For the year, revenue is projected to come in at $11.23 billion.6

This may not win a Pulitzer, but still. It's understandable, it's organized, and it tells a coherent story. And best of all, assuming it's accurate, it requires no real editing.
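
Narrative Science hasn't published its pipeline, but the rules-and-templates recipe Levy describes is easy to illustrate. Here is a minimal sketch, with field names and template wording of my own invention, that turns a handful of structured figures into a story like the one above:

```python
def earnings_preview(d):
    """Turn a dict of earnings figures into a short plain-English story."""
    # Rule step: decide what the numbers "mean" before writing about them.
    direction = "higher" if d["estimate_eps"] > d["prior_eps"] else "lower"
    # Template step: slot the structured values into canned prose.
    return (
        f"Analysts expect {direction} profit for {d['company']} when the "
        f"company reports results on {d['date']}. The consensus estimate "
        f"is calling for profit of {d['estimate_eps'] * 100:.0f} cents a "
        f"share, versus {d['prior_eps'] * 100:.0f} cents a year ago."
    )

print(earnings_preview({
    "company": "DTE Energy",
    "date": "Friday, July 24, 2015",
    "estimate_eps": 0.84,
    "prior_eps": 0.73,
}))
```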

The sports stories are even better, perhaps because the subject isn't so dry. Here's an example produced by Wordsmith, a robo-journalism program created by a North Carolina tech company called Automated Insights:

Marcus Paige scored with nine seconds remaining in the game to give North Carolina a 72–71 lead over Louisville. The Heels held on to win by that same score following a missed 3-pointer by Wayne Blackshear and an unsuccessful second-chance attempt by Terry Rozier.

The Paige basket capped off a 13-point comeback for the Tar Heels, who trailed 63–50 after a Blackshear 3-pointer with 8:43 left in the game. UNC finished the game on a 22–8 run to secure the victory. After a basket by Brice Johnson gave North Carolina a 70–69 lead with 39 seconds left, Rozier responded with a hoop to give Louisville a one-point advantage with 26 seconds remaining.7

In the short time since these pieces were published, computer writing has only gotten better. Just before I finished this article, OpenAI — the research lab co-founded by Elon Musk — announced that its AI platform can now generate fictitious "news" stories based on bare-bones prompts. Give the system a single sentence — or even just a fragment — and away it goes. Although the ensuing stories are entirely made up, they are so well-written that some observers bemoan how credible they seem.8

Why this massive improvement over Sunspring? One big reason is structure. According to Levy, news stories tend to follow a formula. The programmers and overseers can create a framework that follows this recipe, and can develop a vocabulary that includes jargon familiar to sports fans and finance buffs. Sunspring involved no such structure; it was just an exercise in linking words together like paper clips, based purely on probability.

Good legal briefs also follow a predictable structure. They recite the rule of law, summarize the salient facts, and argue that those facts dictate a particular conclusion. They may also go on to refute counterarguments. This can be far more complex than reporting on a basketball game, but not always. Simple issues do exist in the law, such as quashing a subpoena that on its face violates an express procedural rule, or dismissing a claim barred by a statute of limitations. For these straightforward matters, AI may already be capable of generating basic briefing. As for more complex issues, coherent computer writing is just a matter of time; as Narrative Science, Wordsmith, and OpenAI show, the proof of concept already exists.
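
To see why the simple matters already look tractable, picture the Wordsmith approach pointed at a limitations defense. Here is a minimal sketch, with the statute, field names, and wording chosen by me purely for illustration:

```python
from datetime import date

def limitations_argument(claim):
    """Toy template: argue dismissal when a claim is facially time-barred."""
    accrued = claim["accrual_date"]
    deadline = accrued.replace(year=accrued.year + claim["years"])
    if claim["filing_date"] <= deadline:
        return None  # no facial bar; this simple template doesn't apply
    return (
        f"Under {claim['statute']}, a claim for {claim['cause']} must be "
        f"brought within {claim['years']} years of accrual. Plaintiff's "
        f"claim accrued on {accrued:%B %d, %Y} but was not filed until "
        f"{claim['filing_date']:%B %d, %Y}. The claim is therefore "
        f"time-barred and should be dismissed."
    )

print(limitations_argument({
    "statute": "Tex. Civ. Prac. & Rem. Code § 16.003",  # illustrative only
    "cause": "negligence",
    "years": 2,
    "accrual_date": date(2016, 3, 1),
    "filing_date": date(2019, 4, 1),
}))
```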

The bottom line, contrary to what many of us might believe, is that AI is already capable of generating prose that is decent, if not downright enjoyable.

"Generating formulaic prose from numbers is one thing. But AI can't reliably analyze unstructured human language and generate anything worthwhile." The programs that write sports and financial stories have the benefit of highly structured data, usually numerical. Methods for parsing such data have been around for centuries, so it's no big surprise that computers can extract meaning from it.

Unlike statistics, however, human language is inherently "fuzzy" and cannot realistically be collapsed into a set of rules to be hard-coded into software or amassed into data tables. Surely a machine can't reduce statutes, case law, and pleadings to something it can reliably analyze.

Taming linguistic information is certainly more challenging than crunching numbers. But again, AI has made tremendous progress in extracting meaning from natural language. Perhaps the most prominent example is IBM Watson, which in 2011 famously drubbed its human competitors on Jeopardy! When presented with questions — usually ambiguous ones designed to trick the contestants — Watson's roomful of servers kicked in with a complex array of operations. After deciphering the English-language question (known as a "clue" in Jeopardy! parlance), Watson identified possible answers by querying some 200 million pages of locally stored information, applying hundreds of algorithms to the information it retrieved, and assigning confidence scores to each possible answer. As long as the best answer exceeded a certain confidence threshold, Watson would then buzz in and announce its answer in plain English (in the form of a question, no less). And it often did all of this faster than its human counterparts could think.9
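
IBM has never released Watson's code, and the real system ran hundreds of scoring algorithms in parallel. But the decide-only-when-confident loop is simple to sketch. In this toy version, a single made-up scoring function stands in for Watson's arsenal, and the machine stays quiet unless its best answer clears a threshold:

```python
def score(clue, candidate, corpus):
    """Made-up stand-in for Watson's scoring algorithms: how often does
    the candidate appear near the clue's keywords in the corpus?"""
    hits = sum(1 for doc in corpus
               if candidate in doc and any(w in doc for w in clue.split()))
    return hits / max(len(corpus), 1)

def answer(clue, candidates, corpus, threshold=0.4):
    """Score every candidate; buzz in only if confidence clears the bar."""
    scored = {c: score(clue, c, corpus) for c in candidates}
    best, confidence = max(scored.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # stay silent rather than risk a wrong answer
    return f"What is {best}?"  # Jeopardy! answers take question form

corpus = ["the capital of France is Paris", "Paris hosted the 1900 games"]
print(answer("capital France", ["Paris", "London"], corpus))
```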

The thing about the Watson of Jeopardy! is that, much like Sunspring, it required a ton of human intervention. It reportedly took three years for some 20 researchers to train the system to play Jeopardy! on a par with humans.10 Along the way, many of Watson's practice answers were comically wrong. This implies that the machine was just following complex rules imposed by its handlers and refined over many iterations.

This poses a problem for lawyers, since the governing rules are constantly morphing. But what if the machine could train itself to understand human language, and update its output accordingly?

Thanks to recent advances in "machine learning," such self-training systems are widely available. Look no further than Google Translate. Before November 2016, Google's translations depended on a massive array of data tables, rules, and exceptions, all curated by a community of users who offered a steady stream of improvements. While generally successful, this brute-force system had little regard for context or subtlety, and often yielded translations that made about as much sense as the dialog in Sunspring.

But in November 2016, Google flipped a switch. No longer did users receive primitive translations based on discrete crowdsourced rules. Now, the machine trained itself. As before, it digested a mountain of information (of which Google has plenty). But this time, it recognized linguistic patterns on its own and wrote its own rules. The result has been a substantially better product.11
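
Google's production system is a deep neural network, far beyond a short illustration. But the underlying shift, from rules a human writes to patterns a machine extracts from example pairs, can be shown with a toy statistical learner. The sentence pairs and the scoring heuristic below are my own; real machine translation learns far richer patterns:

```python
from collections import Counter, defaultdict

# A handful of aligned sentence pairs stand in for Google's mountain of data.
pairs = [
    ("le chat noir", "the black cat"),
    ("le chien noir", "the black dog"),
    ("le chat", "the cat"),
    ("le chien", "the dog"),
    ("chat noir", "black cat"),
]

def learn_table(pairs):
    """Learn word correspondences from co-occurrence statistics alone;
    no human ever writes a rule saying 'chat' means 'cat'."""
    cooc = defaultdict(Counter)   # cooc[src][tgt] = times seen together
    totals = Counter()            # how often each target word appears
    for src, tgt in pairs:
        tgt_words = tgt.split()
        totals.update(tgt_words)
        for s in src.split():
            for t in tgt_words:
                cooc[s][t] += 1
    # Prefer target words that appear mostly alongside this source word.
    return {s: max(cnts, key=lambda t: cnts[t] / totals[t])
            for s, cnts in cooc.items()}

table = learn_table(pairs)
print(table["chat"], table["chien"], table["noir"])  # -> cat dog black
```

Nobody told the program that "chien" means "dog"; it inferred that from how the words travel together. Scale the same idea up by a few billion sentences and a neural network, and you get something like modern machine translation.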

You and I don't have to know how all of this works. All we really need to know is that it does. AI can now navigate the linguistic labyrinth, and the conceptual obstacles that machines once faced in dealing with unstructured information have been largely overcome. Thanks to machine-learning algorithms, computers can now mimic us with near fluency.

"The data required to train these systems is prohibitively expensive." This is a good point, but ultimately a fleeting one. It's true that Westlaw, LEXIS, and PACER are not cheap, and few could afford the volume of data that Google used to develop and hone Google Translate.

But this informational oligopoly is already fading as low-cost alternatives proliferate. Sites like Justia.com, Google Scholar, RECAP, and many others now provide free or low-cost access to huge compilations of legal authorities and filings by litigants. Common experience says that this trend will only continue. Costs will drop, free databases will mushroom, and data accumulation will cease to pose a roadblock to anyone interested in harnessing AI to write legal briefs.

"Computers can't persuade." Most of us believe that, to persuade humans, one must be human. And that viewpoint makes sense: Persuasion requires not only logic, but such enigmatic qualities as empathy, an ability to identify (and capitalize on) biases, and a basic grasp of human nature. Given that so many humans lack these traits, how can we ever expect machines to possess them?

As it turns out, they don't have to. Once again, let's consider Watson. In June 2018, IBM researchers from Israel traveled to San Francisco to showcase a new product known as Project Debater. The event pitted IBM's machine against two human debaters — not just any debaters, but two of Israel's best. Using the typical opening/rebuttal/conclusion format, the participants debated the merits of telemedicine and space-exploration subsidies. All of this happened in real time: The machine listened to the arguments of its human counterparts, dissected those arguments, and tapped into a mountain of data to formulate and audibly express plain-English responses. It even cracked jokes.12

But did it persuade? It appears so. These debates happened in front of a human audience, who evaluated the arguments and opined on which debater had the more convincing position. IBM is cagey about who "won" the event, but the machine did manage to change a few minds. It persuaded, even in the face of highly capable opponents. Some in the media have already suggested that lawyers can use Project Debater's technology to identify and develop legal arguments that best advance their clients' interests.13

"Computers will never think like lawyers, so they can't possibly write like lawyers." As currently conceived, AI machines cannot think like humans. In fact, they cannot think at all — they are non-sentient appliances. In terms of conscious thought, they're no more advanced than a hammer.

But there's no reason why a machine must think like a human to succeed in law. Father-and-son futurists Richard and Daniel Susskind have written extensively about the mistaken notion that computers must emulate their human creators in order to do what humans do. This "AI fallacy," as they call it, wrongly assumes that there is only one way to approach complex intellectual tasks.14 Even Google's programmers apparently grappled with this fallacy, and ultimately vanquished it, when they reinvented Google Translate. As Google's Greg Corrado explained, "It's not about what a machine 'knows' or 'understands' but what it 'does' . . . ."15

In the same way, a computer need not heed Professor Kingsfield to do a lawyer's work. As long as the black box spits out a good brief, it doesn't matter how it got there, or whether it "understood" what it was doing. All that matters is the end product. Thinking like a lawyer is irrelevant.

"You've identified a lot of separate systems that must be cobbled together, and that's hard." You've got me there. But the point is that these things do exist, even if only in nascent form, and they can be cobbled together. Whoever does this in a cost-effective manner will bring a product to market. And once such products gain a foothold, the brief-writing profession will begin a relentless evolution.

"None of this will happen during my career." That remains to be seen. But even those nearing retirement shouldn't be too complacent. Technological advances have a way of sneaking up on you. To paraphrase Hemingway, advances happen gradually, and then suddenly.

Author Ray Kurzweil has predicted that, by 2050, "one thousand dollars of computing will exceed the processing power of all human brains on earth."16 Debate this all you want, but the core idea stands: Advances in computing technology are inevitable. And many are still unforeseen. As New York Times writer Gideon Lewis-Kraus observed in his report on Google Translate, "[w]hat [Google] Brain did [with Translate] over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines."17 Why should lawyers be exempt?

Not everyone shares this vision of a relentless technological march. Some experts see theoretical brick walls that AI will never overcome, and believe that the current environment is awash with excessive hype.18 Perhaps. But keep in mind that the applications explored in this article already exist. Time will tell how capable they can become, how complex their work can get, and whether they can conquer their current limitations.

Regardless, the door has been opened. With time, AI's capabilities and availability will increase while its costs decrease, all at an accelerating pace. As clients see the potential for AI to generate workable drafts, they will insist that their lawyers either adopt the technology or at least cut their fees to keep up with their mechanized competition. And as AI starts to outperform junior attorneys (whose raw work often requires substantial revisions and rewrites), lawyers will capitulate to their clients' new demands. True, we're not there yet. But it's just a matter of time.

So are brief-writers doomed? I can't answer this million-dollar question with any authority; I'm just reporting what I see on the horizon. So I won't pretend to predict the future or weigh AI's social costs and benefits. Lots of smart people have already written about these topics, among them University of Tennessee law professor Ben Barton19 and Richard and Daniel Susskind20 from the UK. I can't equal their insightful work.

Still, for whatever it's worth, here's my personal and speculative take. Brief-writing is hard and time-consuming. (Did I mention how long it took me to write this article?) I welcome the day when a computer can do the heavy lifting required to generate a first draft. When that happens, we lawyers will be able to write more briefs — and arguably better ones — at a lower cost and without all the Scaliaesque pain. We'll spend more time polishing good machine-generated drafts, and less time squinting at blank pages. Appellate practitioners like me will benefit if lower costs inspire more litigants to try their luck appealing adverse judgments. In such a world, automation would be liberating and profitable.

Even if AI gets to the point where it can generate polished drafts — or even final ones — I still don't think we're doomed. The law is dynamic, and new issues and arguments emerge all the time. It remains to be seen whether machines will ever be able to generate new and appealing ideas in a shifting landscape, from scratch, without human guidance. Then again, as Professor Barton reminded me, any gap in skills will probably narrow as the machines observe (and learn from) our attempts to fine-tune their work.

Regardless, the practice of law is highly regulated, and is limited to those who can obtain a license and adhere to ethical rules. Computers, which are perfectly capable of making grievous mistakes, lack accountability. As long as this is true, AI will require human supervision as a matter of law. Professional intervention by licensed attorneys will be required, if for no other reason than to protect the public. So it's difficult to imagine a day when our genuine human touch will be completely redundant.

It will be fascinating (and a little unnerving) to watch as AI machines learn to write. It will also be interesting to see how we lawyers grapple with this emerging technology. Somehow, I think we'll manage.

Footnotes

1 This quote comes from Bryan Garner's famous Supreme Court interview series (visited Apr. 6, 2019).

2 Sunspring is available on YouTube (visited Apr. 6, 2019).

3 Wikipedia contributors, Sunspring (2019), Wikipedia, the Free Encyclopedia (visited Apr. 6, 2019).

4 Alex Brannan, An In-Depth Analysis of Sunspring (2016), the Short Film Written by a Computer (2016), CineFiles Movie Reviews (visited Apr. 6, 2019).

5 Steven Levy, Can an Algorithm Write a Better News Story Than a Human Reporter? (2012), Wired.com (visited Apr. 6, 2019).

6 Narrative Science, DTE Energy Earnings Projected to Increase (2015), Forbes.com (visited Apr. 6, 2019).

7 Stephen Beckett, Robo-journalism: How a computer describes a sports match (2015), BBC.com (visited Apr. 6, 2019).

8 Will Knight, An AI that writes convincing prose risks mass-producing fake news (2019), MIT Technology Review (visited Apr. 6, 2019).

9 Jo Best, IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next (2013), TechRepublic (visited Apr. 6, 2019).

10 Best, supra.

11 For an in-depth look at how Google Translate harnessed machine learning to bring its translations to the next level, see Gideon Lewis-Kraus, The Great A.I. Awakening (2016), The New York Times Magazine (visited Apr. 6, 2019).

12 Videos about this event abound on YouTube. Three short examples are available on IBM Research, CNET, and Fox Business (all visited Apr. 6, 2019).

13 Billy Duberstein, IBM's Debater: Your Next Lawyer? (2018), The Motley Fool (visited Apr. 6, 2019).

14 See Richard Susskind and Daniel Susskind, The Future of the Professions (2015), Oxford University Press.

15 Lewis-Kraus, supra.

16 Susskind and Susskind at 157.

17 Lewis-Kraus, supra.

18 Thomas Nield, Is Deep Learning Already Hitting its Limitations? And Is Another AI Winter Coming? (2019), Towards Data Science (visited Apr. 6, 2019).

19 Benjamin H. Barton and Stephanos Bibas, Rebooting Justice (2017), Encounter Books; Benjamin H. Barton, Glass Half Full (2015), Oxford University Press.

20 Susskind and Susskind, supra; Richard Susskind, Tomorrow's Lawyers (2013), Oxford University Press.

Previously published in Towards Data Science.
