This article first appeared in TL4 FIRE Starters | Essay Competition issue in February 2024

"Of all the dystopian futures I considered, one where machines make the art and humans do the hard labour is not the one I wanted".

This sentiment and ones like it often circulate on the more cynical sides of platforms like X (formerly Twitter), Reddit and Discord. Whilst at first blush they read as snarky jibes at technology, reminiscent of 1950s (and sadly modern) 'they're taking our jobs'-style racism, such statements in fact provide the opportunity to consider the nuanced split between the human aspect and that conventionally relegated to the machine.

For almost three centuries since the start of the industrial revolution, machines, including computers, have been seen as tools, to be deployed in the pursuit of simplifying the human's role and easing her burden. Now, for perhaps the first time ever, we are confronted on a mass scale with a completely different type of machine: one that can make art, or write poetry, or even dream (in the words of Refik Anadol, data artist behind Unsupervised, the incredible AI-driven art installation currently taking the Museum of Modern Art by storm).

Naysayers will argue that AI and machine learning will bring about the downfall of independent thought; proponents say that this is the start of a new beginning of seamless interaction between man and machine.

Where is the truth? To give a real lawyer's answer: it depends, but probably somewhere in the middle.

The role of the FIRE lawyer

The FIRE lawyer's chosen speciality is unusual: fraud, and therefore the asset recovery and insolvency matters arising out of it, is almost limitless in its subject matter. The massive scope for novelty in the world today is a breeding ground for new schemes – one only has to look at the facts underlying any of the hundreds of cryptocurrency scam matters pending before courts worldwide to see that fraudsters will seize on any opportunity to turn the latest buzzwords into a method of separating victims from their assets.

This requires lawyers and other professionals working in the FIRE space to be equally adaptable. The old adage that 'no two cases are ever alike' is well borne out in this practice area, and each case will have its own unique technical and legal challenges with which practitioners must grapple. In addition, fraud cases often have the added complexity of at least one party doing their level best to obfuscate or alter the true narrative.

Against this backdrop, the thought of a 'silver bullet' tool to solve myriad problems of research, recollection, document processing and many others seems hard to resist.

The tangible benefits of embracing AI as a tool

There is no doubt that the advent of readily accessible AI tools such as Generative Pre-trained Transformer (GPT)-based language models has the potential to add real value to a lawyer's practice. A simple prompt of "how can AI benefit a fraud lawyer's practice?" returns reams of material from ChatGPT addressing how AI can improve document review speeds, pick up patterns in financial transactions or text that may otherwise have been missed, detect anomalies in behavioural biometrics, and a host of other benefits.

These go far beyond the usual "save time and therefore money" answer in the specialist legal media, which in any event is often met with a mix of scepticism and hostility from an industry still largely beholden to the billable hour.

AI document review is probably the most well-established benefit of embracing AI at present, having been rolled out by commercial discovery providers as an add-on to the virtual data room and document review software with which many lawyers are already familiar. However, more and more sophisticated uses of AI are now coming to market and will undoubtedly continue to do so in the future.

Traditional legal research database companies are pledging to plough millions into developing and refining AI research assistants, virtual paralegals and other resources for lawyers across all practices.

Take, by way of example, a document review population of many thousands of documents. Two emails sent by the same individual directly contradict one another, a point of material significance in the ongoing investigation. With a human review team of old, this contradiction might never be picked up: in all likelihood, two different individuals would review the two different emails, each of which seems innocuous in isolation. Even if the same reviewer considered both, one could come across their desk days or weeks after the other: the reviewer likely would have forgotten, in the fog of consecutive 10-hour review days, about the precise wording of the first document. Even if not, and the reviewer felt that familiar tingle somewhere deep in the recesses of their memory that this document didn't quite match up with something else they had seen, the prospects of the reviewer successfully identifying the original document out of the thousands of other documents passing across their screen are slim.

On the other hand, an AI reviewer is never tired, or absent-minded, or distracted. It never forgets, and can in a split second identify the discrepancy, and flag the precise documents for further consideration. This is a simplistic example, but just one way in which AI and machine learning are undoubtedly adding real value to the conduct of matters, and not only cutting down on document review fees.
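One building block of this kind of machine review can be sketched in a few lines. The example below is a toy illustration only – real review platforms use far richer language models – and the senders and email texts are invented for the purpose: it flags pairs of documents from the same sender that are textually similar enough to merit side-by-side human comparison, which is exactly the check a tired human reviewer tends to miss.

```python
# A minimal sketch of cross-document comparison in a review set.
# Hypothetical data; a commercial platform would use semantic models,
# not simple word overlap.
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two texts (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

documents = [
    ("smith@example.com", "The transfer was authorised by the board on 3 May"),
    ("smith@example.com", "The transfer was never authorised by the board"),
    ("jones@example.com", "Please see the attached invoice"),
]

# Compare every pair from the same sender and flag close matches
# for human review.
flags = []
for i in range(len(documents)):
    for j in range(i + 1, len(documents)):
        (sender1, text1), (sender2, text2) = documents[i], documents[j]
        if sender1 == sender2 and jaccard(text1, text2) > 0.5:
            flags.append((i, j))

print(flags)  # [(0, 1)] - the two contradictory emails
```

The machine performs this comparison exhaustively across every pair in the population, something no human team of reviewers working document-by-document can replicate.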

The equally tangible risks of abandoning independent thinking

Having waxed lyrical about the benefits of embracing AI, it is only fair that the discussion now turns to some of the pitfalls of AI's use in the FIRE practitioner's practice. Ironically, most of the reported AI horror-stories are in fact not failings of the technology at all; rather, they are failings of the people trying to use the technology.

Consider the well-reported case of a hapless lawyer who cited entirely fictitious cases in argument before a court – cases that ChatGPT had simply invented.

At its most basic, ChatGPT (and all generative language models like it) is an exercise in statistics: which word is, in a particular context, statistically most likely to appear after the one before it? ChatGPT does not claim to give you true answers, just answers, and by inventing the authorities that it did, it accomplished its sole goal: it answered the lawyer's question, with no qualms about the veracity or otherwise of that answer. The true failing in that instance lay with the lawyer who did not verify the answers that were given.
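The principle can be illustrated with a deliberately crude sketch. This is not ChatGPT's actual architecture – real models use neural networks trained on vast corpora and condition on far more context – but the toy below, trained on a few invented sentences, shows the essential mechanic: the model emits whatever continuation was statistically most common in its training text, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Invented training text for illustration only.
corpus = (
    "the court held that the claim failed "
    "the court held that the appeal succeeded "
    "the court found that the claim failed"
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict("court"))  # "held" - seen twice, versus "found" once
print(predict("claim"))  # "failed"
```

Note that the model would complete a sentence about a case it has never seen with the same confidence: plausibility, not truth, is all it optimises for.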

In a similar vein, AI-based outcome predictors trained using historical judicial data have received media scrutiny over their supposed bias in determining guilt in criminal proceedings. However, the algorithm or neural network underlying the predictions is not the one with the bias: sadly, it outputs results based on whatever it is 'taught', meaning that the incoming information is where the bias actually lies.

These two examples would both be capable of remedy by having a human controller or user of the AI technology concerned exercise judgment in relation to the results. This is the true shortcoming of the forms of AI technology currently available for use by the FIRE practitioner, and is likely to be a shortcoming of AI technology generally for quite some time. While AI may far outperform humans on an IQ test, on any metric of judgment or EQ it falls (at this stage) far short.

The indefinable human element in the practice of law

In this measure of judgment, logic and ethics lies the true distinction between mankind and machine (for the moment, anyway). AI does not have ethics, or a moral code by which it conducts itself. It is a series of increasingly complex logical prompts aimed at securing a specified outcome, whether that be providing an answer, truthful or not, to a stated question in the case of ChatGPT, or creating a momentary never-before-seen modern artwork in the case of Unsupervised.

This distinction is where the true value, the art rather than the science of practising law, lies. On a basic level, lawyers are typically remunerated for the hours and minutes they spend on specified tasks, but the whole is greater than the sum of its parts: the glue between those tasks, the overall strategy and the judgment calls that go into determining it and adapting it, is where FIRE practitioners' real skills are found. There is truth in an argument that a virtual paralegal may in the near future do a better job of writing a letter or drafting a pleading than an average human lawyer, but that is a reductionist view of a lawyer's job. Especially in the FIRE space, there is unquantifiable value in the human instincts and intuition of a lawyer.

Take, for example, the unsuspecting accessory to a fraudulent scheme: any number of AI tools will generate reams of questions with which to cross-examine them, pointing out inaccuracies in their testimony in real time and logging corroborating questions for future witnesses. These are indisputably useful endeavours.

But what of the human instinct to tread softly with the older gentleman who, while technically a director of the relevant entity, has just as much had the wool pulled over his eyes as the true victims of the other directors' fraud?

Or the judgment call to extend an olive branch to a wavering witness in without prejudice correspondence, recognising that the value of their ongoing support and information would far exceed anything to be gained by subpoena or summons to be cross-examined? These are factors that are uniquely human, because they are ultimately questions of nuance and ethics.

The art should never be lost

The quotation at the start of this essay is a wry one, but there is truth in it in the realm of the FIRE lawyer. As AI tools grow and develop, care must be taken to ensure that the push to showcase the latest and best does not upend the relationship between artificial intelligence and human intelligence. AI is an incredibly useful tool, now almost guaranteed to revolutionise the way in which lawyers work in all sectors, but it is not, and in my submission can never be, a replacement for the human art of truly excellent legal practice.

AI's lack of an intrinsic moral or ethical code means that, whatever technological wizardry is developed in future, there remains a role – perhaps a niche one, but a role nonetheless – for the skilled and ethical lawyer. After all, law is a social science rather than a hard science, and it is that softening element of art and morality, the indefinable consideration that goes into every decision as a lawyer, that makes it impossible to dispense with the human completely.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.