AI is developing rapidly, and with its broad range of applications, ignoring its growth risks setbacks for your legal practice. Although AI has come a long way, there are still big challenges to overcome before the technology can develop fully.

Transcript

David Lowe: Today I'm going to be talking to Matt Hervey, a Director in our Intellectual Property team, about artificial intelligence (AI). What is it and why is it important to General Counsel?

So Matt, we read a lot about artificial intelligence: you know, Google is going to take over the world, and the parts of the world it doesn't take over are going to be taken over by autonomous vehicles. But if you're a busy in-house lawyer, why does it matter today?

Matt Hervey: I think it's because of the range of applications for AI. It's everything from consumer queries and helpdesks to medical research and promoting products and services; it really goes across the board to all sectors. If you as a company haven't invested and your competitors have, they may get the edge, and then you will be forced to engage. At a more general level, even if you just want to get your message out in the internet age, it's being filtered through AI as it is: to get your results up the tree of Google search results you need to understand AI, and to get your products onto e-commerce platforms such as Amazon you need to understand AI. More importantly, if you get into AI there is a whole raft of legal issues now.

David: So if you ignore it, you do so at your peril, and therefore you need to engage now to understand it.

Matt: Exactly, and it's also going to have impacts on the legal profession. The profession as we know it may or may not survive once AI comes in to help with due diligence, contract review and the like, and those who understand the way things are going will be able to develop their roles accordingly and will be better placed to remain in employment.

David: So let's start there. What's it going to mean for legal services? You've touched on a few areas but can you give me a few examples?

Matt: The biggest application is where you have a pile of documents and you have limited time or limited money and you want to know roughly what's in them. It can't give you any definitive or absolute results, but it can guide you as to what is a priority. The long-standing application has been in litigation, for discovery: I may be landed with ten million documents, and I agree with the other side, hopefully with the blessing of the Court, to use an AI technique such as predictive coding, where you train the AI over time to start coding documents the way you would code them as a lawyer. It may not get it right all the time, but in the grand scheme of things it's close enough for litigation. That has now been rolled out into contract review and due diligence, so again where you have a lot of workflow to deal with, particularly due diligence, it will try to highlight key things like termination clauses or the length of a contract automatically, and you just accept there is some risk it won't get it right. I've even seen products, particularly in the States, which claim an in-house lawyer can set up an AI to actually approve contracts without the involvement of a lawyer if they fall below certain thresholds of risk.
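To make the predictive coding idea concrete, here is a minimal sketch in Python of the supervised classification step underneath it: train on documents a lawyer has already coded, then score the unreviewed pile so the likeliest-relevant documents surface first. It assumes scikit-learn; the documents and labels are invented for illustration, and real e-discovery platforms layer sampling, validation and continuous human review on top of this basic loop.

# Train a classifier on lawyer-coded documents, then rank the rest.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Documents a lawyer has already reviewed and coded (1 = relevant).
train_docs = [
    "termination for convenience on 30 days notice",
    "lunch menu for the staff canteen",
    "supplier may terminate for material breach",
    "minutes of the social committee meeting",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# Score unreviewed documents by predicted probability of relevance.
new_docs = [
    "either party may terminate this agreement on notice",
    "car park closure over the bank holiday",
]
for doc, p in zip(new_docs, model.predict_proba(new_docs)[:, 1]):
    print(f"{p:.2f}  {doc}")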

David: So does that mean lawyers are going to become extinct?

Matt: I don't think so. This is ultimately a question of economics, and people are always debating whether new technologies will lead to mass unemployment or vast social upheaval. Certainly there are those in Silicon Valley who really think this is coming. There is a lot of talk of a universal wage, which would be paid to entire populations to deal with a future when there's just not enough work to be done, and there's what's called apocalypse insurance, where wealthy Silicon Valley plutocrats are bulk-buying islands and the like to retreat to with armed guards in case of mass social unrest. But the history of technological change to date has always been that it creates new and different types of jobs and doesn't cause mass unemployment.

David: So let's turn now from legal services to the application of artificial intelligence more generally. What kind of legal issues does that present?

Matt: It's a hot topic, I'll put it that way. The White House brought out a report last year, the European Parliament published a report on autonomous robots in January this year, and the UK Government has brought out a report specifically on self-driving cars. It's a real issue because these things may cause physical harm to humans now: if I have a two-tonne car driving itself, and it's deciding whether or not to stop at a zebra crossing, that is of real interest to regulators...

David: ...and to everyone else.

Matt: Indeed, and I think the hottest topic there is who would be liable when things go wrong. You have to remember these are machines which make their own decisions and which, theoretically, can learn on the fly and start behaving in ways which hadn't been predicted.

David: So we can't predict them? They're not necessarily learning in the way we learn?

Matt: There's a big difference between two forms of AI. One is called an expert system: you sit down with an expert, such as yourself, and ask how you would approach a problem, and then a programmer tries to drill down into what you do and create decision trees that a computer will follow to reproduce what you do. You and I could look at that decision tree and understand it. The other kind is machine learning, and that's what almost all news reports are really about. Machine learning is totally alien to our way of thinking. You need a set of training data, such as a lot of pictures with labels describing what they show: this is a picture of a cat, this is a picture of a pony, and the like. You just drop them into a learning algorithm and it develops its own model of how to match the inputs, the pictures, to the desired outputs, the labels cat, pony and so on. But to do so it just creates a big pile of numbers which no one can understand.
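A toy contrast may help here. The sketch below, in Python, sets hand-written expert-system rules against a model trained on labelled examples; it assumes scikit-learn, and the features and tiny dataset are invented purely for illustration.

# Expert system style: explicit, human-readable rules elicited from an
# expert. Anyone can read exactly why it gives the answer it gives.
def classify_by_rules(has_whiskers: bool, has_mane: bool) -> str:
    if has_whiskers and not has_mane:
        return "cat"
    return "pony"

# Machine learning style: the same task learned from labelled examples.
# The resulting "knowledge" is just numeric weights.
from sklearn.linear_model import LogisticRegression

X = [[1, 0], [1, 0], [0, 1], [0, 1]]   # features: [has_whiskers, has_mane]
y = ["cat", "cat", "pony", "pony"]     # training labels

model = LogisticRegression().fit(X, y)
print(model.predict([[1, 0]])[0])      # "cat"
print(model.coef_, model.intercept_)   # the "big pile of numbers"

Real image classifiers replace the two hand-picked features here with millions of learned parameters, which is why the resulting model can't simply be read and understood the way a decision tree can.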

David: Presumably that could bring its own dangers then?

Matt: If it does something unpredictable, it's very hard to roll back and understand why it did what it did. Regulators are trying to improve our rights when it comes to decisions made automatically without human intervention. The GDPR, due to come into force in May 2018, gives you the right to know if a significant decision, such as access to a loan or a hiring decision, has been made automatically, and if it has, you have a right to have it reviewed. What some commentators are suggesting is that people should have the right to have the actual machine learning decision explained, but currently the technology can't do that.

David: I guess that presents all kinds of issues. I was reading in New Scientist about how some of these algorithms are producing racist outcomes, not because they are being trained to be racist but because of the way they digest lots of information and just presume that's how the world is.

Matt: I think you're probably talking about a chatbot that Microsoft brought out in 2016 called Tay. Tay was supposed to tweet in the style of a 19-year-old American woman but lasted only 16 hours before it started tweeting really hateful, racist, sexist material, denying the Holocaust, that sort of thing. The really interesting point there is that this was unintended behaviour.

Now I can give you an example of intended behaviour where AI does something wrong. In 2014 a Swiss art group developed a robot which made random purchases on the so-called Darknet. They gave it a weekly allowance of $100 worth of Bitcoin, and off it went: it bought ecstasy pills, counterfeit goods and a Hungarian passport before the whole lot, the robot and the purchases, was confiscated by the police.

David: So there is some distance to go before it's truly intelligent, because those sound like some big challenges to come.

Matt: Really, the big problem with machine learning, as opposed to expert systems, is this: an expert system is supposed to share your world view; it's supposed to accept the facts that you accept as facts, the basis for how you reach your decisions. Machine learning has no general knowledge and no common sense whatsoever; it's only trained to do one thing, and that's the only thing it can do. To give you a wonderful example, in 2004 DARPA sponsored the first self-driving competition, and among the entrants was a self-driving vehicle built on a 15-tonne military truck. This truck leaves the start line and approaches a piece of tumbleweed. You or I, any human on earth with any common sense, would know we could drive over it. The 15-tonne truck doesn't know this and decides to reverse. Meanwhile, its sensors realise that another piece of tumbleweed has rolled behind it, and it's now stuck for the rest of the race.

David: Now, you're ultimately an IP lawyer, Matt, so what about the IP issues presented by AI?

Matt: New techniques in AI are potentially patentable, so that's wonderful, but the really interesting point is that AI is increasingly being used, at least in research, to generate poetry, to generate art, to generate news reportage, that sort of thing. Should these be copyright works? There's no standard approach around the world. To give you two examples: in the UK, computer-generated works have actually been protected by copyright for years, but in the US, the Copyright Office's guidance is that a machine that works automatically cannot produce a copyright work.

David: So AI must present some cyber security issues?

Matt: Cyber security is an issue for any IT application, but particularly when you're talking about an autonomous robot, especially a self-driving car or a care robot, and that's really got the European Parliament twitching. To take self-driving cars as an example, a manufacturer has to make sure that every single component in a complex vehicle is secure, and these components come from many suppliers. Even if you get all of the hardware and all of the software secure, these cars still rely on external data: they rely on GPS for location data, and they are planned to rely on data shared by other cars, such as where another car is, what its speed is and the like. If that data can be faked then the car can be tricked, and if the car can be tricked we have real dangers: the car could be stolen, it could be caused to crash, or it could even take someone hostage. So the UK Government has recently published principles for cyber security as it applies to autonomous vehicles, and they're serious principles. To give you a flavour, one says this must be led from the board, that cyber security is a board-level issue; and, frankly unlike a lot of operating systems one might mention, another principle is that you have to maintain and work on security for the lifetime of the product.
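As an illustration of the data-spoofing risk Matt describes, here is a minimal sketch in Python of authenticating shared vehicle data so a faked broadcast is rejected. It uses a shared-key HMAC purely for brevity; real vehicle-to-vehicle deployments are generally built on certificate-based digital signatures rather than a shared key, and the message fields and key below are hypothetical.

# Authenticate each broadcast so tampered or spoofed data is rejected.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # hypothetical key

def sign(message: dict) -> bytes:
    payload = json.dumps(message, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

def verify(message: dict, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(message), tag)

msg = {"vehicle_id": "ABC123", "speed_kmh": 48, "lat": 51.5, "lon": -0.1}
tag = sign(msg)
assert verify(msg, tag)        # genuine broadcast accepted

msg["speed_kmh"] = 0           # attacker fakes a stopped car
assert not verify(msg, tag)    # tampered data rejected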

David: Yeah, so it doesn't just become obsolete after a couple of years.

Matt: Well, in theory, what we may see is that private ownership goes away anyway, so actually manufacturers will be removing obsolete technology and it won't in fact be a practical problem.

David: Fascinating. Thank you very much for that.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.