In the July 7, 2017, "Artificial Intelligence" issue of Science, we were intrigued by a short piece in the "Insights" section on "Artificial Intelligence in Research" that discussed the future use of autonomous robots in surgery. Surgeonless surgery would "allow[] work around the clock with higher productivity, accuracy, and efficiency as well as shorter hospital stays and faster recovery." Science, at 28. The listed drawbacks were: "technical difficulties in the midst of a surgery," the "loss of relevance of surgeons," and "how to equip artificial intelligence with tools to handle . . . inherent moral responsibility." Id.

Fascinating. In addition to driverless cars, do we also need to contemplate surgeonless surgery? We've long been aware of the advent of robots as an adjunct to surgery. Bexis filed a (largely unsuccessful) PLAC amicus brief in Taylor v. Intuitive Surgical, Inc., 389 P.3d 517 (Wash. 2017), but the surgical robot in Taylor in no way threatened to displace the surgeon, and the applicability (if not application) of the learned intermediary rule in Taylor was undisputed. Id. at 526-28.

We checked the Internet, and sure enough, there were plenty of articles from reputable sources:

"Completely automated robotic surgery: on the horizon?" (Reuters)

"Autonomous Robot Surgeon Bests Humans in World First" (Inst. of Electrical & Electronics Engineers)

"Would you let a robot perform your surgery by itself?" (CNN)

"The Future Of Robotic Surgery" (Forbes)

Science fiction? Apparently not anymore. As the last article stated:

Having totally automated procedures was once a thing of science fiction, very futuristic and not very practical. . . . But over the last three or four years, technology has evolved and this has become a possibility. I think potentially we'll see some automated tasks in the medical field in the next five years.

All these articles are from 2016.

Since we'll still be practicing law in five years, we thought we'd better start thinking about this.

First, will there be product liability litigation involving autonomous surgical robots at all? Existing surgical robots appear to have been "cleared" by the FDA through its §510(k) process, Taylor, 389 P.3d at 520, so there hasn't been much of a preemption barrier to bringing suit. We're not FDA regulatory specialists, but we have some doubt about how a fully autonomous surgical robot – described as something out of "science fiction" in the articles – could be marketed as "substantially equivalent" to existing devices. If autonomous surgical robots, or the software that runs them, must instead go through FDA pre-market approval, then they would be protected by preemption, subject only to "parallel claims" that the manufacturer somehow violated relevant FDA regulations. We are assuming, perhaps incorrectly, the continuity of the current preemption regime for medical devices.

Second, what happens to the learned intermediary rule where the product itself – an autonomous surgical robot – stands in the shoes of the traditional learned intermediary? Plaintiffs would, of course, give the same answer as always: Abolish the rule as outdated. We disagree. Any consideration of the jurisprudential reasons for the learned intermediary rule, discussed here, suggests just the opposite. The rule exists because patients can't be expected to understand for themselves the complexities of prescription medical products, so the law demands that the scientific and technological information necessary to make intelligent use of these products be provided to trained, professional "learned intermediaries," who are then expected to counsel their patients about individualized treatment decisions.

Does this rationale apply to autonomous surgical robots? Absolutely. These products will be some of the most advanced and complex medical technology yet produced, and the law cannot expect their manufacturers simply to provide patients with the instructions for use, tell them to "have at it," and let them make up their own minds. More than ever, patients will need medical professionals to explain the risks, benefits, and alternatives of automated surgery. Who, then, becomes the learned intermediary in a potential legal action when the traditional role of the surgeon is performed by a "product"? Looking to the purposes of the learned intermediary rule, our answer, at this point, is whichever physician has the legal duty to conduct the informed consent discussion with the patient. The learned intermediary rule exists in large part to ensure that the doctor who will be advising the patient has adequate information to do so. The professional standard that the medical community ultimately adopts to handle informed consent in automated surgery is its own business. But however the medical community resolves that issue, the duty of the robot manufacturer should be the same as ever: to provide information about the product adequate to enable the learned intermediary to evaluate that information, along with the patient's medical history, in order to make proper treatment decisions and to explain those decisions to the patient.

Third, what will the advent of autonomous surgical robots do to the legal distinction between "services" and "product sales" that has traditionally protected health care providers – including hospitals – from strict liability? We don't know. The answer probably depends on how the medical community integrates these robots into the health care system generally. If robotic surgery is carried out under the close supervision of medical professionals, then probably not much will change in terms of the sales/services distinction. That has been the case with currently available robot-assisted surgery. See Moll v. Intuitive Surgical, Inc., 2014 WL 1389652, at *4 (E.D. La. April 1, 2014) (robot use did not remove surgical claim from scope of malpractice statute).

However, if cost consciousness leads to "routine" automated surgery being conducted with only technicians on hand to ensure that the robots are functioning properly, then the entire exercise starts to look more like the use of a product than the provision of medical services. Once again, it will be up to the medical community to develop its standards of care for the use of autonomous surgical robots. If necessary, the law will adapt.

A number of sources of potential liability associated with automated surgery, such as failure to detect an unexpected cancer or a non-robot-related intra-operative complication (like an adverse reaction to anesthesia), would appear to implicate medical malpractice theories of liability (e.g., "lost chance") rather than product liability. How will courts handle claims at the intersection of medical malpractice and product liability – claims that, however good the robotic software is at its intended surgical use, it does not allow the robot to react to the unexpected as human surgeons can?

Fourth, in terms of product liability, what's the "product"? By that we mean whether the software – including the MRIs, CAT scans, and other patient-imaging data that tell the robot how to operate – is considered something separate from the physical robot itself. Is the software purchased, or provided, separately from the hardware that is the visible robot? This distinction could make a big difference in the available theories of liability. It could also be important in determining component part liability in cases where the hardware and software manufacturers point fingers at one another. In such cases, possible defendants include healthcare professionals, hospitals that maintain the robots, manufacturers of robotic hardware, and providers of software – both the software that runs the robot and the patient-specific electronic scans. As is true now, there is also the possibility that the patient may not follow proper instructions. Will autonomous surgical robots be required to have aviation-style "black boxes" to provide post-accident information?

The prevailing view under current law has been that software is not a "product." "Courts have yet to extend products liability theories to bad software, computer viruses, or web sites with inadequate security or defective design." James A. Henderson, "Tort vs. Technology: Accommodating Disruptive Innovation," 47 Ariz. St. L.J. 1145, 1165-66 n.135 (2015). The current Restatement defines a "product" as "tangible personal property." Restatement (Third) of Torts: Products Liability §19(a) (1998). In a variety of contexts, software has not been considered "tangible." See 2005 UCC Revisions to §§2-105(1), 9-102; Uniform Computer Information Transactions Act §102(a)(33) (NCCUSL 2002); ClearCorrect Operating, LLC v. ITC, 810 F.3d 1283, 1290-94 (Fed. Cir. 2015); United States v. Aleynikov, 676 F.3d 71, 76-77 (2d Cir. 2012); Wilson v. Midway Games, Inc., 198 F. Supp.2d 167, 173 (D. Conn. 2002) (product liability case); Sanders v. Acclaim Entertainment, Inc., 188 F. Supp.2d 1264, 1278-79 (D. Colo. 2002) (product liability case). However, a couple of cases have gone the other way. See Winter v. G.P. Putnam's Sons, 938 F.2d 1033, 1036 (9th Cir. 1991) (dictum in case involving books); Corley v. Stryker Corp., 2014 WL 3375596, at *3-4 (Mag. W.D. La. May 27, 2014), adopted, 2014 WL 3125990 (W.D. La. July 3, 2014). Also of possible note, a legally non-binding 2016 FDA draft guidance considers software to be a "medical device" subject to FDA regulation in situations that would probably include autonomous surgery.

The availability – or not – of strict liability could be a big deal in cases alleging injuries arising from fully automated surgery performed by autonomous surgical robots. What caused the injury? Was there a problem with the robot's hardware (such as a blade or needle malfunction)? Was the robot incorrectly maintained? These issues would not implicate the robot's software. On the other hand, was there a defect in the surgical software's algorithms (that is, a design defect)? Was the software designed properly but somehow corrupted (that is, a manufacturing defect), or hacked (an intervening cause)? Or, to introduce a different defendant, was there some sort of error in the electronic patient-imaging files that told the robot how to operate on this particular patient?

In strict liability, a "product" defect is the key element of liability (as a "good" is for warranty claims). In many jurisdictions, a product malfunction, in the absence of reasonable secondary causes, can establish a jury-submissible case. In negligence, the plaintiff must also prove breach of duty, and an accident is not generally considered probative of such a breach. Res ipsa loquitur – the negligence version of circumstantial proof of defect – is almost unheard-of in the context of medical treatment. If there is a "product," then strict liability is available. If there isn't a "product," the plaintiff is obliged to prove negligence. This distinction can be important, given how difficult proof of defect is likely to be. Cf. Pohly v. Intuitive Surgical, Inc., 2017 WL 900760, at *2-3 (N.D. Cal. March 7, 2017) (rejecting theory that invisible "microcracks" caused burns during robot-assisted surgery).

These are the issues that jump out at us as we consider the possibility of autonomous surgical robots for the first time. There are undoubtedly others. The technological possibilities are amazing. As defense lawyers, we consider it our job to help ensure that these possibilities are realized, and are not put out of reach by excessive liability.

This article is presented for informational purposes only and is not intended to constitute legal advice.