The recent deaths of a pedestrian and the driver of a Tesla vehicle have raised further questions about the technological limitations of the current generation of 'autonomous' vehicles, and where liability should lie.

We previously identified these problem areas, predicting that collisions involving autonomous vehicles would increase in 2018 and highlighting the disparity between manufacturers' marketing materials and the current capabilities of autonomous vehicles.

The responses to the collisions by various stakeholders demonstrate continuing difficulties in relation to data ownership and sharing, which will be vital in determining liability following an accident.

Uber self-driving taxi death

On 19 March 2018, an Uber autonomous vehicle collided with and killed a pedestrian who was crossing the road whilst pushing her bike. It is understood that the sensors failed to pick up the presence of the pedestrian in the dark.

Video footage released of the interior of the vehicle showed the safety driver was not concentrating on the road ahead. However, the vehicle was not slowing down at the time of the collision, and gave no audio or visual warning of the obstruction ahead.

Velodyne Lidar, creators of the sensors used on the vehicle, denied their technology was at fault, advising that the sensors work equally well in the dark and highlighting that Uber's autonomous system processes the sensor data and makes the decisions. Nvidia, makers of the graphics processing unit, also distanced their product from the incident, placing the blame firmly on Uber.

Investigations into the cause of the collision are ongoing and may take some time given the complex web of stakeholders.

Uber has moved quickly to agree a settlement with the family of the deceased pedestrian, although no details have been released as to whether liability was admitted or the terms of any financial package agreed.

Uber has also halted its tests in all North American cities, and has been suspended from testing its vehicles on the roads in Arizona.

Death of Tesla vehicle owner

On 23 March 2018, a Tesla Model X vehicle collided with a roadside barrier in California before catching fire, resulting in the death of the driver.

Tesla's statement in response, criticised by the US National Transportation Safety Board, suggested that the Autopilot function was engaged at the time of the crash, and that "the driver had received several visual and one audible hands-on warning earlier in the drive."

Tesla's statement set out that the driver had "about five seconds and 150 meters of unobstructed view" of the roadside barrier, yet took no evasive action.

It is unclear whether the driver was not paying attention in the seconds prior to the collision, or whether previous successful journeys on this stretch of road had prompted an over-reliance on the Autopilot functions and the dismissal of in-car warnings.

This is the latest in a line of incidents involving Tesla vehicles.

Earlier this year, a Level 2 Tesla vehicle travelling at an estimated 65 mph collided with a stationary fire truck on a freeway. The driver reported that the vehicle was on 'autopilot'. In another incident, a man was arrested for drunk driving in California, having been found asleep at the wheel of his stationary Tesla in the middle lane of a freeway. He advised officers that the vehicle was on Autopilot.

What can we learn?

  • At the current level of sophistication of autonomous vehicles, interaction with pedestrians remains problematic. The experience so far suggests that self-driving cars are some way off being able to coexist safely with humans on local roads. There remains a chasm between allowing a human to put a car into autopilot on a straight motorway and allowing it to navigate city streets onto which people and animals may wander. This raises the question of whether autonomous vehicles should, in the short term, be restricted to testing on specific tracks and highways until they can deal with such contingencies, or else face the consequences.
  • Whilst Toyota stated that they would suspend on-road testing of their autonomous vehicles for a brief period, General Motors and Ford both stated that the collisions would not affect their testing operations.
  • The move by Tesla to release technical data highlighting the driver's apparent inaction in response to visual and audible warnings of an obstruction was a clear act of PR damage limitation. The statements of Nvidia and Velodyne Lidar following the Uber collision took the same route. It is understandable that companies wish to protect brand reputation in a fledgling industry, yet there is an urgent need to prevent the aftermath of collisions involving autonomous vehicles from descending into claim and counter-claim by interested parties.
  • To this end, and if a driverless future is to be achieved, there is a clear need for the development of an international standardised data sharing and usage agreement. Issues of data sharing will be fundamental to the developing legal liability position globally.
  • Combined with the inevitability of some users overestimating the functions of their vehicles prior to the release of Level 4 and 5 vehicles, a further steady stream of future incidents must be expected, meaning that the provision of information to the relevant agencies is vital.
  • Such a data sharing agreement would allow prompt investigation of a collision involving autonomous vehicles. During a recent discussion regarding the Automated and Electric Vehicles Bill, the junior transport minister, Baroness Sugg, explained the absence of provisions in the Bill covering the data generated by autonomous vehicles, stating that, "It is likely that these data recorders will be regulated on an international basis... it would be against UK interests to act unilaterally before decisions have been taken."
  • Last year, global data protection watchdogs called for autonomous vehicle users to be given control over who accesses the data generated by their vehicles. The resolution passed advocated privacy controls allowing users to withhold access to different categories of data generated by these vehicles.
  • The Automated Driving Insurance Group of the ABI proposed that data should "be recorded in the event of a collision and made available on an equal basis to both manufacturer and insurer such that questions of status of automated systems, extent of driver input and liability can be quickly and impartially assessed."
  • It is unclear at this stage, however, what action will be taken at an international level to ensure that such regulation is in place.
  • The prompt settlement of any potential claim by Uber has prevented a lengthy discussion of the issue of liability in that instance, but liability disputes will occur. From a UK perspective, the Automated and Electric Vehicles Bill provides that the insurer of the vehicle will be liable for any collision, with the insurer then expected to recover against any secondary parties.
  • In principle, this sounds straightforward:

    • If an insurer is required to pay first party losses, they pursue a subrogated action against the responsible manufacturer of the defective equipment;
    • If the insurer settles a third party claim and then wishes to pursue a recovery action against any secondary parties, they pursue a claim under the Civil Liability (Contribution) Act 1978.
  • However, based on the current situation, this is not as easy as it sounds. Without access to all the relevant data from the vehicle, the sensors and other parts, and also statements from the driver, a motor insurer may well be in the dark as to what caused the collision.
  • A simple subrogated recovery action where a human driver is using a standard vehicle becomes a complex liability dispute once a certain level of automation is introduced into the mix.
  • Furthermore, whilst the dawn of Level 3 to 5 vehicles will result in liability discussions between manufacturers and the developers of constituent parts of those vehicles, the current generation of autonomous vehicles (those at Level 2) presents other liability issues.
  • Bearing in mind the circumstances of the two recent incidents, it is entirely possible that a claim could be founded on the basis of negligent training of the driver, inadequate instructions from the retailer/manufacturer and failure to guard against foreseeable misuse.
  • We have previously set out that a driver operating a Level 2 vehicle is expected to have detailed knowledge of the capabilities of their vehicle and how to take over control at a moment's notice, and that marketing terms such as "Autopilot" may be creating false impressions as to the limitations of these vehicles.
  • The distinction between Level 2 and Level 3 vehicles, and the differing responsibilities that lie with the driver of each, is increasingly nebulous.
  • As the Uber crash involved a test vehicle, and the Tesla crash was purportedly caused by the driver's failure to respond to the warning systems in the car, it is unlikely that the crashes will result in any recalls of those vehicle models. However, product recalls of automated vehicles could become large and complex. In the event that a hardware fault is identified following a series of crashes, a recall might not be limited to a single manufacturer or a single country.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.