Stuart Young examines the research and the reality behind driverless vehicles, and looks at how 'moral programming' might be regulated in order to move the area forward.

Our cars are set to change dramatically in the coming years. Autonomous and connected vehicles will be the future of personal travel. They promise better mobility for all, reduced congestion, an improved environment and, perhaps most importantly, increased safety.

The vast majority of road traffic accidents are caused when a driver makes the wrong choice. Autonomous vehicles (AVs) may never attain a flawless safety record, but by removing the driver from the decision-making process they will remove the source of most errors and therefore significantly improve safety: remove the human, remove the risk.

Yet, before we begin to delegate such decision-making to autonomous vehicles we need to ask what decisions they can reasonably make and on what basis.

The technology within AVs will use an algorithm designed to minimise damage to both machine and human. Academics and the press have referred to this type of mathematical solution to ethical decision-making as the 'moral algorithm'.
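At its simplest, such an algorithm can be pictured as scoring each physically available manoeuvre by its expected harm and choosing the lowest. The Python sketch below is purely illustrative - every field, weight and number is invented for this article and reflects no real AV system - but it shows where the 'moral' content lives: in the weights.

    from dataclasses import dataclass

    @dataclass
    class Action:
        """A candidate manoeuvre and its estimated consequences (illustrative only)."""
        name: str
        expected_injury_risk: float   # 0.0 (none) to 1.0 (certain serious injury)
        expected_damage_cost: float   # property damage, in arbitrary units

    def choose_action(candidates, injury_weight=1000.0, damage_weight=1.0):
        """Pick the manoeuvre with the lowest weighted expected harm.

        The weights encode a value judgement (injury vastly outweighs
        property damage) - which is exactly the 'moral' content of the
        algorithm, and exactly what any regulation would need to address.
        """
        def harm(a):
            return injury_weight * a.expected_injury_risk + damage_weight * a.expected_damage_cost
        return min(candidates, key=harm)

    # Illustrative use: braking hard risks a rear-end shunt, swerving risks the kerb.
    actions = [
        Action("brake hard", expected_injury_risk=0.02, expected_damage_cost=5.0),
        Action("swerve left", expected_injury_risk=0.01, expected_damage_cost=20.0),
    ]
    print(choose_action(actions).name)  # prints "brake hard" (25 vs 30 harm units)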

As part of UK Autodrive (the largest of three separate consortia currently trialling automated vehicle technology), my firm is contributing to the thinking on the societal and legal issues around autonomous vehicles by producing a series of white papers. This article summarises the latest in the series, on 'The Moral Algorithm'.

The reality of creating a moral algorithm

Research suggests that while academic debates on the moral algorithm may be intellectually stimulating, they run ahead of the technology being programmed into today's AVs.

This sentiment is echoed by members of the UK Autodrive consortium, who stress not only that it is technologically impossible to program an algorithm with an infinite number of moral values, but also that it is not something any government would ever sign up to. Unless the social norms of society change completely, no minister is going to stand up in Parliament and say it is better to kill one person than another.

'The trolley problem' and why it's overstated

Thus far, consideration of the moral algorithm has tended to be focused on the 'trolley problem'. The trolley problem is a thought experiment in ethics which says:

"There is a runaway trolley racing down the railway tracks. Ahead of it there are two people who are tied to the tracks and unable to move. If nothing else happens, they will be killed. However, you are standing in the signal box next to a lever that will switch the trolley to a side track. On the side track is one person, who also cannot move. Will you take responsibility for pulling the lever that will kill that one person, or do nothing and allow the two to die?"

In the world of autonomous vehicles, it might be rephrased in this way:

The trolley problem in an AV world:

There is an AV driving down the road. Someone else pulls into the road unexpectedly. The AV does not have the necessary stopping distance available to pull up safely, so has to steer to one side or the other.

On one pavement is an 80-year-old woman, on the other is a group of children. Which party does the AV choose to put at risk?
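Framed computationally, the dilemma is trivial to write down and impossible to parameterise. A deliberately naive sketch (the function and its inputs are invented for illustration) makes the point:

    def naive_trolley_choice(harm_if_left, harm_if_right):
        """Steer towards whichever side carries the lower estimated harm.

        The comparison is the easy part; computing harm_if_left and
        harm_if_right for an 80-year-old versus a group of children
        means valuing one life against another - the step no
        manufacturer or government is prepared to commit to in code.
        """
        return "left" if harm_if_left < harm_if_right else "right"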

Beyond the trolley problem

Although 'the trolley problem' grabs headlines when discussing the moral values of driverless vehicles, AVs will potentially have much wider scope for decision-making.

As drivers operating in this complex environment, we each make decisions on the road that reflect us as individuals. Currently, you drive your own car in your own way - driving is seen as an extension of your independence and personality.

Therefore, it will be important to widen thinking about the moral algorithm to encompass the many lesser decisions that we make as part of a journey.

For example, AVs will need to decide whether to enter an area marked with chevrons and bordered by a solid white line in order to pass a parked car, or break the speed limit to get out of the way of an ambulance. These sorts of actions involve breaking road traffic laws in the interests of a greater good such as keeping traffic flowing or giving priority to emergency vehicles.
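One way such trade-offs could be made explicit - and this is an illustration, not a description of any real system - is to attach a cost to each rule breach and permit it only when a safety check passes and the benefit clearly outweighs the cost:

    # Illustrative only: invented rule names, breach costs and benefit scores.
    RULE_BREACH_COST = {
        "cross_solid_white_line": 5.0,   # e.g. entering a chevroned area
        "exceed_speed_limit": 8.0,       # e.g. clearing a path for an ambulance
    }

    def permit_breach(rule, benefit, safety_margin_ok):
        """Allow a minor breach of the rules of the road only when it is
        safe to do so and the benefit clearly outweighs the breach."""
        if not safety_margin_ok:
            return False
        return benefit > RULE_BREACH_COST[rule]

    print(permit_breach("exceed_speed_limit", benefit=50.0, safety_margin_ok=True))  # True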

The moral algorithm will also need to be capable of dealing with countless situations which do not entail a breach of the rules of the road, but do concern the interaction of vehicles with others using the same congested road space.

These are currently matters of driver choice, reflected in different driving styles on the road - including, for instance, how aggressively a vehicle accelerates or brakes, how sharply it changes lanes or performs other manoeuvres, how much room it allows other vehicles to do the same, and so on.

These are moral questions because they concern where drivers choose to place themselves on a spectrum between the pure maximisation of self-interest (getting from A to B as quickly as possible) and an altruistic deferral to the interests of other road users. In practice, the same individual may well adopt a mix of styles, or drive differently on different days as mood or circumstances dictate.
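That spectrum is easy to state formally, which is precisely why consistency becomes a policy question: the 'moral' choice collapses into a single tunable number. A minimal sketch, with an invented altruism parameter:

    def journey_cost(own_delay, others_delay, altruism):
        """Blend self-interest against the interests of other road users.

        altruism = 0.0 reproduces a purely self-interested driver;
        altruism = 1.0 defers entirely to everyone else. Where on this
        dial a whole fleet of AVs should sit is a moral - and therefore
        regulatory - choice.
        """
        assert 0.0 <= altruism <= 1.0
        return (1.0 - altruism) * own_delay + altruism * others_delay

    # A pushy lane change: saves this vehicle 10s, costs others 15s in total.
    print(journey_cost(own_delay=-10.0, others_delay=15.0, altruism=0.3))  # -2.5, still 'worth it'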

AVs will, however, need to be programmed to deal consistently with these real world driving issues. Examples of good practice include 'zip-fastener' merging of slow traffic, taking turns where traffic lights have failed at an intersection, giving way to other vehicles emerging from side roads, stopping at zebra crossings, slowing down for horses and allowing room to cyclists when overtaking.

And they will need to do this in a considerably more complex environment, as they will be required to drive on busy roads and interact with other cars (some manual, some autonomous), road users (such as cyclists) and pedestrians.
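Once such a convention is agreed, encoding it is the easy part. Taking the 'zip-fastener' example above, a toy sketch might interleave two queues one-for-one:

    from itertools import zip_longest

    def zip_merge(lane_a, lane_b):
        """Interleave two queues of vehicles one-for-one, as in
        'zip-fastener' merging of slow traffic (a toy model: real
        merging would act on sensor data, not neat lists)."""
        merged = []
        for a, b in zip_longest(lane_a, lane_b):
            if a is not None:
                merged.append(a)
            if b is not None:
                merged.append(b)
        return merged

    print(zip_merge(["A1", "A2", "A3"], ["B1", "B2"]))
    # ['A1', 'B1', 'A2', 'B2', 'A3']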

Levels of autonomy

Level 0 - Driver only

Driver is responsible for the vehicle. Controls lateral and longitudinal movement at all times.

Level 1 - Driver assistance

Driver is responsible for the vehicle. Controls lateral and longitudinal movement at all times. However, the system can support lateral OR longitudinal control.

Level 2 - Advanced driver assistance

Driver is responsible for the vehicle. Controls lateral and longitudinal movement. May hand some control over to the system. Must actively monitor system performance and retake full control where necessary.

Level 3 - Conditional automation

Driver is responsible for the vehicle. Controls lateral and longitudinal movement. Can hand full control to the system. Must actively monitor system performance, retaking control as necessary.

Level 4 - Highly automated

Driver is responsible, and exercises control, only outside the specific use cases in which the car is able to self-drive.

Level 5 - Fully automated

System can control lateral AND longitudinal movement in all use cases. Driver intervention is not needed.

These six levels are the generally recognised steps to full autonomy, with safety expected to increase at each step.

There are already countless examples of vehicles with some element of automation on the road, such as self-parking or adaptive cruise control.

Nevertheless, conventional cars will remain on the road for many decades, so AVs will have to co-exist with them. Separate roads or lanes for AVs are possible in principle, but would be costly and, in many places, unrealistic. The practical solution is for AVs to share the road with their less 'intelligent' ancestors.

So, knowing that AVs will be sharing the roads with conventional vehicles for many years ahead, on what basis should a set of moral values be programmed into a vehicle? Should we give greater priority to consistency, or should we reflect the individual preferences of travellers in that vehicle, to the extent that the technology is capable of doing so?

Personal safety is paramount

Those involved in AV development agree that it is almost impossible to conceive of a transport system with a 100% safety record.

Yet, if the principal selling point of AVs is their contribution to increases in safety, the level of risk we are willing to accept may actually be quite low.

In a survey, researchers found that people wanted to travel in vehicles that would protect them and their families at all costs, and did not approve of regulation that would compromise passenger safety in order to minimise casualties among other road users.

The researchers concluded that regulation that enforced a purely utilitarian moral algorithm would delay the uptake of AVs to such an extent that "lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether".

Ultimately, given the safety imperative, the policy goal must be to encourage the take-up of as many AVs as possible, as quickly as possible. This requires co-ordination at state level, consistency, and a clear legal framework which facilitates and encourages AV technology.

Co-operation is key

AVs make demands not only on car manufacturers' research and development teams, but also on the way their retail arms sell the end product. Progress and take-up will be improved if manufacturers work together.

If all the stakeholders in the AV market really believe that automation will equal a safer, more ethical transport system, then they will have to find a way to work together to achieve this, while protecting and developing their perceived unique selling points.

Regulation

Although the industry may be improving co-operation, this by itself is not enough. The safety imperative points to a need for state regulation to facilitate and encourage the take-up of AVs by regulating the moral algorithm rather than assuming a form of informal self-regulation by manufacturers.

In order to achieve the economic, mobility and safety benefits of AVs, there needs to be a new and appropriate legal and regulatory framework that is specifically designed for the purpose. Regulation needs to lay a clear path so that existing and new participants in the market can safely move forward, literally and metaphorically, by securing public trust and ensuring that AV manufacturers are not working at legal risk. Regulation will need to balance the potentially competing factors of adherence to the law, safety and maximising take-up.

Our experts all believe that the current lack of bespoke regulation leaves the industry in a state of limbo.

In a recent consultation on proposals to support advanced driver assistance systems and automated vehicles, the government stated that it would "continue to regulate in a rolling programme of reform. This will help to facilitate the introduction of innovative new technologies in a safe, agile and evidence-based manner for the benefit of UK consumers and business."

Doubt delays development

Research suggests that waiting for a shift in public opinion before even setting the boundaries of a regulatory regime could hold back AV development.

It is therefore important not to let the trolley problem monopolise the debate regarding the moral algorithm. The trolley problem is a distraction from the much larger category of day-to-day moral decisions and the need to develop solutions to them.

There are numerous other hurdles to jump, including problems with liability, cyber-security and public trust. It is particularly in the area of public trust that the car industry and governments around the world must start a conversation that reconciles differing moral and ethical standpoints.

That is why 'A programme of public education and consultation' is one of several areas for consideration suggested in our recent white paper.

Other suggestions include:

  • The formulation of a coherent plan for allocating regulatory responsibility for AVs, and in particular the moral algorithms that govern their behaviour.
  • Consideration of how regulation - which ultimately requires the oversight of both vehicle and road network regulation in an integrated way - can best be implemented across the various tiers of UK government including the devolved administrations.
  • The creation of a legal framework that is sufficiently flexible and responsive to be able to make swift decisions about new technology as it is developed, within a clear decision-making framework.
  • As part of the development of the legal framework, co-operation with the relevant international bodies to assist in the development of international standards for AVs.
  • Consideration of how co-operation between sector participants can best be facilitated (or required where necessary) to ensure that AV development takes place as rapidly as possible and that connected vehicles (CVs) and AVs can interact with each other.
  • The formulation of a coherent short-, medium- and long-term plan regarding the deployment of AVs on public roads in the UK.
  • Development of a policy regarding how the moral algorithm will operate in terms of major safety situations.

Given the technical challenges of connected and autonomous vehicles, the first step, before any of these recommendations can be implemented, is to recognise that regulatory decisions need to be taken, and taken soon.

This article was originally published in Computers & Law magazine in February 2017.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.