The (not so) perfect choices of the autonomous car
The Trolley problem
Two children are playing on the sidewalk. Suddenly the ball they are playing with bounces away, and the two kids run into the street without looking. A passing autonomous car computes that it can no longer stop and has but two choices: run over the kids, killing them, or swerve into a truck coming from the opposite direction, probably killing the car's own passenger. What should the car do?
This dilemma is not new, but in his book '21 Lessons for the 21st Century', Yuval Noah Harari, the Israeli historian and philosopher with pop-star status, offers some thought-provoking considerations.
The dilemma of our autonomous car is not new: it is a well-known thought experiment in ethics, the Trolley problem. It opposes two schools of moral thought, utilitarianism and deontological ethics. Or, in plain English: should a choice follow fixed moral rules, or should it aim at the best possible outcome?
The autonomous car adds a very real dimension to this discussion: it is no longer a theoretical debate among philosophers such as Immanuel Kant, but a very real problem with life-or-death consequences.
Lousy philosophers, lousy drivers
Humans don't react based on philosophical views; emotions and instincts prevail. This can be illustrated by an experiment in the US in the early 1970s involving theology students. On short notice, they were invited to come to a classroom and present the parable of the good Samaritan (an anecdote from the Bible about a Jew who was robbed and beaten; he was neglected by his own people but taken care of by a Samaritan, even though Jews and Samaritans were enemies). The organisers of the experiment placed an actor on the way to the classroom, playing a poor man asking for help. But the students didn't help: they were focused on their presentation and neglected the moral obligation to help a person in need, the obligation to be good Samaritans themselves.
Human beings are not only bad at philosophy but also at driving. Well over a million people die in car accidents worldwide every year, because drivers are angry, aggressive, distracted, stressed, incapable or inexperienced, despite all the calls for ethical and responsible behaviour.
Computers and algorithms are not driven by instincts; they follow the strict procedures they are given as input. Autonomous cars would strictly follow ethical procedures, if we could translate ethics into numbers, statistics and code. Society could ask philosophers to draw up an ethical guide for autonomous cars. Every time such cars run into a dangerous situation, they would make choices based on that strict ethical procedure.
Will that procedure be perfect in every situation? Probably not. But would it do better than today's human drivers? Most probably, yes. Don't forget: we currently count well over a million deaths per year.
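To make the idea of a "strict ethical procedure" concrete, here is a deliberately crude sketch of one possible utilitarian rule: pick the option with the lowest expected harm. All names, probabilities and severity weights below are invented for illustration; this is not how a real autonomous-driving system is specified.

```python
# Hypothetical sketch of a utilitarian decision rule: choose the option
# whose expected harm (probability of harm times severity) is lowest.
# All scenarios and numbers are invented for illustration only.

def choose_action(options):
    """Each option is a tuple (name, probability_of_harm, harm_severity).
    Return the name of the option with the lowest expected harm."""
    return min(options, key=lambda o: o[1] * o[2])[0]

# The opening dilemma, with made-up figures:
options = [
    ("run over the children", 0.9, 2.0),   # two lives at high risk
    ("swerve into the truck", 0.7, 1.0),   # one life at somewhat lower risk
]
print(choose_action(options))  # a utilitarian rule swerves into the truck
```

A deontological procedure in Kant's spirit would instead filter out forbidden actions before (or instead of) comparing outcomes, which is precisely why it matters who gets to write the rules.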
Altruist or Selfish?
OK, let's assume we have the technology for an autonomous car that can follow a clear ethical procedure. The question then becomes: who will write that procedure? Philosophers? They don't agree on ethical dilemmas. We already mentioned Immanuel Kant, who wants to base decisions on absolute rules, as opposed to John Stuart Mill, who wants to decide according to the consequences of the different options.
Or do we leave the choice to the manufacturers? Can Tesla and Toyota decide? Or maybe we let consumers decide? We could offer the option: do you want an altruist car or a selfish car? Applied to our opening dilemma: do you want a car that lets the kids live, or one that saves the life of its owner?
In 2015 a study was launched to put this very question to a number of people. When asked about their (theoretical) preference, most replied that they would opt for the altruist car. When asked which car they would actually buy, most respondents chose the selfish one. No surprise there.
A logical choice would be to let the government decide. Authorities would write a moral code for autonomous cars, and every car would follow the same rules and procedures. For the first time in history, a law would be strictly respected in all circumstances. This might look like a good idea, but historically, the impossibility of fully enforcing the law has always been a barrier against political abuse and totalitarian regimes. Authoritarian regimes have always tried to concentrate all information and power in one place; the technology of the 21st century would finally allow this.
One set of rules and procedures that all cars respect to the fullest has many advantages: all cars respect all traffic regulations, all cars are interconnected and thus know how the others will react, and all reactions will be rational. In summary: the death toll of road traffic will drop significantly. On the downside, all power and all information will be centralised, privacy will be more vulnerable than ever, and responsibility will no longer lie with the individual driver but with those who told the autonomous cars how to react.
What choices will society make? How will we make them? What price are we willing to pay?
How will insurance companies react to this shift?
The arrival of autonomous cars not only raises ethical dilemmas, it also creates new challenges for insurance companies. In the past, insurers have based their business model on three competitive advantages: risk pooling, spreading financial risks evenly among a large number of contributors; risk timing, the ability to even out risk over time; and risk assessment, using historical causal event data to estimate loss probability.
As no comparable information exists for autonomous vehicles, the insurance industry must redesign its current pricing structure to cover the decisions made by a machine instead of a human. According to several experts, autonomous vehicles will help reduce accidents, as the vast majority of them are caused by humans (human error is involved in about 95% of all road traffic accidents in Europe). Does this mean that the historic business model of insurance companies will not be viable in the future?
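The pricing logic at stake can be sketched in a few lines: the classic "pure premium" is the expected loss per policy, estimated from historical claim frequency and severity. The figures below are invented purely to illustrate why a collapse in (human-error) claim frequency, combined with an absence of history for the remaining risks, unsettles this model.

```python
# Hypothetical sketch of classic risk-based pricing. The "pure premium"
# is the expected annual loss per policy: claim frequency times average
# claim cost. All numbers are invented for illustration.

def pure_premium(claim_frequency, average_claim_cost):
    """Expected annual loss per policy = frequency * severity."""
    return claim_frequency * average_claim_cost

# A historical (human-driver) book of business, illustrative figures:
human = pure_premium(claim_frequency=0.05, average_claim_cost=8000)       # 400.0

# With far fewer human-error accidents the frequency drops sharply, but the
# insurer has no comparable history for the remaining software/cyber risks:
autonomous = pure_premium(claim_frequency=0.01, average_claim_cost=8000)  # 80.0

print(human, autonomous)
```

The point of the sketch: the formula itself still works, but its inputs were always derived from decades of human-driver data, which is exactly what is missing for machine-made decisions.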
Insurers will most likely have to focus on covering new risks: for instance, a failure in the software processing the collected data, which could lead to accidents, or a cyberattack in which a hacker takes control of the car. A hacker could also obtain sensitive user information, such as travel patterns. This will drive insurers to develop a whole new range of products, protecting privacy and personal data.