Who will be killed by a self-driving car? Machine, save as many people as possible, but most of all, save me!

If a situation arises in which a car's autonomous system has to make a split-second choice about whom to sacrifice in an imminent accident, how should it react? Should it sacrifice its passengers to save pedestrians? Kill a pedestrian, if necessary, to spare, for example, a family of four traveling in the car? Or should it always protect itself first?

While over sixty companies have already received permits to test autonomous vehicles in California alone, it is hard to say the industry is ready to face ethical dilemmas. At the moment, it is struggling with more basic problems: the operation and navigational efficiency of its systems and simply avoiding collisions and unforeseen events. In situations like the recent killing of a pedestrian in Arizona, or subsequent crashes (1), the cause so far has simply been system failure, not some kind of "ethical choice" made by the car.

Save the rich and young

These kinds of decisions are not abstract problems, as any experienced driver can attest. Last year, researchers from the MIT Media Lab analyzed over forty million responses from respondents around the world, collected in a study launched in 2014. Their survey platform, called the Moral Machine, showed that people in different parts of the world give different answers to similar questions.

The most general conclusions are predictable. In extreme situations, people prefer saving humans over animals, aim to save as many lives as possible, and favor the young over the elderly (2). There are also some less obvious preferences: for rescuing women over men, higher-status people over poorer people, and pedestrians over car passengers.

2. Who should the car save?

Since almost half a million respondents also filled out demographic questionnaires, it was possible to correlate their preferences with age, gender, and religious beliefs. The researchers concluded that these differences did not "significantly affect" people's decisions, but they noted some cultural influences. The French, for example, tended to weigh decisions by the estimated number of deaths, while in Japan this factor carried the least weight. At the same time, in the Land of the Rising Sun the lives of the elderly are valued much more highly than in the West.

"Before we allow our cars to make their own ethical decisions, we need to have a global debate about this. When companies working on autonomous systems learn our preferences, they will develop ethical algorithms for machines based on them, and politicians can begin to introduce adequate legal provisions," the scientists wrote in Nature in October 2018.

One of the researchers involved in the Moral Machine experiment, Jean-François Bonnefon, found the preference for rescuing people of higher status (such as executives over the homeless) alarming. In his opinion, it is closely related to the level of economic inequality in a given country: where inequality was greater, the preference for sacrificing the poor and the homeless was stronger.

One earlier study showed, in particular, that respondents believe an autonomous car should protect as many people as possible, even if that means sacrificing its passengers. At the same time, however, the same respondents stated that they would not buy a car programmed this way. The researchers explained that while people find it more ethical to save more lives, they are also self-interested, which may signal to manufacturers that customers will be reluctant to buy cars equipped with altruistic systems. Some time ago, representatives of Mercedes-Benz said that if their system could save only one person, they would choose the driver, not the pedestrian. A wave of public protest forced the company to withdraw the declaration, but research clearly shows that there was a good deal of hypocrisy in this righteous indignation.
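The tension described above, between the utilitarian preference people voice in surveys and the self-protective behavior they show as buyers, can be sketched as two competing decision policies. The following is a minimal, hypothetical illustration; the outcome labels and casualty numbers are invented for the example and do not represent any manufacturer's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable accident (illustrative only)."""
    label: str
    passengers_lost: int
    pedestrians_lost: int

    @property
    def total_lost(self) -> int:
        return self.passengers_lost + self.pedestrians_lost

def utilitarian_choice(outcomes):
    # Save as many people as possible, regardless of who they are.
    return min(outcomes, key=lambda o: o.total_lost)

def self_protective_choice(outcomes):
    # Protect the passengers first; only then minimize total harm.
    return min(outcomes, key=lambda o: (o.passengers_lost, o.total_lost))

# A dilemma of the kind the survey poses: swerve and lose one passenger,
# or stay on course and hit four pedestrians.
dilemma = [
    Outcome("swerve into barrier", passengers_lost=1, pedestrians_lost=0),
    Outcome("stay on course", passengers_lost=0, pedestrians_lost=4),
]

print(utilitarian_choice(dilemma).label)      # fewer deaths in total
print(self_protective_choice(dilemma).label)  # passengers come first
```

The two policies pick opposite outcomes for the same dilemma, which is exactly the gap between what respondents endorse in the abstract and what they would want in a car they ride in.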

The first attempts at legal regulation in this field are already appearing in some countries. Germany has passed a law requiring driverless cars to avoid injury or death at all costs. The law also states that algorithms may never base their decisions on personal characteristics such as age, gender, or health.

Audi takes responsibility

No designer is able to predict all the consequences of a car's operation; reality can always supply a combination of variables that has never been tested before. This undermines our faith in the very possibility of "ethically programming" a machine. It seems to us that in situations where an error occurs and a tragedy happens "through the fault of the car", the responsibility should be borne by the manufacturer and the developer of the system.

Perhaps this reasoning is correct, but not because the machine made a wrong choice. Rather, because a system was allowed onto the road that was not 100% free from the possibility of making one. That seems to be the logic of Audi, which does not shirk this shared responsibility: the company recently announced that it would take responsibility for accidents involving the 2019 A8 while its automatic Traffic Jam Pilot (3) system is in use.

3. Audi Traffic Jam Pilot interface

On the other hand, there are millions of people driving cars who also make mistakes. So why should machines, which statistically make far fewer mistakes than humans, as numerous studies show, be discriminated against in this respect?

If anyone thinks that the dilemmas of ethics and responsibility in the world of autonomous vehicles are simple, they should keep thinking...
