Creating the Ethics of Self-Driving Cars

Love Tsai, Applied Science, July 12, 2020

Self-driving cars require reliable ways to assess the ethics of their actions in both low- and high-impact events. With nobody behind the literal or proverbial steering wheel anymore, philosophers and scientists must work to find an adequate ethical system before autonomous vehicles hit the road.

Image retrieved from: https://commons.wikimedia.org/wiki/File:Waymo_self-driving_car_front_view.gk.jpg

A recent study by the American Automobile Association (AAA) found that almost three-quarters of the U.S. population is afraid of self-driving cars, otherwise known as autonomous vehicles (AVs).3 Americans were more likely to fear AVs if they had no previous exposure to automated features such as lane-keeping assistance or parallel-parking assistance.3 The study showed that, in general, fear of the unknown plays a very large role in how people approach technological development. Another commonly cited reason for this fear is a lack of trust in the car itself and its decision-making capabilities: if people were to start using AVs, they would be giving up the ability to make judgment calls for themselves.3 Many philosophers researching the decision-making mechanisms of self-driving cars echo this very concern and are searching for the best model to implement in the cars of the future.

In particular, philosophers are researching the ethics of artificial intelligence. Once decision-making is placed in the care of these programs, there must be a reliable system of cost-benefit analysis that makes the “right” decisions. This has proven to be a difficult battle, as researchers wrestle with definitions of “right” and “most efficient.” Where does the burden of responsibility lie? Whom should the car prioritize? Things people usually take for granted must now be analyzed in detail. In the traditional utilitarian school of philosophy, “right” means decreasing damage as much as possible: if an accident involving two pedestrians and one driver had to hurt one group to protect the other, the driver, the owner of the vehicle, would be sacrificed for the pedestrians. This would decrease damage because there would be only one death instead of two, so utilitarian ethics would render this the “correct” judgment. Other models of morality aim to program the car to be “selfish,” protecting the owner of the vehicle. Under the same conditions, a “selfish” car would reach the opposite result, prioritizing the driver over the two pedestrians.
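To make the contrast concrete, the two decision rules can be caricatured in a few lines of Python. This is a minimal, hypothetical sketch: the option names and harm estimates are invented for illustration and are not drawn from any real vehicle's software.

    # Hypothetical illustration of two decision rules for an unavoidable accident.
    # Harm values are made-up estimates of deaths caused by each option.
    options = {
        "swerve_into_barrier": {"driver_harm": 1, "pedestrian_harm": 0},
        "stay_on_course": {"driver_harm": 0, "pedestrian_harm": 2},
    }

    def utilitarian_choice(options):
        # Minimize total harm, no matter who bears it.
        return min(options, key=lambda o: sum(options[o].values()))

    def selfish_choice(options):
        # Minimize harm to the vehicle's occupants only.
        return min(options, key=lambda o: options[o]["driver_harm"])

    print(utilitarian_choice(options))  # swerve_into_barrier: one death instead of two
    print(selfish_choice(options))      # stay_on_course: the driver is protected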

Conversations surrounding the ethics of AVs often follow the template of the trolley problem thought experiment: given two parties, who is saved and why? However, philosopher Veljko Dubljević argues for a different approach in his recent paper “Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles.”1 Dubljević advocates for an agent-deed-consequence (ADC) model, which allows the fuller context of a situation to influence the outcome of an accident. The ADC model uses an equation with three variables (agent, deed, and consequence) to compute the morality of any given decision. Each variable is given a positive or negative weight to signify its ethical value separate from the context of the actual scenario.2 This matters because humans often make moral judgments quickly and intuitively by weighing the various aspects of an event. For example, a man stealing luxury watches to resell is usually judged more harshly than a man stealing food to feed his family. In those two scenarios, the deed (stealing) is the same, but the agent’s intent and the consequence, what happened as a result of the event, are drastically different. The ADC model is meant to replicate that sort of ethical subtlety in its application to new technology.2
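In code, one could picture the ADC evaluation as a weighted combination of three signed valences. The sketch below is purely illustrative: the additive formula, the weights, and every numeric valence are assumptions made for this example, not the scheme published in Dubljević's paper.

    # Illustrative ADC-style scoring: each component carries a positive or
    # negative valence in [-1, 1]; higher totals are judged more moral.
    # The additive combination and all numbers are assumptions, not values
    # from the published model.
    def adc_judgment(agent, deed, consequence, weights=(1.0, 1.0, 1.0)):
        w_a, w_d, w_c = weights
        return w_a * agent + w_d * deed + w_c * consequence

    # The two theft scenarios from the text: the deed (stealing) is identical,
    # but the agent's intent and the outcome differ.
    resale = adc_judgment(agent=-0.8, deed=-0.5, consequence=-0.6)
    feeding_family = adc_judgment(agent=0.4, deed=-0.5, consequence=0.3)
    print(resale, feeding_family)  # the second scenario scores far less negatively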

The ADC model is also relevant because the simpler utilitarian and “selfish” models detailed above can be exploited quite easily. Dubljević gives the example of criminals using cars to ram into people on boardwalks or in dense cities: a “selfish” model would protect the attacking driver, allowing more people to be killed, while criminals could manipulate a utilitarian model by loading the car with weights that mimic the body mass of a full load of passengers.1 The ADC model of moral judgment, on the other hand, can take the full context into account through its three components. In this specific case, the agent, deed, and consequence would all be deemed negative and the entire situation labeled “high-impact” (posing significant danger to human life). With this information available, the car is far less prone to manipulation by criminals who plan their attacks around the limitations of utilitarian or “selfish” models.
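One rough way to picture the “high-impact” safeguard is a rule that lets a strongly negative deed veto an action outright rather than being traded off against other factors. Again, this is a hypothetical sketch: the threshold, the veto behavior, and the averaging are assumptions for illustration, not Dubljević's specification.

    # Hypothetical high-stakes rule: when a situation poses significant danger
    # to human life, a strongly negative deed makes the action categorically
    # impermissible. The -0.5 threshold and the veto are illustrative assumptions.
    def adc_judgment_high_stakes(agent, deed, consequence, high_impact=False):
        if high_impact and deed < -0.5:
            return -1.0  # veto: no consequence can offset the deed
        return (agent + deed + consequence) / 3.0

    # A deliberate ramming attack: all three components negative and the
    # situation flagged high-impact, so the action is rejected regardless of
    # any staged "passengers" meant to game a utilitarian tally.
    print(adc_judgment_high_stakes(agent=-1.0, deed=-0.9, consequence=-1.0,
                                   high_impact=True))  # -1.0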

The conversation is far from over. Numerous questions arise with the implementation of an ADC model as well, such as whether a universal standard for moral judgment is even feasible given the contextual nuance the model demands, and how to weigh the tradeoffs between normative and descriptive ethics. In the end, as technology gets closer and closer to putting self-driving cars on the road, philosophers will be scrambling to answer the underlying question: just how can machinery operate ethically?

References

  1. Dubljević, V. (2020). Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00242-0
  2. Dubljević, V., Sattler, S., & Racine, E. (2018). Correction: Deciphering moral intuition: How agents, deeds, and consequences influence moral judgment. PLOS ONE, 13(10), e0206750. https://doi.org/10.1371/journal.pone.0206750
  3. Edmonds, E. (2020). Three in Four Americans Remain Afraid of Fully Self-Driving Vehicles. AAA NewsRoom. Retrieved 13 July 2020, from https://newsroom.aaa.com/2019/03/americans-fear-self-driving-cars-survey/
  4. North Carolina State University. (2020). What ethical models for autonomous vehicles don’t address, and how they could be better. ScienceDaily. Retrieved July 6, 2020, from https://www.sciencedaily.com/releases/2020/07/200706152708.htm
