How Self-Driving Car Policy Will Determine Life, Death and Everything In-Between

The time to think about ethics in autonomous vehicle accidents is now.

Brett M. Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University, and Evan Selinger is a professor of philosophy at Rochester Institute of Technology. They are co-authors of Re-Engineering Humanity (Cambridge University Press, forthcoming April 2018).

Self-driving cars are here. More are on their way. Major automakers and Silicon Valley giants are clamoring to develop and release fully autonomous cars to safely and efficiently chauffeur us. Some models won’t even include a steering wheel. Along with many challenges, technical and otherwise, there is one fundamental political question that is too easily brushed aside: Who decides how transportation algorithms will make decisions about life, death and everything in between?

The recent fatality involving a self-driving Uber vehicle won’t be the last incident where human life is lost. Indeed, no matter how many lives self-driving cars save, accidents still will happen.

Imagine you’re in a self-driving car going down a road when, suddenly, the large propane tanks hauled by the truck in front of you fall out and fly in your direction. A split-second decision needs to be made, and you can't think through the outcomes and tradeoffs for every possible response. Fortunately, the smart system driving your car can run through tons of scenarios at lightning-fast speed. How, then, should it determine moral priority?

Consider the following possibilities (a rough sketch of how such policies might be expressed in code follows the list):

  1. Your car should stay in its lane and absorb the damage, thereby making it likely that you’ll die.
  2. Your car should save your life by swerving into the left lane and hitting the car there, sending its passengers to their deaths—passengers known, according to their big data profiles, to have several small children.
  3. Your car should save your life by swerving into the right lane and hitting the car there, sending the lone passenger to her death—a passenger known, according to her big data profile, to be a scientist close to finding a cure for cancer.
  4. Your car should save the lives worth the most, measured by the amount of money paid into a new form of life-assurance insurance. Assume that each person in a vehicle could purchase insurance against these rare but inevitable accidents, and that smart cars would then prioritize based on ability and willingness to pay.
  5. Your car should save your life and embrace a neutrality principle in deciding among the means for doing so, perhaps by flipping a simulated coin and swerving right if it comes up heads and left if it comes up tails.
  6. Your car shouldn’t prioritize your life and should embrace a neutrality principle by randomly choosing among the three options.
  7. Your car should execute whatever option most closely matches your personal value system and the moral choices you would have made if you were capable of making them. Assume that when you first purchased your car, you took a self-driving car morality test consisting of a battery of scenarios like this one, and that the results “programmed” your vehicle.
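
To make that concrete, here is a minimal, purely illustrative Python sketch of how a few of these policies might be written down as interchangeable decision functions. Everything in it, from the option names to the expected_fatalities field and the simple tie-breaking, is hypothetical, invented for the sake of argument rather than drawn from any real vehicle's software.

```python
import random
from dataclasses import dataclass


@dataclass
class Option:
    """One candidate maneuver and its predicted consequences (hypothetical fields)."""
    name: str                   # e.g. "stay_in_lane", "swerve_left", "swerve_right"
    occupant_survives: bool     # is the car's own passenger expected to survive?
    expected_fatalities: float  # predicted deaths across everyone involved


# Each policy is just a function from a list of options to a chosen option.
# Which function actually ships is the governance question, not a technical one.

def protect_occupant_neutrally(options):
    """Roughly option 5: save the occupant, choose neutrally among ways of doing so."""
    survivable = [o for o in options if o.occupant_survives]
    return random.choice(survivable) if survivable else random.choice(options)


def minimize_expected_deaths(options):
    """A utilitarian variant: pick the maneuver with the fewest predicted fatalities."""
    return min(options, key=lambda o: o.expected_fatalities)


def follow_owner_profile(options, ranking):
    """Roughly option 7: follow a ranking recorded when the owner took a morality test."""
    return min(options, key=lambda o: ranking.index(o.name))


options = [
    Option("stay_in_lane", occupant_survives=False, expected_fatalities=1.0),
    Option("swerve_left", occupant_survives=True, expected_fatalities=2.0),
    Option("swerve_right", occupant_survives=True, expected_fatalities=1.0),
]
print(minimize_expected_deaths(options).name)  # "stay_in_lane" (ties broken by list order)
```

Nothing in the code settles which function should be installed. That choice is exactly where the values enter.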

There’s no value-free way to determine what the autonomous car should do. The choice presented by options 1–7 shouldn’t be seen as a computational problem that can be “solved” by big data, sophisticated algorithms, machine learning, or any form of artificial intelligence. These tools can help evaluate and execute options, but ultimately, someone—some human beings—must choose and have their values baked into the software.

Read more: Tempe Police Release Footage of Fatal Uber Self-Driving Car Accident

Who should get decision-making power? Should it be politicians? The market? Insurance companies? Automotive executives? Technologists? Should consumers be allowed to customize the moral dashboard of their cars so that their vehicles execute moral decisions that are in line with their own preferences?

Don’t be fooled when people talk about AI as if it alleviates the need for human beings to make these moral decisions, as if AI necessarily will take care of everything for us. Sure, AI can be designed to make emergent, non-transparent, and even inexplicable decisions. But because self-driving cars turn drivers into passive passengers, decision-making shifts from drivers to designers and programmers, and governance remains essential. The only question is which form of governance gets adopted.

The scenario we’ve described is based on an old philosophical thought experiment called the trolley problem. In the original experiment, a person is faced with the decision about pulling a lever to divert a trolley from one track to another and, in doing so, save five lives but take another. MIT developed a modern interactive version called the Moral Machine.

It’s not surprising that the trolley problem comes up in virtually every discussion of autonomous vehicles. To date, the debate has primarily focused on death-dealing accidents and raised important questions about who gets to decide who lives and dies. Some insist that the question of who decides must be resolved before autonomous cars are given free rein on the roads. Others argue that such decisions concern edge cases and should be deferred to the future so that innovation won't be stalled. And some deny that the trolley problem scenarios are even relevant once super-smart braking systems are built into each car.

The critical social policy questions need to be addressed proactively while systems are being designed, built, and tested. Otherwise, values become entrenched as they’re embedded in the technology. That may be the aim of denialists pining for perfectly safe systems (unless they’re truly deluded by techno-utopian dreams). The edge-case argument is more reasonable if you focus exclusively on the trolley problem dilemma. But the trolley problem captures only one small, albeit important, piece of the puzzle. To see why, we need to consider scenarios that don’t involve life-or-death decisions.

Let’s focus on accidents. Self-driving cars will reduce the number of accidents, but again, do not be fooled by the siren’s call of perfection. There still will be accidents that cause:

  • considerable bodily loss, such as the loss of limbs, but not death;
  • considerable bodily damage that disables the injured person for 24 months;
  • considerable mental damage that limits the injured person's ability to ride in an automobile and forces the person to use less efficient modes of transportation;
  • considerable damage to the person's vehicle; or
  • damage and delays.

Assume that the smart system driving your car is presented with various options that allocate these costs according to the logics reflected in the death-dealing accident scenario. Again, there’s no value-free way to decide, and it’s not an ad hoc decision. Engineers will embed the ethics in decision-making algorithms and code. Again, society must determine how to proceed proactively. Keep in mind that this governance issue is not about assigning fault; it is only about how to determine moral priority and who should bear the social costs. (Of course, as we transition to smart transportation systems over the next few decades, determining fault may be quite important.)
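
As a rough illustration of how such cost allocations might be encoded, consider the hypothetical sketch below. The severity weights are arbitrary placeholders; deciding what they should actually be is precisely the kind of moral judgment at stake.

```python
# Hypothetical severity weights for the non-fatal outcomes listed above.
# The numbers are placeholders: choosing them is a moral decision, not an
# engineering one.
SEVERITY_WEIGHTS = {
    "loss_of_limb": 100.0,
    "disabled_24_months": 60.0,
    "mental_injury_limits_travel": 40.0,
    "vehicle_destroyed": 10.0,
    "damage_and_delay": 1.0,
}


def weighted_cost(outcomes):
    """Sum the weighted harms predicted for one candidate maneuver."""
    return sum(SEVERITY_WEIGHTS[kind] * count for kind, count in outcomes.items())


def least_costly(maneuvers):
    """Pick the maneuver whose predicted harms have the lowest weighted total."""
    return min(maneuvers, key=lambda m: weighted_cost(m["outcomes"]))


candidates = [
    {"name": "brake_hard", "outcomes": {"vehicle_destroyed": 1, "damage_and_delay": 3}},
    {"name": "swerve", "outcomes": {"loss_of_limb": 1}},
]
print(least_costly(candidates)["name"])  # "brake_hard" (weighted cost 13.0 vs. 100.0)
```

Change the weights and the “right” maneuver can change with them; the code merely executes whatever trade-off someone chose.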

Read more: Who's Guilty When a Brain-Controlled Computer Kills?

Now put accidents aside; there are still many other costs and benefits that smart transportation systems will be asked to manage. Suppose weather causes a disruption and smart traffic management systems kick in. What should the systems optimize? Should the objective be to minimize congestion or the social costs of congestion? Perhaps letting some folks wait for a while on a fully congested road would allow other folks to get to their destination more quickly. Should people be able to pay for higher priority, so that their vehicles receive special treatment? Today, police cars, ambulances, and buses sometimes get special treatment. But these narrow exceptions aside, our roads are managed without prioritization. First-come, first-served is the default.
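
The gap between minimizing congestion and minimizing the social costs of congestion can be made concrete with a toy objective function. In the hypothetical sketch below, the per-passenger cost_per_minute figures and the option to pay for priority are invented for illustration; deciding whether and how to assign such numbers to real people is the contested part.

```python
from dataclasses import dataclass


@dataclass
class Trip:
    passenger: str
    delay_minutes: float      # predicted delay under a candidate traffic plan
    cost_per_minute: float    # assumed "social cost" of delaying this passenger
    paid_priority: float = 0  # optional payment for special treatment


def total_delay(trips):
    """Objective A: minimize raw congestion, treating every minute the same."""
    return sum(t.delay_minutes for t in trips)


def social_cost(trips):
    """Objective B: weight each passenger's delay by its assumed social cost."""
    return sum(t.delay_minutes * t.cost_per_minute for t in trips)


def priority_adjusted_cost(trips):
    """Objective C: let payments buy down a passenger's effective delay cost."""
    return sum(max(t.delay_minutes * t.cost_per_minute - t.paid_priority, 0)
               for t in trips)


# Two candidate traffic plans for the same three travelers.
plan_1 = [Trip("ambulance", 2, 50), Trip("commuter", 10, 2), Trip("tourist", 10, 1)]
plan_2 = [Trip("ambulance", 8, 50), Trip("commuter", 5, 2), Trip("tourist", 5, 1)]

# Objective A prefers plan_2 (18 vs. 22 total minutes); Objective B prefers
# plan_1 (130 vs. 415), because it protects the high-cost trip.
print(total_delay(plan_1), total_delay(plan_2))
print(social_cost(plan_1), social_cost(plan_2))
```

Same roads, same vehicles, different values baked into the objective.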

In the future, however, we will be able to make finer discriminations about the identities, destinations, and activities of individual passengers. Armed with this information, would you place some folks in the fast lane and stick others in slower ones? Perhaps a woman on her way to a business meeting should get priority over a woman who is attending her son’s music recital. Or should it be the other way around? The decisions don’t end there. Suppose only one of the drivers is going to make her event on time and the other will arrive too late even if she speeds. Should the smart traffic management system determine who gets to go and inform the other person to stay home? Over time, these sorts of decisions can be expected to occur frequently.

Traffic management is a form of social planning. The decisions made in any single instance of solving the trolley problem, or in any of the other scenarios we’ve noted, reflect broader governing principles and ethical logics embedded in technology. These decisions aggregate and, over time, become social patterns. So don’t be fooled when engineers hide behind technical efficiency and claim to be making no moral decisions. “I’m just an engineer” isn’t an acceptable response to ethical questions. When engineered systems allocate life, death and everything in between, the stakes are inevitably moral.