Should Self-Driving Cars Make Life or Death Decisions for Us?
As self-driving cars become more common, autonomous vehicles are expected to make moral tradeoffs for us.
Pull the lever, and kill one. Stay on your original path, and kill five. The trolley dilemma is one of the most fundamental yet complex questions within ethics. Both sides are extensively debated, but when applied to driving, the problem sparks even more questions. Imagine you're driving down a road and your brakes have failed. You have two choices: swerve to the right and put yourself at risk, or continue straight and hit pedestrians. At first glance, swerving seems like the obvious choice. How could you ever hit innocent pedestrians? But what if there are passengers in the car with you? As for the pedestrians, does it matter if they're young, old, pregnant, athletic, a doctor, or a thief? How many pedestrians are you willing to sacrifice? Our decisions change from circumstance to circumstance and person to person. However, as self-driving cars become more prevalent, autonomous vehicles are expected to make these moral tradeoffs for us.
Our ethical decisions are supposedly impossible to replicate in computers. Yet with the rise of artificial intelligence (AI) and the increasing popularity of self-driving cars, people must consider how AI will react to trolley dilemma-like situations and how we expect it to react. Researcher Edmond Awad of the University of Exeter, who led MIT's Moral Machine project, proposed that a viable way to approach this problem is to crowdsource moral preferences from large populations and apply that data to the algorithms of self-driving cars.
However, quantifying ethical decisions is easier said than done. Deep learning is meant to mimic the way our brains work by using a neural network, allowing AI to draw on past precedents and "learn" from them. Using deep learning technologies, we can develop self-driving cars that learn and grow from the decisions of their owners and improve with every new input. Surprisingly, the owner decisions that deep learning relies on can vary vastly from country to country. The researchers at MIT demonstrated this distinction by comparing responses from participants in 166 countries, and the differences were drastic in some cases. For example, Brazilians were 66 percent less likely to spare humans than to spare animals, but they chose to save lawful pedestrians twice as often as Americans.
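To make the mechanism concrete, here is a minimal sketch of the idea, not the Moral Machine's actual model: a tiny logistic model (a stand-in for the much larger neural networks a real system would use) fit to a handful of hypothetical crowdsourced dilemmas. The features, labels, and weights are invented purely for illustration.

```python
import numpy as np

# Toy, hypothetical training data: each row is one crowdsourced dilemma,
# encoded as [number of pedestrians, number of passengers, crossing legally (1/0)].
# The label is 1 if the respondent chose to swerve (risk the passengers), else 0.
X = np.array([
    [5, 1, 1],
    [1, 4, 1],
    [3, 1, 0],
    [2, 2, 1],
    [1, 1, 0],
    [4, 1, 1],
], dtype=float)
y = np.array([1, 0, 0, 1, 0, 1], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single-layer logistic model trained by gradient descent on cross-entropy loss.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=X.shape[1])
b = 0.0
lr = 0.1

for _ in range(2000):
    p = sigmoid(X @ w + b)           # predicted probability of "swerve"
    grad_w = X.T @ (p - y) / len(y)  # gradient with respect to the weights
    grad_b = np.mean(p - y)          # gradient with respect to the bias
    w -= lr * grad_w
    b -= lr * grad_b

# Score a new dilemma: four legal pedestrians ahead, one passenger in the car.
new_case = np.array([4.0, 1.0, 1.0])
print("P(swerve) =", round(float(sigmoid(new_case @ w + b)), 3))
```

Every new survey response becomes another training row, which is the sense in which such a system "improves with every new input."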
The behavioral expectations of self-driving cars are unique not only from country to country, but also within communities, due to cultural and political differences. I decided to focus on this distinction by administering a poll within Stuyvesant, created with the help of researcher Sohan Dsouza, to identify the most striking differences between the student body and the average American. The questions focused on the pedestrians' gender, age, and whether they were crossing legally. For instance, when Stuyvesant students were asked to choose between a man crossing the street legally and a runner crossing illegally, only 25 percent chose to sacrifice the legal pedestrian; most chose to sacrifice the jaywalker instead. The distinction between Stuyvesant students and the average American is surprising: a whopping 70 percent of Americans chose to sacrifice the legal pedestrian over the jaywalker.
However, the questions in the poll were not easy, and the one that took respondents the longest asked them to choose between sacrificing two pregnant women or four children. Stuyvesant students were 10 percent more likely to save the children than the average American. From these distinctions alone, it's clear that the ethical decisions we expect self-driving cars to make cannot be generalized. These decisions depend not just on geographical location, but also on education and moral upbringing.
This dependency raises the question: to what extent should we prioritize the morals of the consumer over the morals of the general population? The AI of a self-driving car should be customized based on the owner's morals: the car's behavior should produce the same outcome as if the owner were driving it themselves. The MIT Moral Machine was a deeper look into Asimov's Laws of Robotics, which state that a robot cannot injure a human being and must obey the orders given to it by humans, except when those orders would require injuring a human being. While Asimov's laws were originally conceived within a science fiction frame of reference, we can use our own discussion of morals to decide the extent to which we allow the AI of self-driving cars to kill someone and whether or not doing so is a direct attack on ethics.
The owner of the car should have total control over the car's moral decisions, despite the breach of Asimov's laws. This arrangement can be accomplished in a number of ways, namely by surveying owners when they first purchase the vehicle and programming the car's AI to abide by their preferences. While critics would rather adopt the morality of the general consensus through polls such as the one the Moral Machine conducted, it's clear that because morality differs so vastly, it's unfair to force drivers into decisions they would not have made themselves simply because of a population preference. However, before we put our morality in the hands of AI, we must determine where our morality lies and whether or not we're confident in our own decisions. Self-driving cars should not make judgments about the value of people's lives beyond what is objectively true.
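As a purely hypothetical illustration of the survey-at-purchase idea, not any manufacturer's actual system, the sketch below stores an owner's survey answers as a small preference profile and consults it in a forced-choice situation; every name and weight is invented.

```python
from dataclasses import dataclass

@dataclass
class OwnerProfile:
    """Hypothetical preference weights collected from a purchase-time survey."""
    weight_passengers: float   # how strongly the owner protects people in the car
    weight_pedestrians: float  # how strongly the owner protects people outside it
    lawfulness_bonus: float    # extra weight for pedestrians crossing legally

def choose_action(profile: OwnerProfile, passengers: int,
                  pedestrians: int, pedestrians_legal: bool) -> str:
    """Return 'swerve' (risk the passengers) or 'stay' (risk the pedestrians)
    by comparing weighted harms under the owner's stated preferences."""
    pedestrian_cost = pedestrians * (
        profile.weight_pedestrians +
        (profile.lawfulness_bonus if pedestrians_legal else 0.0)
    )
    passenger_cost = passengers * profile.weight_passengers
    return "swerve" if pedestrian_cost > passenger_cost else "stay"

# Example: an owner who weights pedestrians slightly above passengers.
profile = OwnerProfile(weight_passengers=1.0,
                       weight_pedestrians=1.1,
                       lawfulness_bonus=0.3)
print(choose_action(profile, passengers=1, pedestrians=3, pedestrians_legal=True))
# -> 'swerve'
```

A design like this keeps the tradeoff explicit and auditable: the car acts on the preferences its owner has stated rather than on judgments it invents about the value of individual lives.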