The Trolley Problem is a thought experiment in which an individual must weigh the value of human life, deciding whether it is worth sacrificing one person in order to save five. In our changing world, a similar problem arises with the innovation of self-driving cars. In the case of an unavoidable accident, whom does the car decide to kill: its owners, or others? Does this decision change depending on the number of people involved, their age, or their gender? When it comes time to program this new era of self-driving vehicles, we must create some formula to determine the best action to take, and here there are two main theories: Utilitarianism and Kantian Deontology.
Utilitarianism dictates that actions are right insofar as they are useful or benefit the majority (Mill, 464). Kantian Deontology, on the other hand, tells us that the best action does not always align with the best option for the majority (Kant, 153-154). In matters of life and death, we cannot value one individual’s life over another’s; therefore, in programming self-driving cars, a utilitarian approach must be used to determine what the car should do in such a situation.
Do some lives have a greater value than others? How can we determine the number of lives an individual is worth, and should the car’s occupants be given a higher value when deciding who should be killed? Suppose your self-driving car is heading straight into a crosswalk and will kill five innocent pedestrians. Should the car save you and kill the five pedestrians, or should it run into a telephone pole, killing you but saving the five innocent people? Kant would argue that we would want to save ourselves because we value our own life more than the lives of five strangers. Utilitarianism, on the other hand, would disagree with this outcome. Considering the same situation, utilitarianism would argue that the value of five lives outweighs the value of one; therefore, the car should be programmed to run into the telephone pole, killing its occupant and saving the five innocent pedestrians.
Who are we to say that one individual’s life has more value than another’s, and even more so, how many lives is an individual worth? In the case of an unavoidable accident, the car should take whatever action results in the fewest deaths. Mill’s Greatest Happiness Principle dictates that the best action is the one from which the greatest happiness can be derived: in other words, the greatest good for the greatest number of people. In deciding whether to kill the car’s individual occupant or five innocent pedestrians, the simple answer is that five lives outweigh one. The law already dictates that a car must yield to pedestrians, since the driver has more control over the situation. In this trolley-problem-like situation, the pedestrians are innocent and are forced into the dilemma. If a human driver must yield to pedestrians, then so must a self-driving car in its assessment of human life; however, the number of lives in question must also be considered.
Is a utilitarian perspective always the best choice? Are there not situations where it is difficult to determine which action results in the greatest good for the greatest number of people? What if the car’s occupant is worth more than the pedestrians? If the “driver” is a doctor who saves lives daily, isn’t their life worth more than just one? By saving the doctor, you indirectly save countless lives in the future through the doctor’s aid. In the context of an unavoidable accident with a split-second decision to make, however, this perspective should not be taken, as we do not know enough about the lives of the pedestrians to determine whose life is worth more.
Therefore, the car must be programmed to take things at face value: the number of lives taken or saved in that particular event. Another possible exemption to consider is when the occupants of the car include young children: should their lives be granted a higher value? I believe that young children should be given some additional value in comparison to the average person, but not enough to make them worth more than one person.
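The rule described so far, minimize the weighted number of deaths, with a modest extra weight for young children, could be sketched in code roughly as follows. The weights and names here are purely illustrative assumptions for the sake of argument, not a real autonomous-vehicle system; the child weight of 1.5 is one hypothetical value satisfying the constraint above (more than one average person, less than two).

```python
# A minimal sketch of the utilitarian decision rule described above.
# All weights and names are hypothetical illustrations, not a real API.

ADULT_WEIGHT = 1.0
CHILD_WEIGHT = 1.5  # assumption: extra value, but less than one full extra person


def harm(people):
    """Total weighted loss if everyone in `people` dies.

    `people` is a list of "adult" / "child" labels.
    """
    return sum(CHILD_WEIGHT if p == "child" else ADULT_WEIGHT for p in people)


def choose_action(outcomes):
    """Pick the action whose fatalities carry the lowest total weight.

    `outcomes` maps an action name to the list of people that action kills.
    """
    return min(outcomes, key=lambda action: harm(outcomes[action]))


# The crosswalk scenario: swerving kills the occupant, continuing kills five.
outcomes = {
    "swerve_into_pole": ["adult"],       # occupant dies
    "continue_straight": ["adult"] * 5,  # five pedestrians die
}
print(choose_action(outcomes))  # swerve_into_pole
```

On this accounting, five pedestrians carry a weight of 5.0 against the occupant’s 1.0, so the car swerves, which is exactly the face-value counting the essay argues for.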
Finally, we must also consider whether the “driver’s” life has any additional value. From the standpoint of a buyer, why purchase a self-driving car that guarantees your death whenever the car values the pedestrians over you? This is where determining value becomes tricky, as we move from the value of an individual’s life to weighing it against the lives of others.
Overall, self-driving cars are more capable than human drivers, as they can remove a majority of the external factors that affect driving, reacting faster and noticing dangers sooner than humans could, which leads to an overall safer driving experience.
Self-driving cars have proven safer than the average driver because they can recognize and react to obstructions and hazards before humans can. When an accident is unavoidable and will result in the death of either the car’s occupant(s) or others, we must use a utilitarian perspective to determine who should be saved. Firstly, it follows the principle of the greatest good for the greatest number of people: the decision comes down to numbers, evaluating whether the count of lives saved is reason enough to kill the car’s occupants.
The bottom line is that five lives are worth more than one, and as such the lives of the many should be valued over the life of the one (the car’s occupant). Secondly, it is nearly impossible to determine an individual’s value beyond face value, as we do not know the lives the pedestrians lead; therefore, no individual should be worth multiple lives. In any event, when the new era of self-driving cars is programmed, they should adopt a utilitarian perspective, making decisions based on the number of lives saved in comparison to those sacrificed.