Goldstein: Ethical Artificial Intelligence

Self-driving cars are fast becoming a reality. While the safety ramifications of these cars are generally considered positive because of the unpredictable irrationality of human driving, there are moral questions about their potential actions. What if the car were confronted with the choice between killing five pedestrians or ramming into a wall, saving the five but killing its passenger? A recent paper posted to arXiv, the online preprint repository, deals with this topic, a close relative of the trolley problem, which pits the more passive killing of five against the active killing of one. It concludes that manufacturers and psychologists will have to collaborate on instituting proper guidelines for the cars' actions in such a scenario. If this conclusion sounds unsatisfactory, it's because it is. It gives us no insight into what the car should do if confronted with this choice.

It does, however, give us one interesting piece of data. The paper's team surveyed people online, and a majority of respondents chose the utilitarian way out: the car should simply try to minimize the total number of deaths. The number of people who advocated for this, however, dropped considerably when they were asked to imagine themselves in the driver's seat. So, how do we decide? How should we decide?

First, we must consider whether there is any difference in the inherent value of the lives inside and outside of the car. The principled answer would be no; of course, all lives should be weighed the same. But if we take a more consequentialist look, we may see some change. What if, for example, the person in the car is a country's president? Should the car then kill her to spare the five people? I would guess that we are a little more reserved about electing to sacrifice the public figure than we are about sacrificing Joe Average. But even if the person in the car is not a public figure, is it the car's duty, as that person's car, to save that person? Or is it the car's mandate, as a safer driving alternative, to take the more secure route and merely try for the smallest number of casualties?

This question is one of responsibilities, and like the trolley problem before it, one of active versus passive decision-making. While a utilitarian would likely see no difference between the deaths of different people, that is only because she would be concerned with the end result. I contend, however, that the motivation behind the action is at least as important. If the car were already on a route that would kill five people, and thus passive in its movement, would that really be worse than the car taking active control in order to kill one person? This is very much like the distinction between manslaughter and murder. Yet then another conundrum arises: if the car knows that it is on a collision course with the five people, as we must assume an autonomous car would, it faces a choice between two equal options rather than a choice versus a non-choice. If the car is going to have to actively do things to drive anyway, what does it matter whether one of those things is turning the steering wheel? For the vehicle to hit the five people, it would have to actively maintain its course, its gear, its engine speed and a host of other mechanics. Whereas the human in the trolley problem could, hypothetically, choose not to affect the situation at all, the car here has no choice but to affect the situation. It is the situation.

I pose a challenge to even the strictest of utilitarians: what if the choice were between killing one pedestrian and killing the sole passenger in the car? This hypothetical erases any consideration of the greatest good for the greatest number, and it is nearly, if not totally, unresolvable. So I propose this: let's take a modified utilitarian stance. Let's give the car a reliable way to calculate the probability of death for each of the people whom it might choose to kill. If the one pedestrian were going to be clipped rather than hit head-on, but the passenger would be instantly vaporized upon impact with a wall, the probability of death for the pedestrian would likely be lower. Instead of the greatest good for the greatest number, let's go with the greatest total probable good.
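To make that modified stance concrete, here is a minimal sketch, written in Python, of what such a decision rule might look like. Every maneuver name and probability figure in it is hypothetical; in a real car, the estimates would have to come from its sensors and crash models, and producing them reliably is precisely the hard part.

# Sketch of the "greatest total probable good" rule: choose the maneuver
# whose expected number of deaths is lowest. All names and numbers below
# are hypothetical, for illustration only.

def expected_deaths(death_probabilities):
    # Sum of per-person probabilities of death for one maneuver.
    return sum(death_probabilities)

def choose_maneuver(options):
    # Return the maneuver with the smallest expected number of deaths.
    return min(options, key=lambda name: expected_deaths(options[name]))

if __name__ == "__main__":
    # Hypothetical scenario: swerving clips one pedestrian (low chance of
    # death); staying the course hits a wall and almost certainly kills
    # the sole passenger.
    options = {
        "swerve_toward_pedestrian": [0.2],   # one pedestrian, likely clipped
        "stay_course_into_wall": [0.95],     # one passenger, head-on impact
    }
    print(choose_maneuver(options))  # prints "swerve_toward_pedestrian"

Under this rule, the car swerves, because a one-in-five chance of killing the pedestrian is a smaller expected loss than a near-certain death for the passenger; with the numbers reversed, it would protect the pedestrian instead.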

The bottom line is, if these cars are going to be smart enough to take us places, they’d better be smart enough to make decisions not even the smartest of humans could make on a split-second basis. That’s the artificial intelligence to which humanity should aspire.