There is an old thought experiment called the Trolley Problem that’s become central to the development of autonomous cars. In the context of self-driving cars, it sets up a scenario where an autonomously operated vehicle approaches, say, a nun herding a group of orphans from a burning hospital. There is no time to stop or room to maneuver around the group. The car must therefore choose whether to run over the nun and orphans, likely killing them, or swerve into the burning building, likely killing the passengers.
What should the car do?
On October 7th, Christoph von Hugo, manager of driver assistance safety systems at Mercedes-Benz, inadvertently became the first significant player at a car manufacturer to take a position on the Trolley Problem. According to von Hugo, the self-driving car should run over the nun and the children.
Here’s his statement from the Paris Auto Show, as quoted in Car and Driver:
“If you know you can save at least one person, at least save that one. Save the one in the car. If all you know for sure is that one death can be prevented, then that’s your first priority.”
To be clear, this is not Mercedes’s official position on the Trolley Problem. In fact, M-B’s parent company, Daimler, issued a statement that walked von Hugo all the way back to, presumably, a dark woodshed in Stuttgart. But if that were its position, it would make perfect sense coming from the company that has been making arguably the safest and most popular luxury sedan in the world since the 1950s (that would be the Mercedes-Benz S-Class). It would also answer the obvious yet unspoken question in the minds of everyone for whom the S-Class is the gold standard: In an autonomous future, will the S-Class (and its competitors) protect its passengers, or sacrifice them for some idea of a greater good?
Mercedes’s survival absolutely depends on protecting its passengers above all others, because no one will get in an S-Class that doesn’t. As for the consequences, well, let the insurance companies figure it out, just as they do with human drivers today. Autonomous automotive altruism only has one outcome: dead brands.
Mr. von Hugo’s position was a brave one, because sometimes educating consumers is more difficult than developing products for them. But once that statement is out there, how does one convince a clickbait-driven media that Mercedes doesn’t build killer cars? On the other hand, how does one appeal to first-world luxury customers who use high-ticket purchases to advertise their moral superiority?
His statement also highlighted a truth painful to armchair critics: in the real world, there is no Trolley Problem. There never was one.
For more clarity, we turn to science fiction. The absurdity of the Trolley “problem” is best explained in the 2009 Star Trek reboot. Young Captain Kirk, faced with an unwinnable training simulation called the Kobayashi Maru, wins by hacking the simulation itself.
“I don’t believe in the no-win scenario,” he later explains to Spock.
That most car crashes are mistaken for unforeseeable, no-win scenarios is largely a function of the language used to describe them. Calling them “accidents,” for example, obscures the fact that 56 percent of all incidents involve a single vehicle, and that many agencies agree almost all crashes, up to 94 percent, stem from driver error somewhere down the line. It also flatters most people, who consider themselves good if not great drivers, even though they aren’t. Every time we call a car crash an “accident,” it reinforces the idea that the blame lies not with driver error (the most likely scenario) but with forces beyond one’s control. We’re even at a point where drivers will blame the weather or road conditions, as if those are acts of God rather than factors a driver must take into account from behind the wheel. When it comes to car crashes, most of us have been learning the wrong lessons, if we’ve learned anything at all.
But back to the Trolley Problem, or rather the lack of one.
Consider the rarity of Trolley Problems in real life. When was the last time you heard of a human driver forced to choose between the burning hospital and the nun and orphans, or something with equally clear choices and similarly dire stakes? Let’s suppose such a problem did occur in the real world; in order to choose, the driver would have to understand that he had a choice at all, which means the driver must:
Properly assess the situation (this assumes both very good eyesight and near-instantaneous information processing)
Know the exact braking distance of the car, factoring in degradation of said distance based on the current conditions of the tires, brake rotors, pads, and fluids
Calculate that braking distance exceeds the distance to either the group or the building
Know the handling characteristics of the car during emergency maneuvers
Calculate that the car cannot avoid either the group or the building
Weigh his options
Make a moral choice
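Of the steps above, the braking-distance comparison is at least the calculable part. A back-of-envelope sketch of what that computation looks like, using standard kinematics; the friction coefficient and reaction time below are illustrative assumptions, not figures from any manufacturer:

```python
# Rough stopping-distance check: can the car stop before the obstacle?
# Standard kinematics approximation; friction and reaction-time values
# are illustrative assumptions, not measured data.

def stopping_distance_m(speed_kmh, reaction_time_s=1.5, friction=0.7):
    """Reaction distance plus braking distance on dry pavement."""
    g = 9.81                        # gravity, m/s^2
    v = speed_kmh / 3.6             # convert km/h to m/s
    reaction = v * reaction_time_s  # distance covered before braking begins
    braking = v * v / (2 * friction * g)
    return reaction + braking

def can_stop(speed_kmh, obstacle_distance_m):
    return stopping_distance_m(speed_kmh) < obstacle_distance_m

print(round(stopping_distance_m(50), 1))  # roughly 34.9 m at 50 km/h
print(can_stop(50, 30))                   # False: no time to stop, no real "choice"
```

Even this toy version needs speed, surface grip, and reaction time as inputs, which is precisely the information a panicked human driver does not have in the half-second that matters.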
There’s no guarantee that Lewis Hamilton or Ken Block, let alone a very well-trained civilian, would be able to make that choice in real time. In the real world, whether the nun and kids go splat or the driver tries to pull a Hey Kool-Aid! through a flaming brick wall is immaterial, because the “choice” will probably not have been a choice at all, but simply a panicked reaction based on instinct.
From behind the wheel, there is no Trolley Problem; it remains a thought experiment. An autonomous car could in fact make the necessary calculations in time, and then execute a pre-determined decision based on an if-then program. But it’s still a moot point, because there’s only one decision to be made: sorry, sister, and sorry, kiddos.
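That “pre-determined decision based on an if-then program” reduces, under an occupant-first policy, to something almost trivially simple. A hypothetical sketch; the policy, names, and risk numbers here are mine, not Mercedes’s:

```python
# Hypothetical occupant-first collision policy, sketched as the kind of
# if-then logic described above. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float   # estimated probability of occupant fatality, 0..1
    bystander_risk: float  # estimated probability of bystander fatality, 0..1

def choose(maneuvers):
    """Occupant-first: minimize occupant risk; bystander risk only breaks ties."""
    return min(maneuvers, key=lambda m: (m.occupant_risk, m.bystander_risk))

options = [
    Maneuver("brake straight, hit group", occupant_risk=0.05, bystander_risk=0.9),
    Maneuver("swerve into building", occupant_risk=0.8, bystander_risk=0.0),
]
print(choose(options).name)  # "brake straight, hit group"
```

The point of the sketch is that once the policy is occupant-first, the bystander column never changes the outcome; the “moral choice” was made at design time, not at the moment of the crash.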
Despite potential software glitches and developer blind spots, self-driving cars will be “better” drivers than human beings by virtue of the fact that they cannot make stupid choices; the software can’t get drunk, or choose to text or watch a movie while driving, or go 36 hours without sleep and then get behind the wheel. This is why some experts predict that 100 percent penetration of self-driving cars would eliminate 90 percent of car crashes. It’s hard to argue that this scenario isn’t a Good Thing. But the only way to get there is with overwhelming market penetration of self-driving cars, and the only way for that to happen is to make those vehicles protective of their occupants in every scenario, without exception, because the market won’t support them otherwise.
Humans will still want to drive, of course, especially in these early stages, and so there will exist a timeframe that we can call the Trolley Gap—the space where the predictable (self-driving cars) and unpredictable (humans) meet. As a species, we don’t like options taken away from us, so mandating the use of self-driving cars is a hard sell; however, if autonomous cars can set a certain, provable safety standard, it might make sense that the licensing requirements would therefore require a human driver to prove a similar competency—which raises the safety bar for human-driven cars, as well.
In time, a confluence of technologies, laws, and conditions could allow self-driving cars to avoid the Trolley Problem altogether. Take the same Trolley Problem scenario, but add connected cars and infrastructure-to-vehicle communication: suddenly the car and its occupants are warned of the burning building before turning the corner, and the car slows down or stops altogether, by itself if the human doesn’t make the necessary adjustments. We might actually see a massive, game-changing reduction in traffic-related fatalities in our lifetime. It’s reason to be optimistic.
Just a thought, but one that’s worth the current autonomous experiments.