Self-driving Cars and the Trolley Dilemma

The hype around self-driving cars, and Artificial Intelligence being on this new frontier, needs no introduction. But what is the Trolley Dilemma? I’ve had the blessing (or curse, at times) of too much time to contemplate the Trolley Dilemma, thanks to my exposure to high school debating, but I’ll try to give the ideal rundown to the best of my ability.

The most basic version of the dilemma, also known as “Bystander at the Switch” or “Switch”, goes thus:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:
1. Do nothing and allow the trolley to kill the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option? Or, more simply: What is the right thing to do?
There are probably two reactions here, depending on whether you’ve been exposed to this dilemma before: “Wow, that took a dark turn; I only showed up here to learn how self-driving cars and trolleys are related” or “Okay, I know what the Trolley Dilemma is, but how does it relate to self-driving cars?”
For that we first need to take self-driving cars out of the equation and tackle the Trolley Dilemma head-on to understand the breadth of the issue. You can do nothing and allow five individuals to die, or make an active choice to kill one person to save five. The decision you come to is based on your moral compass, and it can be psychologically taxing to come up with a justification, but the easiest way to go about it is from a utilitarian perspective (Utilitarianism: utility is king) and say:
“The only thing necessary for the triumph of evil is for good men to do nothing,” so I will make the choice to save five lives at the cost of one. I don’t know any of them personally, so More == Better.
The Trolley Dilemma is often criticized for being an unrealistic situation (among other criticisms; click the link below to read more).
But there might be just such a case: self-driving cars.

The pitch is quite similar to the Trolley Dilemma, but you’re not a bystander anymore. You’re the person training the AI engine that will make the split-second choice. Speaking of choices, they are the following:
- Continue driving, prioritizing your passengers’ safety.
- Swerve into the barrier, saving the pedestrians but killing your passengers.
Phew. Not the greatest options here, but you can try the classic “Utility is King” argument: if there are more pedestrians, we make the decision to save them; if not, we save the passengers.
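Since I’m a novice coder anyway, here’s what that “Utility is King” rule could look like as code. This is a minimal Python sketch with entirely hypothetical names (choose_action and the action strings are mine); no real autonomous-driving stack reduces to a head count:

```python
def choose_action(num_passengers: int, num_pedestrians: int) -> str:
    """A naive 'Utility is King' policy: save whichever group is larger.

    Toy illustration only; real driving stacks are nothing like this.
    """
    if num_pedestrians > num_passengers:
        # More lives on the road than in the car: sacrifice the passengers.
        return "swerve_into_barrier"
    # Ties and passenger-heavy cases default to protecting the passengers.
    return "continue_driving"


# More == Better in action:
print(choose_action(num_passengers=2, num_pedestrians=5))  # swerve_into_barrier
print(choose_action(num_passengers=1, num_pedestrians=1))  # continue_driving
```

Notice the tie case: one life versus one life falls straight through to protecting the passenger, which is exactly where the next question bites.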
But what if it’s a car with a single driver, and a child just ran into the street chasing a wayward ball? One life in comparison to one life?
Does the company program its AI to protect the passenger, because that was their customer? Or do they hope to take the car’s much-boasted safety features for a spin by swerving into the barrier, hopefully not killing their client? There are arguments for and against each side, depending on where your moral compass is pointing.
My choice (at least as a tech enthusiast and novice coder): don’t program it; it’s a lose-lose in every case. Either choice is right, but also wrong. Program the car to honk twice and alert the driver to take over the wheel, pushing the burden of making that choice onto the driver. Drivers have been carrying that burden since the Model T became accessible to the masses, and they’ll continue to do so.
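Here’s a minimal sketch of that “honk twice and hand it back” fallback, again in Python. The car object and everything on it (honk, alert_driver, driver_has_control, emergency_brake) are interfaces I’m assuming purely for illustration, not any real vendor’s API:

```python
import time

def punt_to_human(car, reaction_window_s: float = 1.5) -> None:
    """Refuse to pick who dies: warn everyone, then hand control to the driver.

    `car` is a hypothetical interface assumed to expose honk(),
    alert_driver(), driver_has_control(), and emergency_brake().
    """
    # Honk twice to warn the pedestrians.
    car.honk()
    car.honk()
    # Push the burden of the choice onto the human behind the wheel.
    car.alert_driver("Take over the wheel NOW")
    # Give the driver a short (assumed) reaction window to respond.
    deadline = time.monotonic() + reaction_window_s
    while time.monotonic() < deadline:
        if car.driver_has_control():
            return  # burden successfully transferred
        time.sleep(0.05)
    # No response in time: brake hard rather than choose a victim.
    car.emergency_brake()
```

The design choice here is that the fallback is a physics action (brake), never an ethics action; the code itself never weighs one life against another.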
Here’s a TL;DR on how well AI cars will handle this dilemma:
