Self-driving Cars and the Trolley Dilemma

Photo by Bram Van Oost on Unsplash

The hype around self-driving cars, and Artificial Intelligence pushing this new frontier, needs no introduction. But what is the Trolley Dilemma? I’ve had the blessing (or, at times, the curse) of spending too much time contemplating the Trolley Dilemma thanks to my exposure to high-school debating, but I will try to give the best rundown I can.

Trolley problem diagram by McGeddon (original) and Zapyon (vector), CC BY-SA 4.0, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=67107784

The most basic version of the dilemma, also known as “Bystander at the Switch” or “Switch”, goes thus:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

1. Do nothing and allow the trolley to kill the five people on the main track.

2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?

Depending on whether you’ve been exposed to this dilemma before, your reaction is probably one of two: “Wow, that took a dark turn, I only showed up here to learn how self-driving cars and trolleys are related”, or “Okay, I know what the Trolley Dilemma is, but how does it relate to self-driving cars?”

For that, we first need to take self-driving cars out of the equation and face the Trolley Dilemma head-on to understand the breadth of the issue. You can do nothing and allow five people to die, or make an active choice to kill one person to save five. The decision you reach depends on your moral compass, and it can be psychologically taxing to justify. The easiest route is the utilitarian perspective (Utilitarianism: utility is king) and say:

“The only thing necessary for the triumph of evil is for good men to do nothing” (a line often attributed to Edmund Burke), so I will make the choice to save five lives at the cost of one. I don’t know any of them personally, so more == better.

The Trolley Dilemma is often criticized for being an unrealistic situation, among other objections.

But there might be just such a case: Self-Driving Cars

Image: Edmond Awad et al., 2018 (the Moral Machine experiment)

The setup is quite similar to the Trolley Dilemma, but you’re not a bystander anymore. You’re the person training the AI engine that will make the split-second choice. Speaking of choices, they are the following:

  1. Continue driving, prioritizing your passengers’ safety.
  2. Swerve into the barrier, saving the pedestrians but killing your passengers.

Phew. Not the greatest options here, but you can try the classic “Utility is King” argument: if there are more pedestrians, we decide to save them; if not, we save the passengers.
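The “Utility is King” rule above can be sketched in a few lines of Python. This is a toy illustration only; nothing in a real autonomous-driving stack looks like this, and the function name and its inputs are invented for the example:

```python
# Toy sketch of the pure "Utility is King" rule described above.
# Hypothetical function and inputs, purely for illustration.

def utilitarian_choice(num_passengers: int, num_pedestrians: int) -> str:
    """Pick whichever action kills fewer people; ties favor the passengers."""
    if num_pedestrians > num_passengers:
        return "swerve"    # sacrifice the passengers to save more pedestrians
    return "continue"      # stay the course, protecting the passengers

print(utilitarian_choice(1, 5))  # swerve
print(utilitarian_choice(2, 1))  # continue
```

Note how the tie case has to be decided somehow; here it quietly defaults to the passengers, which is exactly the kind of hidden moral judgment the rest of this article is worried about.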

But what if it’s a car with a single driver, and a child has just run into the street chasing a wayward ball? One life weighed against one life?

Does the company program its AI to protect the passenger, because that is its customer? Or does it put the car’s much-touted safety features to the test by swerving into the barrier and hoping not to kill its client? There are arguments for and against each side, depending on where your moral compass points.

My choice (at least as a tech enthusiast and novice coder): don’t program it. It’s a lose-lose in every case; either choice is right, yet also wrong. Program the car to honk twice and alert the driver to take over the wheel, pushing the burden of that choice onto the driver. Humans have carried that burden since the Model T became accessible to the masses, and they can continue to carry it.
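The “honk twice and hand over” idea above can likewise be sketched as a toy handler. Again, this is a hypothetical illustration, not how any real vehicle firmware works, and every name here is made up:

```python
# Toy sketch of the "honk twice and hand over" approach argued for above.
# Hypothetical names; no real vehicle system is modeled here.

def on_unavoidable_dilemma(alert) -> str:
    """Refuse to pick a victim: warn the human and return control."""
    alert("HONK")
    alert("HONK")            # honk twice, as suggested above
    return "manual_control"  # the human driver now carries the choice

events = []
mode = on_unavoidable_dilemma(events.append)
print(events)  # ['HONK', 'HONK']
print(mode)    # manual_control
```

The design choice here is deliberate: the code emits warnings and changes mode, but never ranks lives, which is the whole point of the argument.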


Tech Enthusiast, Noob Coder, Avid Learner

SahasRenuja
