Automated & Connected

Self-driving cars and the challenge of designing ethical algorithms

Pacific Standard spoke with a philosopher who's trying to code ethical algorithms into autonomous vehicles.

An article on the Pacific Standard website offers an interesting perspective on the ethical dilemmas that self-driving cars will have to resolve programmatically. Jack Denton, writing for Pacific Standard, notes that we are probably familiar with the dilemma, a favorite of silent films and freshman philosophy courses: A train hurtles along a track, its freight cars rattling ominously in the wind. Up ahead, a railroad spur splits the path in two directions—but both routes augur death.

Denton goes on to describe the ethical dilemma in detail:

“On one side of the fork, a group of five workers are absorbed in the repetitive labor of track maintenance, apparently unaware of the rapidly approaching locomotive. If the train continues along its current path, they will all be crushed. On the opposite track, a lone, similarly oblivious laborer is performing the same task. He is safe, for now—unless someone were to reroute the train.”

So, what should happen? How do you determine who dies?

In this scenario, Denton writes, it's too late for the brakes to have any effect. The only possible recourse in these waning moments is a railroad switch that would alter the train's path from the five-man track to the one-man track.

From an AI standpoint it's a difficult choice. Throwing the switch would save the five men's lives, Denton writes, but only at the expense of the lone laborer's. The question: Can you justify killing one person to save five?

This is the classic “Trolley Problem.”

"The Trolley Problem"—as the above situation and its related variations are called—is a mainstay of introductory ethics courses, Denton notes, where it is often used to demonstrate the differences between utilitarian and Kantian moral reasoning.

Utilitarianism (a form of consequentialism) judges the moral correctness of an action solely by its outcome, Denton writes.

According to the article, a utilitarian should throw the switch. The logic seems straightforward: in terms of outcomes, one death is better than five.
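To make that calculus concrete, here is a minimal, purely illustrative sketch of a utilitarian decision rule in Python. The function name `utilitarian_choice`, the action labels, and the death counts are hypothetical assumptions for illustration, not anything from Denton's article or a real vehicle controller.

```python
# Illustrative sketch of an outcome-based (utilitarian) decision rule.
# The action names and death counts are hypothetical, not from the article.

def utilitarian_choice(outcomes: dict[str, int]) -> str:
    """Pick the action whose outcome causes the fewest expected deaths."""
    return min(outcomes, key=outcomes.get)

# Encoding of the trolley scenario: expected deaths per available action.
trolley = {
    "stay_on_course": 5,  # five workers on the current track
    "switch_track": 1,    # one worker on the spur
}

print(utilitarian_choice(trolley))  # -> "switch_track"
```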

Interestingly, Denton notes that Kantian, or rule-based, ethics holds that a set of moral principles must be followed in all situations, regardless of outcome, and this might dictate a different choice than the utilitarian calculus would.

A Kantian might not be able to justify switching the track if, say, their moral principles hold that actively killing someone is worse than standing by while someone dies. How will a self-driving car make this choice, and on what basis? It remains a confounding challenge for mobility engineers.
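For contrast, here is an equally hypothetical sketch of how a rule-based constraint could veto the utilitarian choice. The rule encoding (treating any non-default action that causes a death as forbidden “active killing”) is one possible reading of the Kantian position described above, not a definitive implementation.

```python
# Illustrative sketch of a rule-based (deontological) filter on the same
# hypothetical scenario. The rule encoding is an assumption for illustration.

def kantian_choice(outcomes: dict[str, int], default: str) -> str:
    """Follow the rule 'never actively kill', regardless of the body count."""
    # Any action other than the default is an active intervention; if it
    # causes a death, the rule forbids it no matter how many lives it saves.
    permitted = [action for action, deaths in outcomes.items()
                 if action == default or deaths == 0]
    # Among the rule-permitted actions, still prefer the least harmful one.
    return min(permitted, key=outcomes.get)

trolley = {"stay_on_course": 5, "switch_track": 1}
print(kantian_choice(trolley, default="stay_on_course"))  # -> "stay_on_course"
```

Under this encoding the car stays its course and five die, illustrating how the choice of moral framework, not just the sensor data, determines the outcome.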

Read the full article by clicking the link below.

Original Article