To be safe on the road, say Stanford engineers, the algorithms that run self-driving cars must sometimes break the law
The technology of self-driving cars is advancing fast. Autonomous vehicles built at Stanford can already drive competitive race laps and navigate roads. Yet making them safe in real-world traffic, with its infinitely shifting variables, is another story. That’s where engineering meets ethics and where Stanford postdoctoral scholar Selina Pan comes in.
Pan, an engineer in the Dynamic Design Lab at Stanford, is working to teach ethics to autonomous vehicles. To be safe on the road, human drivers sometimes intuitively break the law. When a motorist sees an obstacle in the road ahead, for example, she may avoid it by swerving across the double yellow line into the opposing lane. A self-driving car, however, would require an algorithm that overrides instructions to follow traffic laws.
What should the car be programmed to do? It can come to a dead stop before the obstacle. It can veer across the yellow line. Or it can minimize its trespass into the oncoming lane by skirting the obstacle as closely as it can.
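One common way to frame such a choice is as cost minimization, weighing collision risk against legal compliance. The sketch below is purely illustrative, not the Dynamic Design Lab's actual method: the maneuver names, risk numbers, and weights are all invented for this example.

```python
# Illustrative sketch only: a made-up cost-minimization framing of the
# three options in the article. None of these numbers or names come from
# the Stanford team's work.

MANEUVERS = {
    "full_stop":        {"collision_risk": 0.30, "law_violation": 0.0},
    "cross_centerline": {"collision_risk": 0.05, "law_violation": 1.0},
    "skirt_obstacle":   {"collision_risk": 0.10, "law_violation": 0.3},
}

def choose_maneuver(options, safety_weight=10.0, legality_weight=1.0):
    """Pick the maneuver with the lowest combined cost.

    Weighting safety far above legality encodes the judgment that
    avoiding a collision can justify briefly breaking a traffic law.
    """
    def cost(scores):
        return (safety_weight * scores["collision_risk"]
                + legality_weight * scores["law_violation"])
    return min(options, key=lambda name: cost(options[name]))

print(choose_maneuver(MANEUVERS))  # → skirt_obstacle
```

With these hypothetical weights, the car skirts the obstacle with a minimal trespass into the oncoming lane; changing the weights changes the verdict, which is exactly why the weighting itself is an ethical decision rather than a purely technical one.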
If the right choice is not clear-cut, engineers end up in the position of ethicists: they must make programming decisions that quite literally steer the car through ambiguous situations.
“We need to somehow translate social behavior, ethical behavior, into what happens once the vehicle finally takes full control,” Pan said.
“People often say the technology is solved, but I don’t quite believe that,” mechanical engineering Professor Chris Gerdes, director of the Dynamic Design Lab and the Revs Program at Stanford, told Bloomberg.com. “There’s a lot of context, a lot of subtle, but important things yet to be solved.”
Watch Pan and doctoral candidate Sarah Thornton work on the problem with X-1, a student-built autonomous vehicle in the Revs Program at Stanford.
Watch Gerdes, Pan and others talk about programming ethics in self-driving cars.