[1/5/17] Computer algorithms that control self-driving cars are already making life-and-death decisions for human beings, according to ethicists and technology experts interviewed by Business Insider.
For example, an autonomous vehicle might swerve to avoid a pedestrian and, in doing so, put its on-board passengers in danger. Or it could keep its passengers safe by running over the pedestrian. Either way, the choice of who lives and who dies would be made by the computers and sensors in the vehicle, and by extension, by the programmers behind them.
“On one hand, the algorithms that control the car may have an explicit set of rules to make moral tradeoffs,” Iyad Rahwan, a scientist at MIT, told Business Insider. “On the other hand, the decision made by a car in the case of unavoidable harm may emerge from the interaction of various software components, none of which has explicit programming to handle moral tradeoffs.”
Rahwan added, “Every time the car makes a complex maneuver, it is implicitly making a trade-off in terms of risks to different parties.”
In 2014, Google X founder Sebastian Thrun said the company’s automated car would hit the smallest object in the road if it could not find a clear path.
“If it happens that there is a situation where the car couldn’t escape, it would go for the smaller thing,” he said.