Self-driving cars have moved from fiction to reality. But the shift isn't happening without consequences, which is why a philosophical discussion has become the focus of a debate among technologists across the world – the ethical dilemma.
The discussion's focal point is a question about playing God – who should decide who gets to live or die? What if, rather than making the decision in the heat of the moment, you were a programmer who had to put your choices into code? What if, rather than picking between the lives of five people and one person on different roads, you had to pick between the life of the car's sole occupant and the lives of five pedestrians? And would you buy a car if you knew it was programmed to swerve into a tree to protect someone who crossed the road without looking?
Andrew Chatham, a principal engineer on Google's self-driving car project, comments on the dilemma:
“The main thing to keep in mind is that we have yet to encounter one of these problems (…) In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up.”
So, who do you think should play the role of God and decide who gets to live or die if the worst-case scenario emerges? And if you were an engineer, would you make such a decision?