How could a machine choose between two moral options? The car cannot decide which way to swerve so as to hit one person rather than another. We can see the absurdity of this dilemma in the example of the "moral machine" (link below)... So, will the autonomous car save young people rather than old, dogs rather than cats? How will it know how to distinguish between a police officer and a thief? Engineers would have to handle situations where the dilemmas multiply to infinity ... without any satisfactory result.
At any rate, the question disturbs us… it's a good subject, thank you Victor Fontaine!
Many people find this question disturbing, because yes, it is. But why?
Because this question makes no sense. An autonomous car WILL NEVER be in such a situation.
Only humans can imagine that such a situation could occur.
I am still waiting for proof that the question is relevant before engaging in a real discussion of this topic.