With the rapid spread of AI, questions of machine ethics arise in many contexts, for example in self-driving cars. A human driver might brake hard to save a pedestrian, accepting a risk to the passengers in a split-second decision shaped by moral values. A self-driving car will have to make such judgements based on encoded machine ethics. A machine-ethics survey of 2.3 million people, published in Nature, suggests that agreeing on a universal rule in such matters would be difficult. The survey presented 13 scenarios in which someone's death was inevitable, and respondents had to choose whom to spare. The data revealed no universally shared preferences, and therefore no perfect set of rules for robots can be written.