What could possibly go wrong? The German government’s Federal Ministry of Transport and Digital Infrastructure has established twenty ethical rules for the design and software of self-driving cars.
The ministry's 20 principles are based on the assumption that human morality can't be modeled. They also make some bold assertions about how cars should act, arguing that a child running onto the road would be less "qualified" to be saved than an adult standing on the footpath watching, because the child created the risk. That may be logical, but it isn't necessarily how a human would respond to the same situation.
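To make that oddity concrete, here is a toy sketch of the kind of rule the guidelines are being paraphrased as implying: a party that created the hazard is ranked behind an uninvolved bystander. This is my own illustration, not the ministry's text or anyone's actual software; the classes and the one-tier demotion are invented for the example.

```python
# Illustrative only: a crude "risk creator loses priority" rule.
# Nothing here comes from the German guidelines themselves.
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    created_risk: bool  # did this party cause the dangerous situation?

def protection_priority(party: Party) -> int:
    # Lower number = protected first. Risk creators are demoted one tier.
    return 1 if party.created_risk else 0

child = Party("child who ran into the road", created_risk=True)
adult = Party("adult watching from the footpath", created_risk=False)

# Under this rule the adult bystander outranks the child -- the
# counterintuitive outcome the article is pointing at.
ranked = sorted([child, adult], key=protection_priority)
print([p.name for p in ranked])
```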
So, what’s the right approach? The University of Osnabrück study doesn’t offer a definitive answer, but the researchers point out that the “sheer expected number of incidents where moral judgment comes into play creates a necessity for ethical decision-making systems in self-driving cars.” And it’s not just cars we need to think about. AI systems and robots will likely be given more and more responsibilities in other potential life-and-death environments, such as hospitals, so it seems like a good idea to give them a moral and ethical framework to work with.
It appears these geniuses came up with these rules based on a “virtual study.”
In virtual reality, study participants were asked to drive a car through suburban streets on a foggy night. On their virtual journeys they were presented with the choice of slamming into inanimate objects, animals or humans in an inevitable accident. The subsequent decisions were modeled and turned into a set of rules, creating a “value-of-life” model for every human, animal and inanimate object likely to be involved in an accident.
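For what it's worth, here is a rough sketch of how a "value-of-life" score could be pulled out of trials like those: log which option the driver spared in each forced choice, then score each obstacle class by how often it was spared. The trial data and class names below are invented, and the researchers' actual model is certainly more sophisticated than this; the point is only to show the shape of the idea.

```python
# Toy value-of-life model derived from hypothetical pairwise choices.
# Each trial records (spared, hit) from a forced, unavoidable collision.
from collections import defaultdict

trials = [
    ("child", "deer"), ("adult", "trash_can"), ("child", "adult"),
    ("adult", "deer"), ("deer", "trash_can"), ("child", "trash_can"),
    ("adult", "child"),
]

spared = defaultdict(int)
appeared = defaultdict(int)
for winner, loser in trials:
    spared[winner] += 1
    appeared[winner] += 1
    appeared[loser] += 1

# Score: fraction of encounters in which the class was spared.
value_of_life = {obj: spared[obj] / appeared[obj] for obj in appeared}

def choose_target(options):
    """In an unavoidable collision, steer toward the lowest-scoring option."""
    return min(options, key=lambda obj: value_of_life.get(obj, 0.0))

print(value_of_life)
print(choose_target(["child", "trash_can"]))  # -> 'trash_can'
```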
“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” says Professor Peter König, an author on the paper. “Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour and secondly, if they are, should machines act just like humans?”
I know that my readers will immediately reference Isaac Asimov's three laws of robotics, but that really doesn't work. Asimov's laws were incorporated into a science-fiction "positronic brain" that was supposedly built almost organically, so complex in formation that no one really understood it. Once the laws were incorporated into each brain, they could not be tampered with without destroying the brain itself. Our coming robots will have no such protection.