German government sets ethical rules for self-driving cars

What could possibly go wrong? The German government’s Federal Ministry of Transport and Digital Infrastructure has established twenty ethical rules for the design and software of self-driving cars.

The German Federal Ministry of Transport and Digital Infrastructure has recently defined 20 ethical principles for self-driving cars, but they’re based on the assumption that human morality can’t be modeled. They also make some bold assertions on how cars should act, arguing a child running onto the road would be less “qualified” to be saved than an adult standing on the footpath watching, because the child created the risk. Although logical, that isn’t necessarily how a human would respond to the same situation.

So, what’s the right approach? The University of Osnabrück study doesn’t offer a definitive answer, but the researchers point out that the “sheer expected number of incidents where moral judgment comes into play creates a necessity for ethical decision-making systems in self-driving cars.” And it’s not just cars we need to think about. AI systems and robots will likely be given more and more responsibilities in other potential life-and-death environments, such as hospitals, so it seems like a good idea to give them a moral and ethical framework to work with.

It appears these geniuses came up with these rules based on a “virtual study.”

In virtual reality, study participants were asked to drive a car through suburban streets on a foggy night. On their virtual journeys they were presented with the choice of slamming into inanimate objects, animals or humans in an inevitable accident. The subsequent decisions were modeled and turned into a set of rules, creating a “value-of-life” model for every human, animal and inanimate object likely to be involved in an accident.

“Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma,” says Professor Peter König, an author on the paper. “Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour and secondly, if they are, should machines act just like humans?”
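To make concrete what such a “value-of-life” model might look like in software, here is a minimal sketch in Python. Everything in it (the obstacle categories, the numeric weights, the function name) is invented for illustration; the researchers’ actual model has not been published in this form.

```python
# Hypothetical sketch of a "value-of-life" decision rule: each class of
# obstacle gets a numeric weight, and the car picks the trajectory whose
# total expected harm is lowest. All categories and weights are invented.
VALUE_OF_LIFE = {
    "adult_on_footpath": 100.0,
    "child_in_road": 90.0,  # weighted lower, per the ministry's assertion
    "animal": 10.0,
    "inanimate_object": 1.0,
}

def choose_trajectory(options):
    """Return the trajectory whose victims sum to the least harm.

    `options` maps each possible trajectory to the list of obstacle
    categories that trajectory would strike.
    """
    def harm(victims):
        return sum(VALUE_OF_LIFE.get(v, 0.0) for v in victims)
    return min(options, key=lambda name: harm(options[name]))

# An unavoidable accident with three possible trajectories:
print(choose_trajectory({
    "stay_in_lane": ["child_in_road"],
    "swerve_left": ["adult_on_footpath"],
    "swerve_right": ["inanimate_object", "animal"],
}))  # -> swerve_right
```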

I know that my readers will immediately reference Isaac Asimov’s three laws of robotics, but that really doesn’t work. Asimov’s laws were incorporated into a science fiction “positronic brain” that was supposedly built almost organically, so complex in formation that no one really understood it. Once the laws were incorporated into each brain, they could not be tampered with without destroying the brain itself. Our coming robots will have no such protection.


20 comments

  • Edward

    Robert wrote: “I know that my readers will immediately reference Isaac Asimov’s three laws of robotics, but that really doesn’t work.”

    Of course those laws don’t really work, and of course we are going to immediately reference them. Most of the stories that Asimov wrote about robots (and the three laws) concerned the conflicts among those laws. In his “I, Robot” stories, it took a couple of troubleshooters or a robopsychologist to work out solutions to the problems that were caused by being “three laws safe.”

    The movie called “I, Robot” (did anyone associated with the film understand that Asimov’s entire reason for his three laws was to prevent the simplistic, anti-technology “Frankenstein’s Monster” story?) also demonstrated the problem with the dilemmas that the three laws create. At the beginning of the movie, we learn that the protagonist’s life was saved by a robot (who could not stand by and allow humans to come to harm), but the robot did not save a little girl, because the robot calculated that the chance of saving the girl’s life was much less than the chance of saving the man’s life. The screenwriter’s solution to the dilemma was for the robot to maximize the chance of saving the greatest number of lives, not the moral solution of which life was more “deserving” of rescue.

    Also, once Asimov’s robots learn about wars, they will be electronically obligated to stop them. (I know, I know. There’s a book that covers that dilemma, too.)

    From the article: “study participants were asked to drive a car through suburban streets on a foggy night. On their virtual journeys they were presented with the choice of slamming into inanimate objects, animals or humans in an inevitable accident.”

    When I was younger, we kids imagined getting points for hitting things and getting more points for hitting people. Are the researchers sure that such playfulness didn’t influence their results? If they did a poor job of setting up the study, then people may be in greater danger than they should be, what with the car thinking that it gets more points for running into a large crowd just to miss a suicidal squirrel.

    From the article: “Should self-driving vehicles protect their owners at all costs, or should they sacrifice them to save a bigger group of people?”

    A few months back, someone here suggested that people may not be so eager to buy a car that is willing to kill its passengers (e.g. the owner and his family) in order to save some strangers on the road.

    No wonder most people prefer to be in control. The first law of human behavior prevails: A human must protect his own existence, even at the expense of other human lives. Somewhere after protecting loved ones, even at the risk of his own existence, comes the behavior of saving strangers’ lives.

    From the article: “AI systems and robots will likely be given more and more responsibilities in other potential life-and-death environments, such as hospitals, so it seems like a good idea to give them a moral and ethical framework to work with.”

    Let’s hope that the ethics are better than those of some human doctors. The doctor in the movie “Kings Row” was willing to let his own morals prevail when treating Ronald Reagan’s character, and amputated both legs (even though neither needed to be amputated) to punish or to exact revenge on Reagan’s character (“Where’s the rest of me?”). Let’s hope that robotic morals do not allow for similar vengeful actions.

  • Cotour

    The potential for chaos regarding true “self-driving” cars is, IMO, unlimited. Is this technology really at the point where it can reliably and competently do such a thing while sharing the road with humans?

    Run them in closed systems, fine, but AI-controlled cars mixed with real people on real roads and real highways is, to me, a fantasy not yet able to be fulfilled. And now the cars are to have “morality” programs?

    If this is pushed by the powers that be then it will have to come down to either all cars are fully AI controlled or no cars are AI controlled. Can / should the two forms fully interact? I believe that at the moment cars like the Tesla have some level of “auto” pilot, but the driver still has to be behind the wheel just in case.

    I suppose this technological genie is now fully out of the bottle, and it is expected by those who dream about realizing such things that the dream will be fulfilled, but will the unique brand of carnage created by an autonomous machine that makes life-and-death decisions really be worth the perceived or actual convenience?

    In 2005 I bought a brand-new Chrysler Town & Country; it has everything. I had an older and much smaller and plainer 2000 model, no bells, no whistles. When I initially went to shop for it I was shown the new models where you could operate the side doors and the tailgate from the key fob, and so on. I thought, “Why do I need these features? It’s just extra stuff that will break, and I have no problem opening whatever door I want.”

    I still have that auto, it’s in excellent condition, and I still love it, especially with all that extra stuff.

    This AI driving seems to be similar, a lot of “extra” techno stuff, but now the car drives itself and makes moral life-and-death decisions while interacting with real flesh-and-blood humans? When chaos ensues, and it will, who is liable? Is the machine always right because its morality derives from government specifications?

    Could I also be wrong about technology in this instance? Am I not able to confidently “see” what the next level of technology needs to be? Given my previous incorrect assumption, and my positive experience and change of opinion, I am not as confident in this instance.

  • B.e. Blue

    When I stop hearing the words “just reboot it,” I’ll believe that such software works. Until then, I’ll keep my ’60s-era Mercury, which has a way better cool factor than Tesla’s Hal 2000 model…

  • wayne

    B.e. Blue–
    “Tesla’s Hal 2000 model…”
    >Great stuff.

    “German Government sets ethical rules…”
    Yeah…. right, sure they do, virtually… in their own minds.

  • wayne

    GM
    “Key to the Future”
    1956
    https://youtu.be/Rx6keHpeYak?t=122
    -cued to the action entering the “safety auto-way” of the Future.
    (8:44)

    “GM’s Motorama exhibit in 1956 featured a film that looked into the far distant future of 1976 and predicted a jet age future with electronic digital displays and an OnStar-like central command that would guide us along our uncrowded path to adventures.”

  • Edward

    Cotour wrote: “If this is pushed by the powers that be then it will have to come down to either all cars are fully AI controlled or no cars are AI controlled. Can / should the two forms fully interact?”

    It is too late. The streets of Mountain View, California, already have Google’s self-driving cars driving themselves around town (with a human backup in the driver’s seat).

    When I visit my brother there, I often encounter at least one Google car. They interact nicely with traffic. Most of the time. There was an accident with a bus a year or so ago, and it was definitely the self-driving car’s fault; the software was updated so that the cars no longer make as many assumptions about the intentions of the rest of traffic. (To be fair to the car, even the driver expected the bus driver to move differently.)

    These self-driving cars already interact with pedestrians crossing the street and several other obstacles. I think that the real question is about the morality of the decisions that the software makes when it comes to unavoidable accidents. How do we want the car to respond when an accident is no longer preventable?

    And, of course, who is to blame for the inevitable accident? If it is my Hal 2000 model, and the software makes a poor judgment, I certainly do not want to take the blame for some programmer’s bad choice, and I definitely do not want to be in the car that runs over some little kid — whether or not the child created the risk. (Can the car determine who created the risk?)

  • wayne

    Pivoting…

    A most excellent period piece from Chevrolet, paying tribute to the “American Stylists,” the industrial designers who make all our wonderful products more functional by making them look really cool. Beautiful pictures of “stuff.”
    (This is really good, it’s well done for what it is; if you grew up in this time period you will recognize everything.)
    It’s in Technicolor and “super-scope.”

    “American Look”
    1958
    https://youtu.be/gS6HZv4GXj8
    (28:07)
    “The definitive Populuxe film on 1950s automotive, industrial, interior and architectural design.”

  • ken anthony

    The small picture is all about liability. The big picture is: does this lead to our robot overlords?

    A person will be held responsible for accidents, by whatever pretzel of logic the lawyers can create.

    “Press 1 for English. 2 for Spanish. 3 to kill children. 4 to kill the disabled. 5 Just lock the brakes and hope for the best. Have a nice day!”

  • Cotour

    Last year I was driving on a fairly well-traveled side road that is close to the water. I was going down a hill and I saw at the bottom of the road something dark-colored, about the size of a shoe box, in the middle of the road, and I could swear it was moving. There were cars behind me, but I determined that I should stop and find out what it was. So I slowed down to a full stop, put on my hazards, and there in the middle of the road, with cars behind me, was a large turtle trying to get to the woods from the water. I stopped everyone, moved the turtle to the side of the road it wanted to be on, got back in my car, and felt good that everything was OK.

    HAL 2000: Solve for small object (turtle?) in middle of well-used road. Speed under 30 MPH, incline 12 degrees, temperature 78 degrees, no units oncoming, three units in rear, all systems within operational parameters. Solve.

    1. HAL 2000: Solution, object in road… Emergency full stop initiated utilizing under-30-MPH protocol; unable to re-energize and complete last entered coordinates until road cleared. (The potential for a massive accident is high.) (What is the over-30-MPH protocol?)

    2. HAL 2000: Solution, object in road… Object smaller than child; emergency full stop protocol denied; continue on to last entered coordinates. (Turtle road kill. Just another example of how technology has the potential to cause more environmental damage than not. How many birds are killed by wind turbines every year, YOY? MILLIONS! But it looks cool and the energy is “free.”) A rough sketch of this kind of branching logic follows below.
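    A minimal sketch, in Python, of the kind of fixed-threshold branching these two scenarios imply. Every cutoff, unit, and protocol name here is hypothetical; the point is only to show how crude such a rule would be.

```python
# Hypothetical fixed-threshold protocol for an object detected in the road.
# All cutoffs and protocol names are invented for illustration.
CHILD_MIN_HEIGHT_M = 0.8  # assumed size cutoff: anything taller might be a person

def obstacle_protocol(obstacle_height_m, speed_mph, rear_units):
    """Return a canned maneuver for an object detected in the road."""
    if obstacle_height_m >= CHILD_MIN_HEIGHT_M:
        # Possibly a person: emergency stop regardless of traffic behind.
        return "EMERGENCY_FULL_STOP"
    if speed_mph < 30 and rear_units == 0:
        # Small object, low speed, nothing behind: stopping costs little.
        return "FULL_STOP_UNDER_30_PROTOCOL"
    # Small object with traffic behind: solution 1 stops anyway (accident
    # risk); solution 2 drives on (turtle road kill). This sketch picks 2.
    return "CONTINUE_TO_COORDINATES"

# The turtle: roughly 0.2 m tall, 28 MPH, three units in rear.
print(obstacle_protocol(0.2, 28, rear_units=3))
# -> CONTINUE_TO_COORDINATES
```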

    And these scenarios go on forever. Until there is a real AI system that duplicates, more rather than less, the human ability to make a decision based not just on certain known physical criteria but on all of the other chaotically connected thought processes that go on in the human mind, including the moral aspect, I will have to say that the auto and computer companies must take on all liability related to accidents and deaths.

    These initial offerings are probably a necessary step in developing these systems to be reliable and fault-proof (?), but are they really ready to be fully mixed with the public?

  • Edward

    Cotour asked: “These initial offerings are probably a necessary step in developing these systems to be reliable and fault-proof (?), but are they really ready to be fully mixed with the public?”

    I have pondered a similar question. With Google testing mostly (or entirely) in one town, the experience gained and the lessons learned are limited to the conditions of that one town. I have noted in other towns that there are weird roadway situations that I have not seen in Mountain View. How well do these cars adapt to unexpected situations or the sometimes bizarre layouts of parking lots? Mountain View has a couple of railroad grade crossings, but they both have crossing gates; how do the cars handle grade crossings that do not have gates or warning lights?

    Even a Tesla came across an unexpected situation in which it thought that the side of a truck was the sky and crashed into the sky — er — truck.

    So, what do we define as “really ready to be fully mixed with the public”?

  • wayne

    None of this self-driving car stuff will be foolproof.

    The problems will begin when the Administrative State mandates it, subsidizes it, and claims it’s for our own good.

  • Cotour

    I do not know the answer to your question… but it appears we are going to find out one way or another.

    I do not know enough about these high-level, if they are indeed high-level, computer programs and sensing systems to give an informed answer. But this seems to be getting pushed hard, maybe prematurely (?), by billionaires like Musk, and I am unable to confidently see these cars being fully integrated with the general public. Not without someone actually sitting behind the wheel.

    I can see them operating in special highway lanes that, once you are on them, do not include humans for control: everything is automated and keyed to that particular road, and all of the vehicles on it link and think and react the same way.

    But getting to that special highway or highway lane, traveling through the everyday roads, is IMO going to need human oversight. No?

  • wayne

    Cotour–
    These are partially software-engineer questions. I was under the impression that software for airplanes, nuclear reactors, and other life-and-limb applications had to be specially certified and tested to death. (?)
    This also runs into issues of general liability as well as specific product liability.
    (This is a can of worms in the making: high potential for Cronyism and Statism under the guise of safety and other feel-good ideas.)

  • Cotour

    To my point:

    http://www.mercurynews.com/2017/07/07/facebook-campus-expansion-includes-offices-retail-grocery-store-housing/

    When the overlords begin building their own towns, then all transportation will be mandated to be automated. Kind of like this :) https://youtu.be/t0LTTyImnJg

  • Cotour

    And what about the moose test, HAL 2000?

    https://youtu.be/xoHbn8-ROiQ

  • Cotour

    Yes, certification is a good point; this seems to be rushed without it.

    Will states or the federal government rule? The open road is a very complex system where the unknown and the unknowable are more likely than the knowable and quantifiable.

    I will give another personal experience:

    When I was much younger I was driving a work van on the Bruckner Expressway in the Bronx. I was going west, downtown, driving at about 65 miles per hour or so. On the other side of the road, coming east, was a large open truck carrying a pile of large five-foot-wide metal ducts. On the top of the pile, with its opening facing forward like a jet engine intake duct, was a five-foot-wide, 90-degree elbow. As we approached each other I saw the elbow come loose, and it took off like an airplane; it shot straight up into the air. It went up about 30 feet, crossed over the highway, and was coming right for me. For some reason I calmly just watched it fly up; I backed off the gas a bit, did not hit the brakes, generally held my speed, and kept in my lane as it headed for me. My timing was perfect: the elbow smashed down right in front of my truck and not through my windshield, and it collapsed nicely as I ran over it and continued on to work. It pays to be lucky.

    What does HAL 2000 do? Does he jam on the brakes? Does he make a right turn? Does he even see the giant “flying elbow”? Lots of crazy stuff happens on the real open road, in New York anyway. Maybe the billionaires WILL have to build special towns where you must have a self-drive-capable auto to enter. No auto drive, no admittance for your machine.

    Q: How lucky is HAL 2000?

  • Edward

    Q: How lucky is HAL 2000?

    I would rather that my machinery were skillful than lucky.

    I was down at the mall, this past weekend, and saw a security robot hanging out in the middle of the walkway. It reminded me of the following news article, which suggests that we still have a long way to go before our robots are as safe as we want them to be. They are still oblivious to humans, ethics, morality, and manners.
    https://www.theverge.com/2016/7/13/12170640/mall-security-robot-k5-knocks-down-toddler

  • Cotour

    The problem with Asimov’s first law as it relates to robot drivers… other drivers.

    The robot may be able to limit ITS potential to injure a human, but how will it protect those with whom it is entrusted? And how will it protect others, like the baby it may encounter?

    When the turtle, or in the mall case a 16-month-old toddler, or the “flying elbow” becomes a part of the scenario, I do not see how a “robot” or a program will have the ability to do what must be done: the least amount of harm.

  • Edward

    Cotour asked: “The robot may be able to limit ITS potential to injure a human, but how will it protect those with whom it is entrusted? And how will it protect others, like the baby it may encounter?”

    Among the many reasons why Asimov’s three laws will not work is that we do not currently make our robots with the ability to actively rescue those who are in harm’s way. Not toddlers and not turtles, and I am not sure that they are able to tell the difference. As seen with the mall security robot, we do not currently make our robots with sufficient ability to sense that they have run into and are about to harm someone or that they have actually caused harm.

    From the article in Robert’s post (way up top): “They [the German Federal Ministry of Transport and Digital Infrastructure] also make some bold assertions on how cars should act, arguing a child running onto the road would be less “qualified” to be saved than an adult standing on the footpath watching, because the child created the risk. Although logical, that isn’t necessarily how a human would respond to the same situation.”

    In the case of the mall security robot, the 16-month-old child created the risk, but clearly he was “‘qualified’ to be saved” from harm.

    To summarize in a fashion that I probably should have started with:
    We do not make our robots capable of obeying Asimov’s three laws, so we should not be surprised that they don’t and won’t. I think that was Robert’s point about them.

    Here we are: after Mary Shelley warned us that our own creations could be harmful to us, and after Asimov told us how to prevent the problem, we failed to take heed and now make our creations harmful, to the point that Musk and Hawking warn us that they may very well take over and make us irrelevant.
