Software issue caused self-driving car accident that killed pedestrian
Sources familiar with the investigation into the March accident in which an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona, now say the car's software was at fault, according to a report by Amir Efrati of The Information.
According to two anonymous sources who talked to Efrati, Uber's sensors did, in fact, detect the victim, Elaine Herzberg, as she crossed the street with her bicycle. Unfortunately, the software classified her as a "false positive" and decided it didn't need to stop for her.
Distinguishing between real objects and illusory ones is one of the most basic challenges of developing self-driving car software. Software needs to detect objects like cars, pedestrians, and large rocks in its path and stop or swerve to avoid them. However, there may be other objects—like a plastic bag in the road or a trash can on the sidewalk—that a car can safely ignore. Sensor anomalies may also cause software to detect apparent objects where no objects actually exist.
Software designers face a basic tradeoff here. If the software is programmed to be too cautious, the ride will be slow and jerky, as the car constantly slows down for objects that pose no threat to the car or aren’t there at all. Tuning the software in the opposite direction will produce a smooth ride most of the time—but at the risk that the software will occasionally ignore a real object. According to Efrati, that’s what happened in Tempe in March—and unfortunately the “real object” was a human being.
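To make that tradeoff concrete, here is a minimal, purely hypothetical Python sketch of a confidence-threshold decision rule. It is not Uber's software or any real system; the class, function, and numbers are invented for illustration only.

```python
from dataclasses import dataclass

# Illustrative sketch only: a toy decision rule showing how a single
# confidence threshold trades false alarms against missed detections.
# Names and numbers are hypothetical and do not reflect any real system.

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "plastic_bag"
    confidence: float   # belief that this is a real obstacle, 0..1
    in_path: bool       # whether the object lies in the vehicle's path

def should_brake(detections: list[Detection], threshold: float) -> bool:
    """Brake if any in-path detection exceeds the confidence threshold.

    A low threshold brakes often (jerky ride, many false alarms);
    a high threshold rides smoothly but risks dismissing a real
    obstacle as a "false positive".
    """
    return any(d.in_path and d.confidence >= threshold for d in detections)

# The same scene judged under a cautious vs. a permissive threshold.
scene = [
    Detection("plastic_bag", 0.30, in_path=True),
    Detection("pedestrian_with_bicycle", 0.55, in_path=True),
]
print(should_brake(scene, threshold=0.40))  # True  -- cautious tuning stops
print(should_brake(scene, threshold=0.70))  # False -- permissive tuning ignores both
```

The sketch collapses the whole tradeoff into one number: move the threshold down and the car stops for plastic bags; move it up and it may sail past a real pedestrian.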
I honestly do not understand the need for self-driving cars. In the end, I simply cannot see the software ever being capable of handling all the variables created by the unpredictable life that will surround it. And should it ever get that good, I wonder if we will then regret it.