Under the wrong conditions, ‘stop’ could mean ‘go faster’ to an autonomous car.
A new study led by the University of Washington found that the kind of deep neural network expected to power autonomous cars could be fooled into misreading a Stop sign as a 45 mph speed-limit sign, or another sign entirely, by the addition of just a little graffiti.
The researchers peppered one Stop sign with a few black and white blocks, and used stickers to make another read “Love Stops Hate.” In both cases, the deep neural network processing images fed to it by a camera misread the signs most of the time, even though they retained their octagonal shape and red background color.
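The study's physical stickers exploit the same weakness as digital adversarial examples: tiny, deliberately chosen input changes that flip a classifier's output while the input still looks the same to a person. The sketch below is illustrative only, not the researchers' actual method; it applies a fast-gradient-sign-style perturbation to a toy linear "sign classifier," with all names and values being assumptions for the demo.

```python
import numpy as np

# Toy "sign classifier": score = w . x + b, class 1 ("stop") if score > 0.
# The 64 inputs stand in for pixels of a tiny 8x8 image.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = -0.5 * w.sum()               # center the decision boundary

# Clean "image", constructed so the classifier reads it as "stop".
x = 0.5 + 0.1 * np.sign(w)

def predict(img):
    return int(w @ img + b > 0)  # 1 = "stop sign", 0 = "other sign"

# For a linear model the gradient of the score w.r.t. the input is just w,
# so stepping against sign(w) lowers the "stop" score -- the digital
# analogue of placing small stickers in the most damaging spots.
eps = 0.25                       # per-pixel perturbation budget
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(predict(x))                                  # 1: reads the sign correctly
print(predict(x_adv))                              # 0: misreads after the nudge
print(round(float(np.max(np.abs(x_adv - x))), 2))  # 0.25: change stays small
```

Each pixel moves by at most 0.25 on a 0-to-1 scale, yet the prediction flips, which is the core of why small patches of graffiti can defeat a vision system that a human would never be confused by.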
Another test simulated a slightly faded right-turn arrow, which the computer often mistook for a Stop sign or an Added Lane sign, but which a human would have no trouble identifying.
The results varied with the camera's position and distance relative to the signs, and would differ depending on the type of system being targeted. Nevertheless, the researchers hope the work will help programmers develop better defenses against these types of attacks.
As a safeguard against this type of situation, driverless cars draw on several inputs, including hyper-accurate GPS mapping, to help them avoid problems caused by confusing signage, but they still need to reconcile that data to make the correct, sometimes life-or-death, decision.
According to The Sun, the researchers suggest this kind of 'contextual' information is of the utmost importance to the safe operation of autonomous cars.