The following appeared in a Guardian story on the research.

Hackers can easily trick self-driving cars into thinking that another car, a wall or a person is in front of them, potentially paralysing them or forcing them to take evasive action.
Automated cars use laser ranging systems, known as lidar, to image the world around them and allow their computer systems to identify and track objects. But a tool similar to a laser pointer and costing less than $60 can be used to confuse lidar...
The following appeared in the IEEE Spectrum story "Researcher Hacks Self-driving Car Sensors."
Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles...
Petit acknowledges that his attacks are currently limited to one specific unit but says, “The point of my work is not to say that IBEO has a poor product. I don’t think any of the lidar manufacturers have thought about this or tried this.”
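To see why a replayed pulse is so convincing, consider the time-of-flight arithmetic a pulsed lidar performs. The sketch below is my own toy model, not Petit's code or IBEO's firmware; it assumes a unit that converts echo delay directly into range, which is the property an attacker exploits.

    # Toy model of pulsed-lidar time-of-flight ranging and spoofing.
    # Illustration only; no vendor's implementation looks like this.

    C = 299_792_458.0  # speed of light in m/s

    def range_from_echo(delay_s: float) -> float:
        """A pulsed lidar measures the round-trip delay of its own pulse
        and halves the distance light traveled: out and back."""
        return C * delay_s / 2.0

    # A genuine echo from a wall 50 m away arrives after roughly 333 ns.
    true_delay = 2 * 50.0 / C
    print(f"real wall:      {range_from_echo(true_delay):.1f} m")

    # An attacker who has recorded the unit's pulses can replay one with a
    # chosen delay relative to the next outgoing pulse, so the lidar
    # computes a phantom obstacle at whatever range the attacker picks.
    spoofed_delay = 2 * 5.0 / C  # timed to mimic an object at 5 m
    print(f"phantom object: {range_from_echo(spoofed_delay):.1f} m")

Because the receiver cannot authenticate that a returning pulse is a reflection of its own transmission, any pulse that arrives with plausible timing is indistinguishable from a real echo.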
I had the following reactions to these stories.
First, it's entirely possible that self-driving car manufacturers already know about this attack model. They may have decided that it's worth shipping cars despite the technical vulnerability. For example, WiFi has no defense against jamming of the RF spectrum, and there are non-RF methods of disrupting WiFi as well. Nevertheless, WiFi is everywhere, though lives usually don't depend on it.
Second, researcher Jonathan Petit appears to have tested an IBEO Lux lidar unit and not a real self-driving car. We don't know, from the Guardian or IEEE Spectrum articles at least, how a Google self-driving car would handle this attack. Perhaps the vendors have already compensated for it.
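One plausible compensation, and this is purely my own hypothetical sketch with invented function names and thresholds, is to refuse emergency action on any obstacle that only a single sensor reports:

    # Hypothetical cross-sensor plausibility check (my own sketch, not any
    # vendor's code): emergency action requires agreement between sensors.

    def corroborated(lidar_hits: int, radar_hits: int, camera_hits: int,
                     min_sensors: int = 2) -> bool:
        """Return True only if enough independent sensors see the obstacle.
        A spoofed lidar echo alone would then not trigger hard braking."""
        sensors_agreeing = sum(hits > 0 for hits in (lidar_hits, radar_hits, camera_hits))
        return sensors_agreeing >= min_sensors

    # Lidar reports a phantom wall; radar and camera see nothing.
    if not corroborated(lidar_hits=12, radar_hits=0, camera_hits=0):
        print("obstacle not corroborated; log an anomaly instead of braking")

Whether Google or any other vendor actually performs this kind of cross-sensor corroboration, neither article says.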
Third, these articles may undermine one of the presumed benefits of self-driving cars: that they are safer than human drivers. If self-driving car technology is vulnerable to a class of attack that does not apply to driver-controlled cars, that is a problem.
Fourth, does this attack mean that driver-controlled cars with similar technology are also vulnerable, or will be? Are there corresponding attacks for systems that detect obstacles on the road and trigger the brakes before the driver can physically respond?
Last, these articles demonstrate the differences between safety and security. Safety, in general, is a discipline designed to improve the well-being of people facing natural, environmental, or otherwise mindless threats. Security, in contrast, is designed to counter intelligent, adaptive adversaries. I am predisposed to believe that self-driving car manufacturers have focused on the safety aspects of their products far more than the security aspects. It's time to address that imbalance.
Copyright 2003-2015 Richard Bejtlich and TaoSecurity (taosecurity.blogspot.com and www.taosecurity.com)