Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed points to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization, and design an algorithm that combines optimization and global sampling, which improves the attack success rate to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
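The search strategy the abstract describes — global sampling combined with local optimization, to avoid getting stuck in poor local optima when maximizing the attack objective — can be sketched in miniature. The sketch below is a hypothetical illustration, not the paper's implementation: `objective` is a toy one-dimensional surrogate standing in for the model's detection score of the spoofed points, and the function names are made up for this example.

```python
import random

def objective(x):
    # Toy surrogate for the attack objective: higher means the spoofed
    # point configuration x is more likely to be detected as an obstacle.
    # (Hypothetical; the real objective scores an ML detection model.)
    return -(x - 0.7) ** 2

def local_optimize(x0, step=0.05, iters=100):
    """Local refinement via simple hill climbing from a starting point
    (stand-in for the gradient/optimization step over the perturbation)."""
    x, best = x0, objective(x0)
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        val = objective(cand)
        if val > best:        # keep only improving moves
            x, best = cand, val
    return x, best

def attack_search(num_samples=20, seed=0):
    """Combine global random sampling with local optimization: draw many
    starting perturbations, refine each locally, and keep the best."""
    random.seed(seed)
    best_x, best_val = None, float("-inf")
    for _ in range(num_samples):
        x0 = random.uniform(0.0, 1.0)   # global sample of a start point
        x, val = local_optimize(x0)     # local refinement from that start
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

x, val = attack_search()
print(x, val)
```

The design intuition is the one the abstract states: a purely local solver can stall on a bad starting perturbation, while scattering many global samples and refining each one trades extra evaluations for a much better chance of landing near the objective's true optimum.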