Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF

Leheng Li, Qing Lian, Ying-Cong Chen

HKUST(GZ), HKUST

arXiv preprint

Generate transferable 3D adversarial examples using NeRF.


Abstract

Deep neural networks (DNNs) have been proven extremely susceptible to adversarial examples, which raises safety-critical concerns for DNN-based autonomous driving stacks (e.g., 3D object detection). Although image-level attacks have been studied extensively, most are restricted to 2D pixel space, and such attacks are not always physically realizable in the 3D world. Here we present Adv3D, the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs). Advances in NeRF provide photorealistic appearance and geometrically accurate 3D generation, yielding more realistic and physically realizable adversarial examples.

We train our adversarial NeRF by minimizing the confidence that 3D detectors assign to surrounding objects in the training set. We then evaluate Adv3D on an unseen validation set and show that it causes a large performance drop when the NeRF is rendered at any sampled pose. To generate physically realizable adversarial examples, we propose primitive-aware sampling and semantic-guided regularization, which enable 3D patch attacks with camouflage adversarial textures. Experimental results demonstrate that the trained adversarial NeRF generalizes well across poses, scenes, and 3D detectors. Finally, we provide a defense against our attacks based on adversarial training through data augmentation.
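At a high level, the training loop treats the 3D detector as a frozen, differentiable critic and updates only the NeRF's parameters. The following is a minimal sketch of that idea, not the authors' released code: render_nerf, load_frozen_detector, and sample_vehicle_pose are hypothetical placeholders for a differentiable NeRF renderer/compositor, the victim detector, and pose sampling, and the detection output structure is assumed.

import torch

def train_adversarial_nerf(nerf_params, scenes, render_nerf,
                           load_frozen_detector, sample_vehicle_pose,
                           steps=10_000, lr=1e-2):
    """Optimize NeRF parameters so the rendered object suppresses the
    confidence the detector assigns to surrounding objects.

    All callables here are hypothetical placeholders, not a real API.
    """
    detector = load_frozen_detector()          # victim detector; weights stay fixed
    optimizer = torch.optim.Adam([nerf_params], lr=lr)

    for step in range(steps):
        scene = scenes[step % len(scenes)]
        pose = sample_vehicle_pose(scene)      # random pose -> pose-robust attack

        # Composite the NeRF rendering into the camera images at the sampled pose.
        images = render_nerf(nerf_params, scene, pose)

        # Attack objective: minimize the confidence of all surrounding detections.
        detections = detector(images)          # assumed to expose .scores
        loss = detections.scores.sum()

        optimizer.zero_grad()
        loss.backward()                        # gradients flow through the renderer
        optimizer.step()

    return nerf_params

Because the detector is frozen and only nerf_params receive gradients, the same loop transfers to any detector that is differentiable end-to-end with respect to its input images.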


Rendering Results

Adversarial Results

Comparison of different detectors under our attack.


Real-World Experiment

Illustration Video

BibTeX

@article{li2023adv3d,
  author  = {Leheng Li and Qing Lian and Ying-Cong Chen},
  title   = {Adv3D: Generating 3D Adversarial Examples in Driving Scenarios with NeRF},
  journal = {arXiv preprint},
  year    = {2023},
}