Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field

1HKUST(GZ), 2HKUST, 3NIO Autonomous Driving


This work explores the use of 3D generative models to synthesize training data for 3D vision tasks. The key requirements on the generative models are that the generated data should be photorealistic, matching real-world scenarios, and that the corresponding 3D attributes should be aligned with the given sampling labels. However, we find that recent NeRF-based 3D GANs hardly meet these requirements, due to the design of their generation pipelines and the lack of explicit 3D supervision.

In this work, we propose Lift3D, an inverted 2D-to-3D generation framework that achieves these data generation objectives. Lift3D has several merits compared to prior methods: (1) Unlike previous 3D GANs, whose output resolution is fixed after training, Lift3D can generalize to any camera intrinsics, producing higher-resolution, photorealistic output. (2) By lifting a well-disentangled 2D GAN to a 3D object NeRF, Lift3D provides explicit 3D information about the generated objects, thus offering accurate 3D annotations for downstream tasks.
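The resolution-agnostic property comes from NeRF-style rendering: rays are generated per pixel from the camera intrinsics, so any image size or focal length yields a valid set of rays to render. A minimal sketch of this idea (the function name and shapes here are illustrative, not taken from the paper's code):

```python
import numpy as np

def generate_rays(K, H, W, c2w):
    """Generate per-pixel rays from camera intrinsics K and pose c2w.

    Because rays are created per pixel, the same radiance field can be
    rendered at any resolution or focal length simply by changing K, H, W.
    """
    i, j = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
    # Back-project pixel centers through the intrinsics to camera-space directions.
    dirs = np.stack([(i + 0.5 - K[0, 2]) / K[0, 0],
                     (j + 0.5 - K[1, 2]) / K[1, 1],
                     np.ones_like(i, dtype=np.float64)], axis=-1)
    # Rotate directions into world space; all origins come from the camera pose.
    rays_d = dirs @ c2w[:3, :3].T
    rays_o = np.broadcast_to(c2w[:3, 3], rays_d.shape)
    return rays_o, rays_d

# Example: a 4x6 image with focal length 100 and an identity pose.
K = np.array([[100., 0., 3.], [0., 100., 2.], [0., 0., 1.]])
rays_o, rays_d = generate_rays(K, 4, 6, np.eye(4))
print(rays_d.shape)  # (4, 6, 3)
```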

We evaluate the effectiveness of our framework by augmenting autonomous driving datasets. Experimental results demonstrate that our data generation framework can effectively improve the performance of 3D object detectors.


By distilling 3D knowledge from a well-trained 2D GAN, Lift3D enables training-data generation, providing photorealistic synthesis and precise 3D annotations.


We show generation results of Lift3D. The images are composited from our generated objects and the original backgrounds.
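Compositing a rendered object onto an existing background can be done with standard alpha blending, where the object's opacity map comes from the volume-rendered accumulated density. A minimal sketch (array shapes and the helper name are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def composite(background, object_rgb, object_alpha):
    """Alpha-composite a rendered object onto a background image.

    background:   (H, W, 3) float array in [0, 1]
    object_rgb:   (H, W, 3) rendered object colors
    object_alpha: (H, W, 1) per-pixel opacity, e.g. accumulated from
                  volume rendering along each ray
    """
    return object_alpha * object_rgb + (1.0 - object_alpha) * background

# Toy example: a half-transparent white object over a black background.
bg = np.zeros((2, 2, 3))
obj = np.ones((2, 2, 3))
alpha = np.full((2, 2, 1), 0.5)
out = composite(bg, obj, alpha)
print(out[0, 0])  # [0.5 0.5 0.5]
```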

Relevant work

Some excellent works on driving-scene simulation have been introduced recently. If you are interested in this direction, please also check them out.

Drive-3DAug first collects assets using NeRF, then augments these objects into existing scenes.

urbanGIRAFFE uses a coarse 3D panoptic prior to guide a 3D-aware generative model.

DiscoScene uses object-level representations to disentangle object and background generation.

CADSim creates large-scale, steerable assets with CAD guidance. I love this work!

GINA-3D generates neural assets in the wild, covering long-tail instances.

neuralsim. Towards closed-loop evaluation of self-driving cars. Train your self-driving car in the metaverse!

FEGR. Research work from NVIDIA Neural DriveSim: intrinsic decomposition of urban scenes. A great systems work.


@inproceedings{li2023lift3d,
	author = {Leheng Li and Qing Lian and Luozhou Wang and Ningning Ma and Ying-Cong Chen},
	title = {Lift3D: Synthesize 3D Training Data by Lifting 2D GAN to 3D Generative Radiance Field},
	booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
	year = {2023},
}