Yanwei Li, Xiaojuan Qi, Yukang Chen, Liwei Wang, Zeming Li, Jian Sun, Jiaya Jia
In this work, we present a conceptually simple yet effective framework for cross-modality 3D object detection, named Voxel Field Fusion. The proposed approach maintains cross-modality consistency by representing and fusing augmented image features as a ray in the voxel field. To this end, a learnable sampler is first designed to sample vital features from the image plane, which are projected to the voxel grid in a point-to-ray manner; this keeps the feature representation consistent with its spatial context. In addition, ray-wise fusion is conducted to fuse features with supplemental context in the constructed voxel field. We further develop a mixed augmentor to align feature-variant transformations, which bridges the modality gap in data augmentation. The proposed framework achieves consistent gains on various benchmarks and outperforms previous fusion-based methods on the KITTI and nuScenes datasets. Code is available at https://github.com/dvlab-research/VFF. (Part of the work was done in MEGVII Research.) © 2022 IEEE.
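The point-to-ray projection and ray-wise fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`point_to_ray_voxels`, `ray_wise_fuse`), the fixed-weight blending, and the simplified pinhole camera model are all assumptions introduced here for clarity.

```python
import numpy as np

def point_to_ray_voxels(u, v, K_inv, depths, voxel_size):
    """Lift an image pixel (u, v) to the set of voxel indices its camera
    ray passes through (hypothetical helper, not the paper's code).

    K_inv:      inverse 3x3 camera intrinsics matrix
    depths:     1D array of depth samples along the ray
    voxel_size: edge length of a cubic voxel
    """
    pixel = np.array([u, v, 1.0])
    direction = K_inv @ pixel                      # ray direction in camera frame
    points = depths[:, None] * direction[None, :]  # (D, 3) sampled 3D points
    voxels = np.floor(points / voxel_size).astype(int)
    return np.unique(voxels, axis=0)               # each crossed voxel once

def ray_wise_fuse(voxel_feat, ray_voxels, img_feat, weight=0.5):
    """Blend one sampled image feature into every voxel along the ray.

    In the paper the fusion weight is learned; a fixed scalar is used
    here only to keep the sketch self-contained.
    """
    for vx, vy, vz in ray_voxels:
        voxel_feat[vx, vy, vz] = (1 - weight) * voxel_feat[vx, vy, vz] + weight * img_feat
    return voxel_feat
```

For example, with identity intrinsics the pixel (0, 0) maps to the ray along the camera's z-axis, and depth samples 0.5 and 1.5 with unit voxels land in voxels (0, 0, 0) and (0, 0, 1), each of which then receives a blended copy of the image feature.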
Anas Mahmoud, Jordan S. K. Hu, Steven L. Waslander
Felix Nobis, Ehsan Shafiei, Phillip Karle, Johannes Betz, Markus Lienkamp
Wei Wu, Yisha Liu, Weimin Xue, Yan Zhuang
Yubing Li, Anhong Wang, Jing Hao, Donghan Bu
Baojie Fan, Kexin Zhang, Jiandong Tian