Yang Binghui, Chengyi Zhu, Sijia Xia, Huili Fan, Jianfeng Yang, Jinsheng Xiao
Abstract

LiDAR has emerged as a widely used sensor for 3D object detection in autonomous driving. However, the point cloud data it produces are typically sparse and irregular, motivating cross-modal detection approaches that improve accuracy and stability. The distinct representations of point clouds and images make effective data fusion difficult and often lead to suboptimal performance. In this study, we introduce PVNet, a multi-modal, density-aware 3D object detection framework that leverages virtual point clouds generated through depth completion to overcome these fusion difficulties. To mitigate errors introduced by inaccurate depth completion, we propose a Virtual Point Cloud Enhancement Block: it first extracts features from both the original point cloud and the virtual points using sparse convolution, then combines these features through our centroid shift weighting module, which reduces interference from virtual-point noise. A self-attention mechanism is additionally employed for more effective feature fusion. To fully exploit the complementary information in LiDAR and camera data and to improve the detection of distant and small objects, we incorporate a multi-scale attention fusion mechanism. Experimental results on the KITTI dataset show that our method outperforms existing 3D object detection approaches.
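To make the centroid-shift idea concrete, the sketch below illustrates one plausible reading of it: virtual points whose positions deviate strongly from the centroid of their nearest real LiDAR points are treated as likely depth-completion noise and down-weighted. This is our own minimal PyTorch sketch, not the paper's implementation; the class name CentroidShiftWeighting, the k-nearest-neighbour centroid, and the Gaussian decay with scale sigma are all assumptions made for illustration.

```python
import torch
import torch.nn as nn


class CentroidShiftWeighting(nn.Module):
    """Hypothetical sketch of centroid shift weighting: virtual points far
    from the centroid of nearby real LiDAR points get smaller weights."""

    def __init__(self, sigma: float = 0.5):
        super().__init__()
        self.sigma = sigma  # assumed distance scale (metres) for the decay

    def forward(self, real_xyz, virt_xyz, virt_feat, k: int = 8):
        # real_xyz:  (N, 3) raw LiDAR points
        # virt_xyz:  (M, 3) virtual points from depth completion
        # virt_feat: (M, C) features attached to the virtual points
        dists = torch.cdist(virt_xyz, real_xyz)            # (M, N) pairwise distances
        _, knn_idx = dists.topk(k, dim=1, largest=False)   # indices of k nearest real points
        centroids = real_xyz[knn_idx].mean(dim=1)          # (M, 3) local real-point centroid
        shift = (virt_xyz - centroids).norm(dim=1)         # (M,) centroid shift per virtual point
        weight = torch.exp(-(shift / self.sigma) ** 2)     # (M,) weight in (0, 1]
        return virt_feat * weight.unsqueeze(1)             # suppress likely-noisy virtual points


if __name__ == "__main__":
    real = torch.randn(1000, 3)
    virt = torch.randn(4000, 3)
    feat = torch.randn(4000, 64)
    weighted = CentroidShiftWeighting()(real, virt, feat)
    print(weighted.shape)  # torch.Size([4000, 64])
```

Under this reading, the weighting acts as a soft gate before fusion: confident virtual points pass through nearly unchanged, while geometrically implausible ones are attenuated rather than hard-filtered.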