In the field of autonomous driving, 3D object detection plays a crucial role as a key perception module. Radar-vision fusion based object detection combines radar data and visual data to detect and recognize targets. In practice, however, visual data can suffer from a range of problems during collection, such as a limited field of view, lighting variations, and motion blur. To address these issues, in addition to techniques commonly applied to onboard cameras, such as dynamic exposure control and low-light enhancement, this paper proposes a novel radar-vision fusion object detection framework built on CenterFusion. The framework focuses on handling abnormal visual data and aims to achieve more reliable target detection under complex environmental conditions by fully exploiting the complementary nature of radar and visual data and by introducing a point cloud feature extraction module and a modal attention mechanism. Finally, comparative experiments are conducted on the nuScenes-mini dataset under different conditions, and the results show that the proposed method can fully replace CenterFusion across these scenarios, demonstrating excellent performance.
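The abstract does not specify how the modal attention mechanism is realized; the following is a minimal sketch of one plausible form, a learned per-location gate that reweights camera and radar feature maps so the network can lean on radar when the image is degraded. It assumes PyTorch and same-shape feature maps from the two branches; the class name ModalAttentionFusion and all layer choices are hypothetical illustrations, not the paper's implementation.

    import torch
    import torch.nn as nn

    class ModalAttentionFusion(nn.Module):
        """Hypothetical sketch: predicts per-pixel weights over the two
        modalities and fuses their feature maps accordingly."""

        def __init__(self, channels: int):
            super().__init__()
            # Small conv head producing a 2-way softmax (camera vs. radar).
            self.gate = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, 2, kernel_size=1),
            )

        def forward(self, img_feat: torch.Tensor, radar_feat: torch.Tensor) -> torch.Tensor:
            # img_feat, radar_feat: (B, C, H, W) features in a common frame.
            weights = torch.softmax(
                self.gate(torch.cat([img_feat, radar_feat], dim=1)), dim=1
            )
            w_img, w_radar = weights[:, 0:1], weights[:, 1:2]  # (B, 1, H, W) each
            return w_img * img_feat + w_radar * radar_feat

    if __name__ == "__main__":
        fusion = ModalAttentionFusion(channels=64)
        img = torch.randn(2, 64, 112, 200)    # camera-branch features
        radar = torch.randn(2, 64, 112, 200)  # radar-branch features
        print(fusion(img, radar).shape)       # torch.Size([2, 64, 112, 200])

Under blur or low light the gate can suppress the camera weight toward zero, which is one simple way such a mechanism could keep detection reliable when visual data is abnormal.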