JOURNAL ARTICLE

6D Object Pose Estimation Based on Cross-Modality Feature Fusion

Meng Jiang, Liming Zhang, Xiaohua Wang, Shuang Li, Yijie Jiao

Year: 2023   Journal: Sensors   Vol: 23 (19)   Pages: 8088   Publisher: Multidisciplinary Digital Publishing Institute

Abstract

6D pose estimation using RGB-D images plays a pivotal role in robotics applications. At present, after obtaining the RGB and depth modality information, most methods directly concatenate them without considering information interactions. This leads to low accuracy of 6D pose estimation under occlusion and illumination changes. To solve this problem, we propose a new method to fuse RGB and depth modality features. Our method effectively uses the individual information contained within each RGB-D image modality and fully integrates cross-modality interactive information. Specifically, we transform depth images into point clouds and apply the PointNet++ network to extract point cloud features; RGB image features are extracted by CNNs, and attention mechanisms are added to obtain context information within the single modality. We then propose a cross-modality feature fusion module (CFFM) to obtain the cross-modality information, and introduce a feature contribution weight training module (CWTM) to allocate the different contributions of the two modalities to the target task. Finally, the 6D object pose estimate is obtained from the final cross-modality fusion feature. By enabling information interactions within and between modalities, the integration of the two modalities is maximized. Furthermore, considering the contribution of each modality enhances the overall robustness of the model. Our experiments indicate that the accuracy of our method on the LineMOD dataset reaches 96.9%, on average, using the ADD(-S) metric, while on the YCB-Video dataset it reaches 94.7% using the ADD-S AUC metric and 96.5% using the ADD-S score (<2 cm) metric.
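The contribution-weighting idea behind the CWTM described in the abstract can be illustrated with a minimal sketch: two learned scalar logits are normalized into per-modality weights, which then blend the RGB and depth feature vectors. The function names and the two-logit parameterization below are illustrative assumptions, not the paper's actual implementation, which trains the weights jointly with the pose network.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(rgb_feat, depth_feat, contribution_logits):
    # Hypothetical CWTM-style fusion: turn two learned scalar logits
    # into normalized contribution weights, then blend the per-modality
    # feature vectors. In the paper these weights are learned; here
    # they are passed in directly for illustration.
    w_rgb, w_depth = softmax(np.asarray(contribution_logits, dtype=float))
    return w_rgb * rgb_feat + w_depth * depth_feat

# Equal logits give each modality a 0.5 contribution:
rgb_feature = np.ones(4)
depth_feature = np.zeros(4)
fused = fuse_modalities(rgb_feature, depth_feature, [0.0, 0.0])
```

Because the weights are normalized, a modality that is unreliable in a given scene (e.g., RGB under strong illumination changes) can be down-weighted without discarding it entirely.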

Keywords:
Artificial intelligence; Modality (human–computer interaction); Pose estimation; Computer science; Computer vision; Feature fusion; RGB color model; Point cloud; Robustness; Pattern recognition; Context; Metric

Metrics

Cited By: 2
FWCI (Field Weighted Citation Impact): 0.50
Refs: 43
Citation Normalized Percentile: 0.61


Topics

Robot Manipulation and Learning
Physical Sciences →  Engineering →  Control and Systems Engineering
Robotics and Sensor-Based Localization
Physical Sciences →  Engineering →  Aerospace Engineering
Image and Object Detection Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

CMFF6D: Cross-modality multiscale feature fusion network for 6D pose estimation

Zongwang Han, Long Chen, Shiqing Wu

Journal: Neurocomputing   Year: 2025   Vol: 623   Pages: 129416
JOURNAL ARTICLE

Human pose estimation based on cross-view feature fusion

Dandan Sun, Siqi Wang, Hailun Xia, Changan Zhang, Jianlong Gao, Mingyu Mao

Journal: The Visual Computer   Year: 2023   Vol: 40 (9)   Pages: 6581-6597
JOURNAL ARTICLE

Attention-based object pose estimation with feature fusion and geometry enhancement

Shuai Yang, Bin Wang, Junyuan Tao, Zhe Ruan, Hong Liu

Journal: Industrial Robot: the international journal of robotics research and application   Year: 2025   Vol: 52 (4)   Pages: 581-590
BOOK-CHAPTER

HDCP: Hierarchical Dual Cross-Modality Prompts Guided RGB-D Fusion for 6D Object Pose Estimation

Hanxue Fu, Qiangchang Wang

Communications in Computer and Information Science   Year: 2025   Pages: 306-317
JOURNAL ARTICLE

Hand-object pose estimation method based on fusion feature enhancement and complementary

Siyuan Gu, Shu Gao

Journal: Journal of Image and Graphics   Year: 2025   Vol: 30 (5)   Pages: 1433-1449