JOURNAL ARTICLE

CBCRnet: A Contrastive Learning-based Multi-modal Image Registration Via Bidirectional Cross-modal Attention

Abstract

In recent years, convolutional neural networks (CNNs) have been a major focus in medical image registration. However, CNNs have been shown to be limited in their ability to represent modality-independent features and to capture the spatial correspondence between different modalities. We therefore present CBCRnet for effective feature representation and correspondence modeling. 1) We propose a novel pretraining method, guided by contrastive and reconstruction tasks, for modality-independent feature learning; unaligned image pairs can be imported directly for pretraining. 2) We propose a bidirectional cross-modal attention module to capture explicit spatial correspondence.

Clinical Relevance: Multi-modal deformable medical image registration has many applications in diagnostic medical imaging, organ mapping, and surgical navigation [1], such as ablation surgery guided by intraprocedural CT and preoperative MR. Multi-modal deformable image registration is therefore important in clinical practice.
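The bidirectional cross-modal attention described above can be sketched with scaled dot-product attention applied in both directions, so that each modality's features attend over the other's. This is a minimal NumPy illustration of the general mechanism, not the paper's implementation: the actual module presumably includes learned projections, normalization, and residual connections, and the function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats):
    # Scaled dot-product attention: each query token attends over
    # all context tokens (simplified; no learned Q/K/V projections).
    d_k = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ context_feats

def bidirectional_cross_modal_attention(ct_feats, mr_feats):
    # CT tokens attend to MR tokens and vice versa, so spatial
    # correspondence is aggregated in both directions.
    ct_attended = cross_attention(ct_feats, mr_feats)
    mr_attended = cross_attention(mr_feats, ct_feats)
    return ct_attended, mr_attended

# Toy example: 16 CT tokens and 20 MR tokens, 32-dim features.
rng = np.random.default_rng(0)
ct = rng.standard_normal((16, 32))
mr = rng.standard_normal((20, 32))
ct_out, mr_out = bidirectional_cross_modal_attention(ct, mr)
print(ct_out.shape, mr_out.shape)  # (16, 32) (20, 32)
```

Note that the two attention maps need not be transposes of each other; each direction normalizes over a different set of tokens, which is what makes the module bidirectional rather than symmetric.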

Keywords:
Modal, Computer science, Artificial intelligence, Image registration, Computer vision, Image (mathematics), Materials science

Topics

Advanced Image and Video Retrieval Techniques
Multimodal Machine Learning Applications
Image Retrieval and Classification Techniques