Pengfei Yin, Jisu Hu, Xusheng Qian, Yakang Dai, Zhiyong Zhou
In the past few years, convolutional neural networks (CNNs) have been a major focus in medical image registration. However, CNNs have been shown to be limited in their ability to represent modality-independent features and to capture the spatial correspondence between different modalities. We therefore present CBCRnet for effective feature representation and correspondence modeling. 1) We propose a novel contrast-reconstruction-task-guided pretraining method for modality-independent feature learning, and unaligned image pairs can be imported directly for pretraining. 2) We propose a bidirectional cross-modal attention module to capture explicit spatial correspondence.

Clinical Relevance— Multi-modal deformable medical image registration has many applications in diagnostic medical imaging, organ mapping, and surgical navigation [1], such as ablation surgery guided by intraprocedural CT and preoperative MR. Multi-modal deformable image registration is therefore important in clinical practice.
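The abstract does not spell out the bidirectional cross-modal attention module's equations. A minimal NumPy sketch of one plausible form — scaled dot-product attention applied in both directions between flattened feature maps of the two modalities — is shown below; the names `f_mr` and `f_ct` and the single-head, unprojected formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, key_feats):
    # query modality attends to key modality (scaled dot-product)
    d = query_feats.shape[1]
    scores = query_feats @ key_feats.T / np.sqrt(d)   # (Nq, Nk)
    return softmax(scores, axis=-1) @ key_feats       # (Nq, d)

def bidirectional_cross_modal_attention(f_mr, f_ct):
    # f_mr: (N_mr, d) MR features; f_ct: (N_ct, d) CT features
    # (hypothetical names; actual module details are in the paper)
    mr_attended = cross_attention(f_mr, f_ct)  # MR queries CT
    ct_attended = cross_attention(f_ct, f_mr)  # CT queries MR
    return mr_attended, ct_attended
```

Computing attention in both query directions is what makes the correspondence bidirectional: each modality's features are re-expressed as a convex combination of the other modality's features, giving an explicit spatial matching signal in both MR→CT and CT→MR directions.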