Tao Hu, Shiliang Sun, Jing Zhao, Dongyu Shi
This work proposes a novel unsupervised cross-modality adaptive segmentation method for medical images, addressing the performance degradation caused by severe domain shift when neural networks are deployed to unseen modalities. The proposed method is an end-to-end framework that performs appearance transformation via a domain-shared shallow content encoder and two domain-specific decoders. The features extracted by the encoder are made more domain-invariant through a similarity learning task based on the proposed Semantic Similarity Mining (SSM) module, which strongly aids domain adaptation. The domain-invariant latent features are then fused into the target-domain segmentation sub-network, which is trained on the original target-domain images and the images translated from the source domain under an adversarial training framework. The adversarial training effectively narrows the gap that remains between the domains in semantic space after appearance alignment. Experimental results on two challenging datasets demonstrate that our method outperforms state-of-the-art approaches.
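The overall loss composition described above can be illustrated with a minimal, forward-only sketch. This is not the authors' implementation: the tiny random linear maps standing in for the shared content encoder and the domain-specific decoders, the cosine similarity used as a proxy for the SSM term, and the least-squares adversarial term with a placeholder discriminator score are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "networks": random linear maps standing in for the
# domain-shared shallow content encoder and the two domain-specific decoders.
D_IN, D_FEAT = 64, 16
W_enc = rng.normal(size=(D_IN, D_FEAT)) * 0.1    # shared content encoder
W_dec_t = rng.normal(size=(D_FEAT, D_IN)) * 0.1  # target-domain decoder

def encode(x):
    return np.tanh(x @ W_enc)

def decode_t(z):
    return np.tanh(z @ W_dec_t)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

x_s = rng.normal(size=D_IN)  # flattened source-modality image (toy stand-in)
x_t = rng.normal(size=D_IN)  # flattened target-modality image (toy stand-in)

z_s, z_t = encode(x_s), encode(x_t)

# Appearance transformation: source content rendered by the target decoder.
x_s2t = decode_t(z_s)

# SSM-style similarity term: encourage cross-domain content codes to agree,
# pushing the shared encoder toward domain-invariant features.
l_sim = 1.0 - cosine(z_s, z_t)

# Least-squares adversarial term in semantic space; d_fake is a placeholder
# discriminator score on predictions for translated images.
d_fake = 0.3
l_adv = (d_fake - 1.0) ** 2

total_loss = l_sim + l_adv
print(round(total_loss, 4))
```

In a real system the translated image `x_s2t` would be fed, together with genuine target images, to the target-domain segmentation sub-network, and all terms would be minimized jointly end to end.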