Ruoxin Ma, Shengjie Zhao, Samuel Cheng
For small-image registration, feature-based approaches are likely to fail because feature detectors cannot extract enough feature points from low-resolution images. The classic FFT approach achieves high prediction accuracy, but its registration time can be relatively long, on the order of several seconds per image pair. To achieve real-time, high-precision rigid registration for small images, we apply deep neural networks to supervised rigid transformation prediction, directly regressing the transformation parameters. We train deep registration models on rigidly transformed CIFAR-10 and STL-10 images, and evaluate their generalization ability on transformed CIFAR-10 images, STL-10 images, and randomly generated images. Experimental results show that the proposed deep registration models achieve accuracy comparable to the classic FFT approach on small CIFAR-10 images (32×32), and our LSTM registration model takes less than 1 ms to register one image pair. On moderately sized STL-10 images (96×96), FFT significantly outperforms the deep registration models in accuracy but is also considerably slower. Our results suggest that deep registration models have competitive advantages over conventional approaches, at least for small images.
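For context, the "classic FFT approach" referred to above is typically phase correlation. A minimal NumPy sketch of the translation-only case is shown below; full rigid registration as studied in the paper also recovers rotation, which is commonly handled by a log-polar extension of the same idea. The function name and the synthetic test images here are illustrative, not taken from the paper.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) shift between two equal-size
    grayscale images via FFT phase correlation."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real          # impulse at the shift location
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # map peak indices to signed shifts (wrap-around convention)
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# usage: shift a synthetic 32x32 image by (3, -5) and recover the shift
rng = np.random.default_rng(0)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(3, -5), axis=(0, 1))
print(phase_correlation(shifted, img))  # → (3, -5)
```

The per-pair cost is dominated by the 2-D FFTs; for 32×32 images this is fast, but the seconds-long timings quoted above arise in full rigid pipelines that add rotation search and subpixel refinement on top of this translation step.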
Cheolhong An, Yiqian Wang, Junkang Zhang, Truong Q. Nguyen
Seda Güzel Aydın, Hasan Şakir Bilge, Fırat Hardalaç
Tian-Hao Zhang, Xianhui Liu, Weidong Zhao
Yifan Chen, Zhiyu Pan, Zhicheng Zhong, Wenxuan Guo, Jianjiang Feng, Jie Zhou