Saqib Nazir, Lorenzo Vaquero Otal, Manuel Mucientes, Víctor M. Brea, Daniela Coltuc
Monocular depth estimation and image deblurring are two fundamental tasks in computer vision, given their crucial role in understanding 3D scenes. Performing either of them from a single image is an ill-posed problem. Recent advances in deep convolutional neural networks (DNNs) have revolutionized many tasks in computer vision, including depth estimation and image deblurring. When working with defocused images, depth estimation and the recovery of the All-in-Focus (AiF) image become related problems due to defocus physics. In spite of this, most existing models treat them separately. There are, however, recent models that solve both problems simultaneously by concatenating two networks in a sequence, first estimating the depth map and then reconstructing the focused image based on it. Experiments with pipeline architectures in other applications have shown that concatenating networks increases the complexity of the overall network, resulting in slower convergence and lower throughput. To alleviate this problem, we propose a DNN that solves depth estimation and image deblurring in parallel. Our Two-Headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch. The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D. Extensive experiments with 2HDED:NET on these benchmarks have demonstrated performance superior or close to that of state-of-the-art models for depth estimation and image deblurring.
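The two-headed design described above can be illustrated with a minimal sketch: a single shared encoder whose features feed two parallel decoder heads, one regressing a one-channel depth map and the other reconstructing the three-channel AiF image. The layer choices and names below are illustrative assumptions, not the authors' actual 2HDED:NET layers.

```python
import torch
import torch.nn as nn

class TwoHeadedSketch(nn.Module):
    """Hypothetical minimal sketch of the shared-encoder, two-head idea."""

    def __init__(self):
        super().__init__()
        # Shared encoder: downsamples the defocused input image twice.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Depth head: upsamples back to input resolution, 1-channel depth map.
        self.depth_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )
        # Deblurring head: same spatial shape, 3-channel AiF reconstruction.
        self.deblur_head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)  # one forward pass through the shared trunk
        # Both heads consume the same features, so the two tasks are solved
        # in parallel rather than as a sequential pipeline.
        return self.depth_head(feats), self.deblur_head(feats)

net = TwoHeadedSketch()
depth, aif = net(torch.randn(1, 3, 64, 64))
print(depth.shape, aif.shape)
```

Because the heads branch off one trunk instead of being chained, the deblurring output does not wait on a completed depth map, which is the property the abstract contrasts with sequential pipeline architectures.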