Sungjoo Suh, Changkyu Choi, Du-Sik Park
In this paper, we propose a novel method for estimating depth from multiple coded apertures for 3D interaction. A flat panel display is transformed into a lens-less multi-view camera consisting of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging patterns of modified uniformly redundant arrays (MURA) shown on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by shifting and averaging the captured coded images. An initial depth map is then obtained by applying a focus operator to the stack of refocused images at each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system that captures the scene in front of the display; it consists of a display screen and an x-ray detector without a scintillator layer, so that the detector acts as a visible-light sensor panel. Experimental results confirm that the proposed method accurately determines the depth of objects, including a human hand, in front of the display by capturing multiple MURA-coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
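The pipeline above (shift-and-average refocusing, a per-pixel focus operator, and refinement around the focus peak) can be sketched in a few NumPy functions. This is a minimal illustration, not the paper's implementation: it assumes a simple linear parallax model (shift proportional to aperture offset divided by depth), substitutes a sum-of-second-differences focus measure for the paper's focus operator, and uses a three-point parabolic fit in place of the paper's parametric focus model. All function and parameter names are hypothetical.

```python
import numpy as np

def refocus_stack(coded_images, offsets, depths):
    """Synthetic refocusing: shift each aperture's image by a depth-dependent
    parallax and average. Assumes a toy linear model shift = offset / depth."""
    stack = []
    for d in depths:
        acc = np.zeros_like(coded_images[0], dtype=float)
        for img, (dy, dx) in zip(coded_images, offsets):
            sy, sx = int(round(dy / d)), int(round(dx / d))
            acc += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
        stack.append(acc / len(coded_images))
    return np.stack(stack)  # shape (n_depths, H, W)

def focus_measure(stack):
    """Simple focus operator: sum of absolute second differences along
    rows and columns of each refocused slice (stand-in for the paper's)."""
    lap_y = np.abs(np.diff(stack, 2, axis=1))[:, :, 1:-1]
    lap_x = np.abs(np.diff(stack, 2, axis=2))[:, 1:-1, :]
    return lap_y + lap_x  # shape (n_depths, H-2, W-2)

def refine_depth(focus, depths):
    """Per-pixel argmax over the focus stack, refined by a parabolic fit
    to the three samples around the peak (assumes uniform depth sampling)."""
    depths = np.asarray(depths, dtype=float)
    idx = np.argmax(focus, axis=0)
    i = np.clip(idx, 1, len(depths) - 2)          # keep a valid 3-sample window
    f_m = np.take_along_axis(focus, (i - 1)[None], axis=0)[0]
    f_0 = np.take_along_axis(focus, i[None], axis=0)[0]
    f_p = np.take_along_axis(focus, (i + 1)[None], axis=0)[0]
    denom = f_m - 2.0 * f_0 + f_p
    # Sub-sample offset of the parabola vertex; guard against flat responses.
    delta = np.where(np.abs(denom) > 1e-12, 0.5 * (f_m - f_p) / denom, 0.0)
    step = depths[1] - depths[0]
    return depths[i] + np.clip(delta, -1.0, 1.0) * step
```

In this sketch the coarse depth resolution is set by the number of refocused slices, while the parabolic fit recovers sub-slice accuracy, which is the same division of labor as the initial-estimate-plus-model-fit refinement described above.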