Object pose estimation in Cartesian space plays a fundamental role in various applications such as object recognition and visual servoing. The performance and reliability of many of these applications depend heavily on the accuracy and robustness of the pose estimate. While monocular systems are usually sufficient for this purpose, multi-camera configurations offer improved accuracy and a wider field of view (FOV). Sensor fusion is an effective strategy for exploiting data from multiple cameras. This work presents a decentralized sensor fusion scheme that is fault tolerant and does not require external camera or robot calibration. Experimental and simulation data are provided to verify the effectiveness of this scheme.
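The core idea of fusing independent per-camera pose estimates can be sketched with a standard information-filter (inverse-covariance weighted) combination. This is a generic illustration, not the paper's specific decentralized scheme: the function name `fuse_estimates` and the example covariances are assumptions for demonstration.

```python
import numpy as np

def fuse_estimates(means, covs):
    """Fuse independent Gaussian estimates of the same quantity.

    Each camera contributes a mean vector and a covariance matrix;
    the fused estimate weights each contribution by its inverse
    covariance (information matrix), a standard decentralized-fusion step.
    """
    infos = [np.linalg.inv(C) for C in covs]          # information matrices
    P = np.linalg.inv(sum(infos))                     # fused covariance
    x = P @ sum(I @ m for I, m in zip(infos, means))  # fused mean
    return x, P

# Hypothetical example: two cameras observe the same 3D object position.
m1, C1 = np.array([1.0, 0.0, 0.0]), np.eye(3)
m2, C2 = np.array([3.0, 0.0, 0.0]), np.eye(3)
x, P = fuse_estimates([m1, m2], [C1, C2])
# With equal covariances the fused mean is the average, [2, 0, 0],
# and the fused covariance shrinks to 0.5 * I.
```

Because each camera's estimate carries its own covariance, a faulty or occluded camera can simply be dropped from the sums, which is one way such a scheme tolerates sensor faults.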
Christoph Bodensteiner, Marcus Hebel, Michael Arens