Katerin Romeo, Piet B. W. Schwering, Marcel G. J. Breuers
The complementary nature of sensors in the visible and infrared bands attracts research interest in multi-sensor image sequence analysis, since the information available in each wavelength can be combined. Combining multi-sensor data with temporal information makes target detection more robust against sensor artifacts. In this paper we analyze moving object segmentation and tracking with fusion of multi-sensor data at two levels: the detection level and the decision level. At the detection level the accuracy of detecting all moving objects is analyzed, while at the decision level the moving objects are classified as target or non-target using the information computed in the different wavelengths. Performance is evaluated with ROC curves and synthetic degradation methods. In the detection-level approach, registration techniques are used to transfer detected moving object coordinates from the visible to the infrared band and vice versa; the results are compared to the detection rate in the destination band without fusion. At the decision level, tracks from different sensors are fused and evaluated with new ROC curves. The first promising results of the algorithm applied to experimental data, together with the algorithm evaluation, are presented.
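The detection-level fusion described above depends on registering the two bands so that detected object coordinates can be transferred between them. A minimal sketch of that coordinate transfer, assuming a planar homography `H` (visible to infrared) estimated offline; the function name and interface are illustrative, not from the paper:

```python
import numpy as np

def transfer_detections(points_vis, H):
    """Map detection centroids from the visible band to the infrared
    band using a 3x3 homography H (visible -> infrared)."""
    pts = np.asarray(points_vis, dtype=float)   # shape (N, 2)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones])              # homogeneous coords, (N, 3)
    mapped = homog @ H.T                        # apply homography, (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]       # normalize back to (x, y)

# With the identity homography, points map to themselves.
H = np.eye(3)
print(transfer_detections([[10.0, 20.0], [5.0, 7.5]], H))
```

In practice `H` would come from a calibration or image-registration step between the visible and infrared sensors; the transferred coordinates can then be checked against detections made directly in the destination band.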