Understanding flood scenes is essential for effective disaster response. Previous research has primarily focused on computer vision-based approaches for analyzing flood scenes, capitalizing on their ability to rapidly and accurately cover affected regions. However, most existing methods emphasize static image analysis, with limited attention given to dynamic video analysis. Compared to image-based approaches, video analysis in flood scenarios offers significant advantages, including real-time monitoring, flow estimation, object tracking, change detection, and behavior recognition. To address this gap, this study proposes a computer vision-based multi-object tracking (MOT) framework for intelligent flood scene understanding. The proposed method integrates an optical-flow-based module for short-term undetected mask estimation and a deep re-identification (ReID) module to handle long-term occlusions. Experimental results demonstrate that the proposed method achieves state-of-the-art performance across key metrics, with a HOTA of 69.57%, DetA of 67.32%, AssA of 73.21%, and IDF1 of 89.82%. Field tests further confirm its improved accuracy, robustness, and generalization. This study not only addresses key practical challenges but also offers methodological insights, supporting the application of intelligent technologies in disaster response and humanitarian aid.
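The abstract names two recovery mechanisms without implementation detail: optical-flow warping of a mask over short detection gaps, and ReID embedding matching across long occlusions. The sketch below is a minimal illustration under simplified assumptions, not the paper's actual method: a single integer `(dy, dx)` shift stands in for dense optical flow, and cosine similarity over placeholder embedding vectors stands in for a learned ReID model; all function names are hypothetical.

```python
import numpy as np

def warp_mask(mask, flow):
    """Short-term recovery: propagate an undetected object's binary mask
    by shifting it with an estimated flow vector (dy, dx).
    A stand-in for per-pixel dense optical-flow warping."""
    dy, dx = int(round(flow[0])), int(round(flow[1]))
    h, w = mask.shape
    warped = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    ys2, xs2 = ys + dy, xs + dx
    inside = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    warped[ys2[inside], xs2[inside]] = 1
    return warped

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reidentify(lost_tracks, query_embedding, threshold=0.7):
    """Long-term recovery: match a new detection's ReID embedding
    against embeddings of lost tracks; return the best track id,
    or None if no track clears the similarity threshold."""
    best_id, best_sim = None, threshold
    for track_id, emb in lost_tracks.items():
        sim = cosine_sim(emb, query_embedding)
        if sim > best_sim:
            best_id, best_sim = track_id, sim
    return best_id
```

In a full tracker, `warp_mask` would fill in masks for a few consecutive missed frames, while `reidentify` would reassign identities when an object reappears after leaving the scene or being occluded for longer.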