A new method for the temporal segmentation of video sequences into real-world objects is proposed. First, each frame undergoes a color quantization step that matches like colors extracted from the previously processed frame. JSEG's color variance feature and texture features from the gray-level co-occurrence matrix (GLCM) are then extracted from each color-quantized frame and combined to obtain a better image segmentation. Finally, a validation step matches the segmented regions of the current frame against those of the previous frame, tracking existing objects between frames and automatically detecting new objects as they enter the scene. The algorithm is tested on various video segments (pans, zooms, close-ups, and multiple-object motion), and results are included.
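The GLCM texture features referred to above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the number of gray levels, the single pixel offset, and the choice of contrast and energy as features are all assumptions made for the example.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel
    offset (dx, dy). `image` holds integer levels in [0, levels)."""
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            # Count the co-occurrence of the pixel and its offset neighbor.
            m[image[y, x], image[y + dy, x + dx]] += 1
    total = m.sum()
    return m / total if total else m

def glcm_features(p):
    """Two common Haralick-style texture features from a normalized GLCM:
    contrast (local intensity variation) and energy (uniformity)."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum((i - j) ** 2 * p))
    energy = float(np.sum(p ** 2))
    return contrast, energy

# A perfectly uniform patch has zero contrast and maximal energy (1.0).
patch = np.zeros((4, 4), dtype=int)
contrast, energy = glcm_features(glcm(patch))
print(contrast, energy)  # → 0.0 1.0
```

In a segmentation pipeline such as the one described, features like these would be computed per region or per window and combined with the color variance feature before region merging.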
Yuchou Chang, Dah-Jye Lee, Yi Hong, James K. Archibald