Rui Huang, Wei Feng, Jizhou Sun, Yaobin Zou
Saliency and co-saliency detection aim to distinguish conspicuous foreground objects in single and multiple images, respectively, and are thus essential in many multimedia and vision applications. To balance efficiency and accuracy, most recent successful saliency detectors are based on superpixels. However, saliency detection with single-scale superpixel segmentation may fail to capture the intrinsic salient objects of complex natural scenes containing small-scale, high-contrast backgrounds. To tackle this problem, we present a simple strategy that uses multiscale superpixels to jointly detect salient objects via low-rank optimisation. Specifically, we first build a multiscale superpixel pyramid and derive the corresponding saliency map at each scale from multimodal saliency features and priors. We then apply joint low-rank analysis to the multiscale saliency maps to obtain a more reliable, adaptively fused saliency map that properly accounts for saliency at all scales. We further propose a GMM-based generative co-saliency prior that extends the above approach to detecting co-salient objects across multiple images. Extensive experiments on benchmark datasets validate the effectiveness and superiority of the proposed saliency and co-saliency detector over state-of-the-art methods.
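The fusion step described above can be illustrated with a minimal sketch. The abstract does not specify the exact optimisation, so the snippet below is an assumption: it stands in for the joint low-rank analysis with a simple rank-1 SVD approximation of the stacked per-scale saliency maps, keeping the consensus (low-rank) component as the fused result. The function name `fuse_saliency_maps` and all parameters are illustrative, not from the paper.

```python
import numpy as np

def fuse_saliency_maps(maps):
    """Hedged sketch: fuse multiscale saliency maps via a rank-1 SVD
    approximation of their stacked matrix. This is a simplified
    stand-in for the paper's joint low-rank optimisation, not the
    authors' actual method.

    maps -- list of K saliency maps, each an HxW array in [0, 1],
            one per superpixel scale.
    """
    h, w = maps[0].shape
    # Stack each map as a column: D has shape (H*W, K).
    D = np.stack([m.ravel() for m in maps], axis=1)
    # The leading singular component captures what the scales agree on.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = s[0] * np.outer(U[:, 0], Vt[0])          # rank-1 reconstruction
    fused = np.abs(L.mean(axis=1)).reshape(h, w)  # average over scales
    # Normalise back to [0, 1].
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)
    return fused
```

In this simplification, scale-specific disagreements (e.g. spurious responses to small high-contrast background regions at one scale) fall into the residual discarded by the rank-1 truncation, which is the intuition behind using a low-rank model for multiscale fusion.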