Previous web image re-ranking approaches usually construct similarity measures at the image level. Given the diversity of large-scale web image databases, these approaches ignore the difference in importance between target and background regions in images, and so are not robust to background clutter and may introduce false similarity contributions among images, especially for object query images. We propose ContextRank, which re-ranks images by building a Markov random process of a surfer jumping across visual words within and among images. For intra-image context, links between visual words are constructed if they are sufficiently close within an image. For inter-image context, links between the visual words of an image pair are constructed by incorporating both feature similarity and spatial consistency. Equivalently, we build a random walk model at the visual word level that takes spatial information into account. The stationary distribution of visual words in each image is derived by computing the principal eigenvector of the link matrix. The score of an image is the sum of the scores of its visual words. Evaluation on object images collected from the Google search engine shows that our approach outperforms VisualRank and is comparable with the state of the art. Compared to VisualRank, our method achieves better performance without significant extra computational cost, making it suitable for large-scale web image retrieval.
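The random walk over visual words described above can be sketched as a standard power iteration: row-normalize the link matrix into transition probabilities, iterate to the stationary distribution, then score each image as the sum of its words' stationary scores. This is a minimal illustration, not the paper's implementation; the function and parameter names (`link_matrix`, `word_to_image`, `damping`) are assumptions, and the construction of the link matrix itself (intra-image proximity links plus inter-image feature/spatial-consistency links) is left abstract.

```python
import numpy as np

def stationary_scores(link_matrix, word_to_image, damping=0.85,
                      iters=100, tol=1e-10):
    """Hypothetical sketch of a visual-word random walk.

    link_matrix[i, j] is assumed to encode the combined intra- and
    inter-image link weight between visual words i and j.
    word_to_image[i] gives the index of the image containing word i.
    """
    n = link_matrix.shape[0]
    # Row-normalize to transition probabilities; rows with no links
    # jump uniformly (standard dangling-node handling).
    row_sums = link_matrix.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0,
                 link_matrix / np.maximum(row_sums, 1e-12),
                 1.0 / n)
    # Power iteration with a damping term, as in PageRank-style walks;
    # the fixed point is the principal eigenvector of the damped matrix.
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        new_pi = damping * (pi @ P) + (1.0 - damping) / n
        if np.abs(new_pi - pi).sum() < tol:
            pi = new_pi
            break
        pi = new_pi
    # Image score = sum of the stationary scores of its visual words.
    n_images = max(word_to_image) + 1
    image_scores = np.zeros(n_images)
    for w, img in enumerate(word_to_image):
        image_scores[img] += pi[w]
    return pi, image_scores
```

For instance, with four visual words split across two images and a symmetric toy link matrix, both the word distribution and the image scores sum to one, and the ranking follows the total stationary mass per image.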
Wengang Zhou, Qi Tian, Houqiang Li