JOURNAL ARTICLE

Multi-View Semi-Supervised Learning Based Image Annotation

Cheng Sun, Song Zhu, Zhe Shi

Year: 2014   Journal: Advanced Materials Research   Vol: 1049-1050   Pages: 1486-1489   Publisher: Trans Tech Publications

Abstract

This paper proposes a novel multi-view semi-supervised learning scheme that improves image annotation by exploiting multiple views of an image and the information contained in pseudo-labeled images. In the training process, labeled images are first used to train view-specific classifiers independently on uncorrelated and sufficient views; each view-specific classifier is then iteratively re-trained on the initial labeled samples plus additional pseudo-labeled samples selected by a confidence measure. In the annotation process, each unlabeled image is assigned appropriate semantic annotations based on the maximum vote entropy principle and the correlation between annotations, with respect to the results of each optimally trained view-specific classifier. Experimental results on a general-purpose image database demonstrate the effectiveness and efficiency of the proposed multi-view semi-supervised scheme.
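The training loop the abstract describes is a co-training-style procedure: per-view classifiers grown with confidently pseudo-labeled images, then fused at annotation time. The Python sketch below is a minimal illustration under stated assumptions, not the paper's method: the classifier choice (scikit-learn logistic regression), the confidence measure (maximum posterior averaged over views, threshold 0.9), and the averaged-posterior fusion in annotate are all placeholders; the paper's maximum vote entropy principle and annotation-correlation step are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain(views_lab, y_lab, views_unlab, n_rounds=5, conf=0.9):
    """Train one classifier per view, iteratively growing each view's
    training set with confidently pseudo-labeled images from the shared
    unlabeled pool (a simplified stand-in for the paper's scheme)."""
    X_aug = [Xl.copy() for Xl in views_lab]            # per-view training features
    y_aug = [np.asarray(y_lab).copy() for _ in views_lab]
    pool = [Xu.copy() for Xu in views_unlab]           # per-view unlabeled features
    clfs = [LogisticRegression(max_iter=1000) for _ in views_lab]
    for _ in range(n_rounds):
        for v, clf in enumerate(clfs):
            clf.fit(X_aug[v], y_aug[v])
        if len(pool[0]) == 0:
            break
        # Confidence measure (assumption): max posterior averaged over views.
        avg = np.mean([clf.predict_proba(pool[v])
                       for v, clf in enumerate(clfs)], axis=0)
        keep = avg.max(axis=1) >= conf
        if not keep.any():
            break                                      # nothing confident enough
        pseudo = clfs[0].classes_[avg.argmax(axis=1)[keep]]
        for v in range(len(clfs)):                     # add pseudo-labels, shrink pool
            X_aug[v] = np.vstack([X_aug[v], pool[v][keep]])
            y_aug[v] = np.concatenate([y_aug[v], pseudo])
            pool[v] = pool[v][~keep]
    return clfs

def annotate(clfs, views_x, top_k=3):
    """Fuse per-view posteriors by averaging and return the top-k labels per
    image; this simple fusion replaces the paper's vote-entropy and
    annotation-correlation rule."""
    avg = np.mean([clf.predict_proba(Xv)
                   for clf, Xv in zip(clfs, views_x)], axis=0)
    return clfs[0].classes_[np.argsort(-avg, axis=1)[:, :top_k]]

# Toy usage with two synthetic "views" (e.g. color and texture features).
rng = np.random.default_rng(0)
views_lab = [rng.normal(size=(30, 8)), rng.normal(size=(30, 6))]
y_lab = rng.integers(0, 3, size=30)
views_unlab = [rng.normal(size=(200, 8)), rng.normal(size=(200, 6))]
clfs = cotrain(views_lab, y_lab, views_unlab)
print(annotate(clfs, [v[:5] for v in views_unlab]))
```

Adding pseudo-labels to every view's training set simultaneously is one design choice; classic co-training instead lets each view label examples for the other views, which the abstract's wording ("re-trained using initial labeled samples and additional pseudo-labeled samples") leaves open.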

Keywords:
Automatic image annotation, Semi-supervised learning, Supervised learning, Machine learning, Classifier, Pattern recognition, Entropy, Image retrieval, Artificial intelligence, Artificial neural network, Computer science

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 12
Citation Normalized Percentile: 0.12

Topics

Image Retrieval and Classification Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Remote-Sensing Image Classification (Physical Sciences → Engineering → Media Technology)
