JOURNAL ARTICLE

Active speaker detection with audio-visual co-training

Abstract

© 2016 ACM. In this work, we show how to co-train a classifier for active speaker detection using audio-visual data. First, audio Voice Activity Detection (VAD) is used to train a personalized video-based active speaker classifier in a weakly supervised fashion. The video classifier is in turn used to train a voice model for each person. The individual voice models are then used to detect active speakers. There is no manual supervision: audio weakly supervises video classification, and the co-training loop is completed by using the trained video classifier to supervise the training of a personalized audio voice classifier.
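The co-training loop described in the abstract could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the nearest-centroid classifiers, the feature arrays, and the function names (`cotrain`, `detect_speaker`) are hypothetical stand-ins for the personalized video classifier and per-person voice models.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # One centroid per class label (a toy stand-in for a real classifier).
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
    return np.array(labels)[dists.argmin(axis=1)]

def cotrain(video_feats, audio_feats, vad, person_ids):
    # Step 1: audio VAD weakly labels video frames (speaking vs. not speaking),
    # supervising a video-based active speaker classifier.
    video_clf = nearest_centroid_fit(video_feats, vad)
    # Step 2: the video classifier's predictions in turn supervise a
    # personalized voice model (here: mean audio feature) for each person.
    speaking = nearest_centroid_predict(video_clf, video_feats) == 1
    voice_models = {p: audio_feats[speaking & (person_ids == p)].mean(axis=0)
                    for p in np.unique(person_ids)}
    return video_clf, voice_models

def detect_speaker(voice_models, audio_frame):
    # Step 3: the active speaker is the person whose voice model is nearest.
    return min(voice_models, key=lambda p: np.linalg.norm(audio_frame - voice_models[p]))
```

No manual labels appear anywhere in the loop: the only supervision signals are the VAD output and the video classifier trained from it.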

Keywords:
Computer science, Classifier, Speech recognition, Audio-visual, Speaker recognition, Artificial intelligence, Pattern recognition, Multimedia

Metrics

Cited By: 26
FWCI (Field-Weighted Citation Impact): 0.95
Refs: 22
Citation Normalized Percentile: 0.79

Topics

Speech and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Music and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Speech Recognition and Synthesis (Physical Sciences → Computer Science → Artificial Intelligence)