JOURNAL ARTICLE

Audio-Driven 3D Talking Face for Realistic Holographic Mixed-Reality Telepresence

Abstract

A machine's ability to understand human speech from visual input is crucial for efficient communication, yet disentangling the semantics of speech from facial appearance remains challenging. This article presents a taxonomy of 3D talking human face methods, categorizing them into GAN-based, NeRF-based, and DLNN-based approaches. The evolution of mixed-reality telepresence now focuses on talking 3D faces that synthesize natural human faces in response to text or audio input. Audio-video datasets support training across different languages and enable speech recognition. Handling noise in audio data is vital for robust performance, using techniques such as integrating DeepSpeech features and augmenting training audio with noise. Latency optimization improves the user experience, and careful technique selection reduces latency. Quantitative and qualitative evaluation methods measure audio-visual synchronization, face quality, and comparative performance. Talking 3D faces hold potential for advancing mixed-reality communication, which necessitates careful consideration of audio-video datasets, noise reduction, latency, and evaluation techniques.
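The abstract mentions augmenting training audio with noise to make models robust. As a minimal illustrative sketch (not taken from the article), noise augmentation can be done by adding Gaussian noise to a waveform at a target signal-to-noise ratio; the function name and parameters below are assumptions for illustration only.

```python
import numpy as np

def add_gaussian_noise(waveform, snr_db=20.0, rng=None):
    """Augment an audio waveform with Gaussian noise at a target SNR in dB.

    A common data-augmentation step for speech models: compute the signal
    power, derive the noise power needed for the requested SNR, and add
    zero-mean Gaussian noise scaled accordingly.
    """
    rng = rng or np.random.default_rng(0)
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

# Example: a 1-second 440 Hz tone sampled at 16 kHz, augmented at 20 dB SNR.
t = np.linspace(0, 1, 16000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
noisy = add_gaussian_noise(clean, snr_db=20.0)
```

In practice, pipelines of this kind often mix in recorded environmental noise rather than synthetic Gaussian noise, and vary the SNR per training example.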

Keywords:
Computer science, latency (audio), speech recognition, mixed reality, sound quality, face, synchronization, visualization, noise (video), human-computer interaction, multimedia, artificial intelligence, virtual reality, telecommunications, image

Metrics

Cited by: 2
FWCI (Field-Weighted Citation Impact): 0.36
References: 51
Citation Normalized Percentile: 0.55


Topics

Face recognition and analysis
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Tactile and Sensory Interactions
Life Sciences →  Neuroscience →  Cognitive Neuroscience
Speech and Audio Processing
Physical Sciences →  Computer Science →  Signal Processing