A machine's ability to understand human speech from visual input is crucial for efficient communication, yet disentangling the semantics of speech from facial appearance remains challenging. This article presents a taxonomy of 3D talking human face methods, categorizing them into GAN-based, NeRF-based, and DLNN-based approaches. Research on mixed-reality telepresence now focuses on talking 3D faces that synthesize natural human faces in response to text or audio input. Audio-visual datasets support training algorithms across different languages and enable speech recognition. Handling noise in audio data is vital for robust performance, using techniques such as integrating DeepSpeech features or augmenting training data with added noise. Latency optimization enhances the user experience, and careful technique selection keeps latency low. Quantitative and qualitative evaluation methods measure synchronization, face quality, and comparative performance. Talking 3D faces hold strong potential for advancing mixed-reality communication, requiring careful consideration of audio-visual datasets, noise reduction, latency, and evaluation techniques.
Haozhe Wu, Jia Jia, Haoyu Wang, Yishun Dou, Chao Duan, Qingshan Deng
Rongliang Wu, Yingchen Yu, Fangneng Zhan, Jiahui Zhang, Xiaoqin Zhang, Shijian Lu
Yifan Xu, Sirui Zhao, Shifeng Liu, Tong Xu, Enhong Chen