JOURNAL ARTICLE

Efficient Emotional Adaptation for Audio-Driven Talking-Head Generation

Abstract

Audio-driven talking-head synthesis is a popular research topic for virtual-human applications. However, existing methods are inflexible and inefficient: they require expensive end-to-end training to transfer emotions from guidance videos to talking-head predictions. In this work, we propose the Emotional Adaptation for Audio-driven Talking-head (EAT) method, which turns emotion-agnostic talking-head models into emotion-controllable ones through cost-effective, parameter-efficient adaptations. Our approach starts from a pretrained emotion-agnostic talking-head transformer and introduces three lightweight adaptations (Deep Emotional Prompts, an Emotional Deformation Network, and an Emotional Adaptation Module) that act from different perspectives to enable precise and realistic emotion control. Experiments demonstrate that our approach achieves state-of-the-art performance on widely used benchmarks, including LRW and MEAD. Moreover, our parameter-efficient adaptations generalize remarkably well, even when emotional training videos are scarce or unavailable. Project website: https://yuangan.github.io/eat/
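The abstract's central idea, steering a frozen emotion-agnostic transformer with small learnable adaptations instead of retraining it end to end, can be illustrated with a minimal sketch. The snippet below is not the authors' code: the class name, layer sizes, and the per-layer emotion-indexed prompt bank are hypothetical stand-ins for the paper's Deep Emotional Prompts, and the Emotional Deformation Network and Emotional Adaptation Module are omitted entirely.

```python
# Minimal sketch (assumptions, not the EAT implementation): a frozen transformer
# backbone is steered by small learnable "emotional prompt" tokens that are
# prepended to each layer's input and selected by an emotion label.

import torch
import torch.nn as nn


class PromptedTransformer(nn.Module):
    """Frozen encoder plus per-layer, emotion-indexed prompts (hypothetical design)."""

    def __init__(self, d_model: int = 256, n_layers: int = 4,
                 n_heads: int = 4, n_prompts: int = 4, n_emotions: int = 8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers))
        # Freeze the backbone; in EAT this would be the pretrained
        # emotion-agnostic talking-head transformer.
        for p in self.layers.parameters():
            p.requires_grad = False
        # One small prompt bank per layer, indexed by emotion label.
        # These prompts are the only trainable parameters.
        self.prompts = nn.Parameter(
            torch.zeros(n_layers, n_emotions, n_prompts, d_model))
        nn.init.normal_(self.prompts, std=0.02)
        self.n_prompts = n_prompts

    def forward(self, x: torch.Tensor, emotion: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) features; emotion: (batch,) integer labels.
        for i, layer in enumerate(self.layers):
            p = self.prompts[i, emotion]          # (batch, n_prompts, d_model)
            x = layer(torch.cat([p, x], dim=1))   # prepend prompts, run layer
            x = x[:, self.n_prompts:]             # drop the prompt positions
        return x


if __name__ == "__main__":
    model = PromptedTransformer()
    feats = torch.randn(2, 10, 256)               # e.g. audio-derived features
    out = model(feats, torch.tensor([0, 3]))      # two different emotion labels
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(out.shape, f"trainable params: {trainable}")
```

Printing the trainable parameter count makes the "parameter-efficient" claim concrete: only the prompt bank is optimized, a tiny fraction of the frozen backbone, which is why such adaptations can be trained even when emotional videos are scarce.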

Keywords:
Computer science, Adaptation (eye), Inefficiency, Generalization, Human–computer interaction, Transformer, Multimedia, Artificial intelligence, Speech recognition, Psychology

Metrics

Cited by: 41
FWCI (Field-Weighted Citation Impact): 7.46
References: 69
Citation Normalized Percentile: 0.97 (in the top 1%)

Topics

Generative Adversarial Networks and Image Synthesis (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Speech and Audio Processing (Physical Sciences → Computer Science → Signal Processing)
Human Pose and Action Recognition (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)

Related Documents

BOOK-CHAPTER

EmoTalker: Audio Driven Emotion Aware Talking Head Generation

Xiaoqian Shen, Faizan Farooq Khan, Mohamed Elhoseiny

Lecture Notes in Computer Science, 2024, pp. 131-147
JOURNAL ARTICLE

Audio-Semantic Enhanced Pose-Driven Talking Head Generation

Meng Liu, Da Li, Yongqiang Li, Xuemeng Song, Liqiang Nie

IEEE Transactions on Circuits and Systems for Video Technology, 2024, Vol. 34(11), pp. 11056-11069