Advances in computer processing power and emerging algorithms are enabling new ways of envisioning Human-Computer Interaction. This paper focuses on the development of a computing algorithm that uses audio and visual sensors to detect and track a user's affective state in order to aid computer decision making. Using our Multi-stream Fused Hidden Markov Model (MFHMM), we analyzed coupled audio and visual streams to detect 11 cognitive/emotive states. The MFHMM allows an optimal connection to be built among multiple streams according to the maximum entropy principle and the maximum mutual information criterion. Person-independent experimental results from 20 subjects in 660 sequences show that the MFHMM approach achieves an accuracy of 80.61%, outperforming face-only HMM, pitch-only HMM, energy-only HMM, and independent HMM fusion.
Zhihong Zeng, Jilin Tu, Ming Liu, Thomas S. Huang
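As a rough illustration of the kind of stream fusion the abstract describes, the sketch below scores an audio/visual observation pair against per-state HMM pairs and combines the per-stream log-likelihoods with a fixed weight. Note this corresponds to the "independent HMM fusion" baseline mentioned above, not to the MFHMM itself, which additionally couples the hidden states of the two streams; all names here (forward_loglik, classify, w_audio) are illustrative, not from the paper.

```python
import numpy as np

def forward_loglik(log_pi, log_A, log_B, obs):
    """Log-likelihood of a discrete observation sequence under one HMM,
    computed with the forward algorithm in log space for stability."""
    # log_pi: (N,) initial state log-probs; log_A: (N, N) transition
    # log-probs with A[i, j] = log P(state j | state i); log_B: (N, M)
    # emission log-probs; obs: sequence of symbol indices in [0, M).
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify(audio_obs, visual_obs, models, w_audio=0.5):
    """Pick the affective state whose (audio HMM, visual HMM) pair gives
    the highest weighted sum of per-stream log-likelihoods. This is late
    (independent) fusion; the MFHMM's coupling of the two streams'
    hidden states is not modeled here."""
    best_label, best_score = None, -np.inf
    for label, (audio_hmm, visual_hmm) in models.items():
        score = (w_audio * forward_loglik(*audio_hmm, audio_obs)
                 + (1.0 - w_audio) * forward_loglik(*visual_hmm, visual_obs))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Working in log space avoids underflow over long sequences, and the single w_audio weight stands in for per-stream reliability; the MFHMM instead learns the inter-stream connection according to the maximum entropy principle and the maximum mutual information criterion, as stated in the abstract.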