JOURNAL ARTICLE

Multi-Platform Intelligent System for Multimodal Human-Computer Interaction

Abstract

We present a flexible human-robot interaction architecture that incorporates emotions and moods to provide a natural experience for humans. To determine the emotional state of the user, information representing eye gaze and facial expression is combined with other contextual information such as whether the user is asking questions or has been quiet for some time. Subsequently, an appropriate robot behaviour is selected from a multi-path scenario. This architecture can be easily adapted to interactions with non-embodied robots such as avatars on a mobile device or a PC. We present the outcome of evaluating an implementation of our proposed architecture as a whole, and also of its modules for detecting emotions and questions. Results are promising and provide a basis for further development.
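The fusion step the abstract describes (blending facial expression and gaze with contextual cues such as questions or prolonged silence, then selecting a behaviour from a multi-path scenario) can be sketched roughly as follows. This is a minimal illustrative sketch only; all class names, weights, and thresholds are assumptions for exposition, not the architecture's actual implementation.

```python
# Hypothetical sketch of the multimodal fusion and behaviour-selection
# step described in the abstract. Weights and thresholds are invented
# for illustration; the paper's architecture is not reproduced here.
from dataclasses import dataclass


@dataclass
class Observation:
    expression_valence: float  # -1 (negative) .. 1 (positive), from facial expression
    gaze_on_robot: float       # 0 .. 1, fraction of time gaze is on the robot
    asked_question: bool       # contextual cue: user asked a question
    silent_seconds: float      # contextual cue: how long the user has been quiet


def estimate_engagement(obs: Observation) -> float:
    """Blend the modalities into a single engagement score in [0, 1]."""
    score = 0.5 * obs.gaze_on_robot + 0.3 * (obs.expression_valence + 1) / 2
    if obs.asked_question:
        score += 0.2          # an active question signals engagement
    if obs.silent_seconds > 10:
        score -= 0.2          # prolonged silence signals disengagement
    return max(0.0, min(1.0, score))


def select_behaviour(obs: Observation) -> str:
    """Pick one path of a multi-path scenario from the fused state."""
    if obs.asked_question:
        return "answer_question"
    if estimate_engagement(obs) < 0.3:
        return "re_engage_user"
    return "continue_dialogue"
```

Because the selection depends only on the fused score and a few discrete cues, the same logic can drive either an embodied robot or an on-screen avatar, as the abstract suggests.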

Keywords:
Human–computer interaction; Computer science; Gaze; Facial expression; Architecture; Embodied cognition; Human–robot interaction; Embodied agent; Robot; Artificial intelligence

Metrics

Cited by: 12
FWCI (Field-Weighted Citation Impact): 2.18
References: 25
Citation Normalized Percentile: 0.85

Topics

Social Robot Interaction and HRI (Social Sciences → Psychology → Social Psychology)
Robotics and Automated Systems (Physical Sciences → Engineering → Control and Systems Engineering)
Speech and Dialogue Systems (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

JOURNAL ARTICLE

Multimodal AI Assistant for Intelligent Human-Computer Interaction

Journal: International Research Journal of Modernization in Engineering Technology and Science
Year: 2025
BOOK-CHAPTER

Multimodal Human-Computer Interaction

Matthew Turk

Year: 2005
Pages: 269-283