Book Chapter

Towards a Theoretical Framework for Learning Multi-modal Patterns for Embodied Agents

Abstract

Multi-modality is a fundamental feature of biological systems, allowing them to achieve robust understanding while coping with uncertainty. Relatively recent studies have shown that multi-modal learning is a potentially effective add-on for artificial systems, allowing the transfer of information from one modality to another. In this paper we propose a general architecture for jointly learning visual and motion patterns: by means of regression theory we model a mapping between the two sensory modalities, improving the performance of artificial perceptive systems. We present promising results on a case study of grasp classification in a controlled setting and discuss future developments. © 2009 Springer Berlin Heidelberg.
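The core idea in the abstract, learning a regression map from one sensory modality to another, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data is synthetic, the feature dimensions and the use of ridge regression as the regressor are assumptions made here for clarity.

```python
import numpy as np

# Hypothetical sketch: learn a linear regression map from "motion"
# features to "visual" features, so that one modality can predict
# the other. All data below is synthetic.

rng = np.random.default_rng(0)

n, d_motion, d_visual = 200, 8, 5
W_true = rng.normal(size=(d_motion, d_visual))   # unknown ground-truth map

X_motion = rng.normal(size=(n, d_motion))                         # motion descriptors
Y_visual = X_motion @ W_true + 0.1 * rng.normal(size=(n, d_visual))  # noisy visual features

# Ridge regression in closed form: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1e-2
W = np.linalg.solve(X_motion.T @ X_motion + lam * np.eye(d_motion),
                    X_motion.T @ Y_visual)

# At test time the learned map predicts the visual features from motion alone;
# the relative error measures how well the cross-modal mapping was recovered.
Y_pred = X_motion @ W
err = np.linalg.norm(Y_pred - Y_visual) / np.linalg.norm(Y_visual)
print(f"relative reconstruction error: {err:.3f}")
```

In the paper's setting, the predicted features of the missing modality would then feed a downstream grasp classifier; here the relative error simply verifies that the regression map was learned.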

Keywords:
Computer science; GRASP; Modalities; Embodied cognition; Modal; Artificial intelligence; Robustness (evolution); Modality (human–computer interaction); Machine learning; Human–computer interaction; Software engineering

Metrics

Cited by: 5
FWCI (Field-Weighted Citation Impact): 2.63
References: 24
Citation Normalized Percentile: 0.90

Topics

Robot Manipulation and Learning
Physical Sciences → Engineering → Control and Systems Engineering
Human Pose and Action Recognition
Physical Sciences → Computer Science → Computer Vision and Pattern Recognition
Action Observation and Synchronization
Social Sciences → Psychology → Social Psychology