
Multimodal Cues

Abstract

There are numerous challenges to accessing user assistance information in mobile and ubiquitous computing scenarios: there may be little or no display real estate on which to present information visually, the user’s eyes may be busy with another task (e.g., driving), and it can be difficult to read text while moving. Speech, together with non-speech sounds and haptic feedback, can make assistance information available to users in these situations. Non-speech sounds and haptic feedback can cue information that is about to be presented via speech, ensuring that the listener is prepared and that leading words are not missed. In this chapter, we report on two studies that examine user perception of the duration of the pause between a cue (a non-speech sound, a haptic effect, or a combined non-speech sound plus haptic effect) and the subsequent delivery of assistance information using speech. Based on these user studies, we recommend cue pause intervals in the range of 600 ms to 800 ms.
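The cue-then-pause-then-speak pattern described above can be sketched as follows. This is a minimal illustration only; `play_cue` and `speak` are hypothetical stand-ins for a platform's audio/haptic and text-to-speech calls, and the 700 ms default simply sits at the midpoint of the chapter's recommended 600–800 ms range.

```python
import time

# Midpoint of the 600-800 ms cue pause range recommended in this chapter.
CUE_PAUSE_MS = 700

def play_cue():
    """Hypothetical stand-in for a non-speech sound and/or haptic cue."""
    print("cue")

def speak(message):
    """Hypothetical stand-in for a text-to-speech call."""
    print(message)

def present_assistance(message, pause_ms=CUE_PAUSE_MS):
    """Cue the user, wait the recommended interval, then speak the message."""
    play_cue()
    time.sleep(pause_ms / 1000.0)
    speak(message)

present_assistance("Turn left at the next junction.")
```

In a real system the cue and speech calls would be replaced by the platform's own APIs; the point of the sketch is only the ordering and the deliberate pause between cue and speech.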

Keywords:
Haptic technology, Computer science, Human–computer interaction, Perception, Task (project management), Variety (cybernetics), Speech recognition, Multimodal interaction, Multimedia, Artificial intelligence, Engineering, Psychology

