JOURNAL ARTICLE

Identifying emotion in speech prosody using acoustical cues of harmony

Abstract

We have studied the prosody of emotional speech using a psychoacoustical model of musical harmony (designed to explain the basic facts of the perception of pitch combinations: interval consonance/dissonance and chordal harmony/tension). For any voiced utterance, the model provides three quasi-musical measures: dissonance, tension, and the harmonic "modality" of the pitches used. Modality is the most interesting, as it relates to the major and minor modes of traditional harmony theory and their characteristic positive and negative affect. In a study of emotional speech using 216 utterances, factor analysis showed that these measures are distinct from those obtained from basic statistics on the fundamental frequency of the voice (mean F0, range, rate of change, etc.). Moreover, there was a significant correlation between the major/minor modality measure and the positive/negative affect of the utterance. We argue that, in addition to the traditional acoustical measures, a measure of multiple-pitch combinations, i.e., harmony, is essential for determining the affective character of the tone of voice in speech.
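The "basic statistics on the fundamental frequency" that the abstract contrasts with the harmony measures (mean F0, range, rate of change) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function name, the frame period, and the use of zero to mark unvoiced frames are all assumptions.

```python
import statistics

def f0_summary(f0_track, frame_period_s=0.01):
    """Summarize a per-frame pitch track (F0 in Hz; 0.0 marks unvoiced frames).

    Returns the three basic F0 statistics named in the abstract:
    mean F0, F0 range, and mean absolute rate of change (Hz/s).
    """
    voiced = [f for f in f0_track if f > 0]  # keep voiced frames only
    mean_f0 = statistics.mean(voiced)
    f0_range = max(voiced) - min(voiced)
    # mean absolute frame-to-frame change, converted to Hz per second
    diffs = [abs(b - a) for a, b in zip(voiced, voiced[1:])]
    rate = statistics.mean(diffs) / frame_period_s
    return {"mean_f0": mean_f0, "range": f0_range, "rate_of_change": rate}

# Toy pitch track for one short utterance (values in Hz)
track = [0.0, 180.0, 185.0, 190.0, 200.0, 0.0, 210.0]
print(f0_summary(track))  # {'mean_f0': 193.0, 'range': 30.0, 'rate_of_change': 750.0}
```

The harmony measures themselves (dissonance, tension, modality) operate on pitch *combinations* rather than on such single-track statistics, which is why the factor analysis can separate the two families of features.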

Keywords: Prosody, Harmony (color), Speech recognition, Computer science, Acoustics, Physics

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 0.15
Refs: 10
Citation Normalized Percentile: 0.42


Topics

Neuroscience and Music Perception (Life Sciences → Neuroscience → Cognitive Neuroscience)
Multisensory perception and integration (Social Sciences → Psychology → Experimental and Cognitive Psychology)
Hearing Loss and Rehabilitation (Life Sciences → Neuroscience → Cognitive Neuroscience)

Related Documents

JOURNAL ARTICLE

Psychoacoustic cues to emotion in speech prosody and music

Eduardo Coutinho, Nicola Dibben

Journal: Cognition & Emotion  Year: 2012  Vol: 27 (4)  Pages: 658-684
JOURNAL ARTICLE

Identifying speakers using their emotion cues

Ismail Shahin

Journal: International Journal of Speech Technology  Year: 2011  Vol: 14 (2)  Pages: 89-98
BOOK-CHAPTER

Telegram Bot for Emotion Recognition Using Acoustic Cues and Prosody

Ishita Nag, Salman Azeez Syed, Shreya Basu, Suvra Shaw, Barnali Gupta Banik

Communications in Computer and Information Science  Year: 2022  Pages: 389-402
JOURNAL ARTICLE

Emotion recognition from speech signals using new harmony features

Bin Yang, Marko Lugger

Journal: Signal Processing  Year: 2009  Vol: 90 (5)  Pages: 1415-1423
JOURNAL ARTICLE

Development of Speech Emotion Recognition Algorithm using MFCC and Prosody

Hyejin Koo, Soyeong Jeong, Sungjae Yoon, Wonjong Kim

Journal: 2020 International Conference on Electronics, Information, and Communication (ICEIC)  Year: 2020  Pages: 1-4
© 2026 ScienceGate Book Chapters — All rights reserved.