
Emotion Recognition using Speech Features
(English)
SpringerBriefs in Speech Technology
K. Sreenivasa Rao & Shashidhar G. Koolagudi

Print on demand: this item is printed for you!

44,95 €

incl. VAT · Free shipping
This product is printed on demand; delivery time approx. 14 working days

Product description

Discusses complete state-of-the-art features, models and databases in the context of emotion recognition

Explores implicit and explicit excitation source features for discriminating emotions (a minimal extraction sketch follows after this list)

Proposes pitch-synchronous and sub-syllabic spectral features, in addition to conventional spectral features, for characterizing emotions
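
Excitation source features of the kind mentioned above are commonly derived from the linear prediction (LP) residual. The following is a minimal illustrative sketch, not the book's exact procedure; the file name, sampling rate, and LP order are assumptions:

import numpy as np
import librosa
from scipy.signal import lfilter
from scipy.stats import skew, kurtosis

# Load an utterance (file name and 16 kHz rate are illustrative assumptions).
y, sr = librosa.load("utterance.wav", sr=16000)

# LP analysis: a 16th-order model is a common choice for 16 kHz speech.
a = librosa.lpc(y, order=16)     # coefficients of A(z), with a[0] == 1

# Inverse filtering with A(z) yields the LP residual, which approximates
# the excitation source signal referred to in the bullet above.
residual = lfilter(a, [1.0], y)

# Emotion-discriminative cues can then be computed from the residual,
# e.g. simple higher-order statistics such as skewness and kurtosis.
print(skew(residual), kurtosis(residual))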

"Emotion Recognition Using Speech Features” provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions.  The content of this book is important for designing and developing  natural and sophisticated speech systems. In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features includes discussion of:- Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; - Exploiting complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance; - Proposed multi-stage and hybrid models for improving the emotion recognition performance. This brief is for researchers working in areas related to speech-based products such as mobile phone manufacturing companies, automobile companies, and entertainment products as well as researchers involved in basic and applied speech processing research.
Contents

1 Introduction
1.1 Emotion: Psychological perspective
1.2 Emotion: Speech signal perspective
1.2.1 Speech production mechanism
1.2.2 Source features
1.2.3 System features
1.2.4 Prosodic features
1.3 Emotional speech databases
1.4 Applications of speech emotion recognition
1.5 Issues in speech emotion recognition
1.6 Objectives and scope of the work
1.7 Main highlights of research investigations
1.8 Brief overview of contributions to this book
1.8.1 Emotion recognition using excitation source information
1.8.2 Emotion recognition using vocal tract information
1.8.3 Emotion recognition using prosodic information
1.9 Organization of the book

2 Speech Emotion Recognition: A Review
2.1 Introduction
2.2 Emotional speech corpora: A review
2.3 Excitation source features: A review
2.4 Vocal tract system features: A review
2.5 Prosodic features: A review
2.6 Classification models
2.7 Motivation for the present work
2.8 Summary of the literature and scope for the present work

3 Emotion Recognition using Excitation Source Information
3.1 Introduction
3.2 Motivation
3.3 Emotional speech corpora
3.3.1 Indian Institute of Technology Kharagpur Simulated Emotional Speech Corpus: IITKGP-SESC
3.3.2 Berlin Emotional Speech Database: Emo-DB
3.4 Excitation source features for emotion recognition
3.4.1 Higher-order relations among LP residual samples
3.4.2 Phase of LP residual signal
3.4.3 Parameters of the instants of glottal closure (epoch parameters)
3.4.4 Dynamics of epoch parameters at syllable level
3.4.5 Dynamics of epoch parameters at utterance level
3.4.6 Glottal pulse parameters
3.5 Classification models
3.5.1 Auto-associative neural networks
3.5.2 Support vector machines
3.6 Results and discussion
3.7 Summary

4 Emotion Recognition using Vocal Tract Information
4.1 Introduction
4.2 Feature extraction
4.2.1 Linear prediction cepstral coefficients (LPCCs)
4.2.2 Mel frequency cepstral coefficients (MFCCs)
4.2.3 Formant features
4.3 Classifiers
4.3.1 Gaussian mixture models (GMM)
4.4 Results and discussion
4.5 Summary

5 Emotion Recognition using Prosodic Information
5.1 Introduction
5.2 Prosodic features: importance in emotion recognition
5.3 Motivation
5.4 Extraction of global and local prosodic features
5.5 Results and discussion
5.6 Summary

6 Summary and Conclusions
6.1 Summary of the present work
6.2 Contributions of the present work
6.3 Conclusions from the present work
6.4 Scope for future work

A Linear Prediction Analysis of Speech
A.1 The Prediction Error Signal
A.2 Estimation of Linear Prediction Coefficients

B MFCC Features

C Gaussian Mixture Model (GMM)
C.1 Training the GMMs
C.1.1 Expectation Maximization (EM) Algorithm
C.1.2 Maximum a posteriori (MAP) Adaptation
C.2 Testing

References

"Emotion Recognition Using Speech Features” covers emotion-specific features present in speech and discussion of suitable models for capturing emotion-specific information for distinguishing different emotions.  The content of this book is important for designing and developing  natural and sophisticated speech systems.

Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors explain how to combine evidence derived from various features and models. The acquired emotion-specific knowledge is also useful for synthesizing emotions.

Discussion includes global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; the use of complementary evidence from excitation source, vocal tract system and prosodic features to enhance emotion recognition performance; and proposed multi-stage and hybrid models for further improving recognition performance.
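
To give a concrete sense of what utterance-level ("global") prosodic features look like in practice, here is a minimal sketch using pYIN pitch tracking and RMS energy; the chosen statistics, the F0 search range, and all other parameter values are illustrative assumptions, not the book's exact feature set:

import numpy as np
import librosa

# Load an utterance (file name and sampling rate are illustrative).
y, sr = librosa.load("utterance.wav", sr=16000)

# Frame-wise F0 via pYIN; the 60-400 Hz search range is an assumption.
f0, voiced, _ = librosa.pyin(y, fmin=60.0, fmax=400.0, sr=sr)
f0_voiced = f0[voiced]            # keep pitch values from voiced frames only

# Frame-wise energy via RMS.
rms = librosa.feature.rms(y=y)[0]

# A small "global" prosodic vector: pitch level/variability, energy
# level/variability, and utterance duration in seconds.
global_prosody = np.array([
    np.nanmean(f0_voiced), np.nanstd(f0_voiced),
    rms.mean(), rms.std(),
    len(y) / sr,
])

Local prosodic features would apply the same statistics over shorter stretches (e.g. per syllable or word) rather than over the whole utterance.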


"Emotion Recognition Using Speech Features" provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions. The content of this book is important for designing and developing natural and sophisticated speech systems.In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features includes discussion of:- Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information;- Exploiting complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance the emotion recognition performance;- Proposed multi-stage and hybrid models for improving the emotion recognition performance.This brief is for researchers working in areas related to speech-based products such as mobile phone manufacturing companies, automobile companies, and entertainment products as well as researchers involved in basic and applied speech processing research.
Introduction.- Speech Emotion Recognition: A Review.- Emotion Recognition Using Excitation Source Information.- Emotion Recognition Using Vocal Tract Information.- Emotion Recognition Using Prosodic Information.- Summary and Conclusions.- Linear Prediction Analysis of Speech.- MFCC Features.- Gaussian Mixture Model (GMM)
K. Sreenivasa Rao is at the Indian Institute of Technology, Kharagpur, India.
Shashidhar G, Koolagudi is at the Graphic Era University, Dehradun, India.

About the authors



K. Sreenivasa Rao is at the Indian Institute of Technology, Kharagpur, India.
Shashidhar G. Koolagudi is at the Graphic Era University, Dehradun, India.


Table of contents



Introduction.- Speech Emotion Recognition: A Review.- Emotion Recognition Using Excitation Source Information.- Emotion Recognition Using Vocal Tract Information.- Emotion Recognition Using Prosodic Information.- Summary and Conclusions.- Linear Prediction Analysis of Speech.- MFCC Features.- Gaussian Mixture Model (GMM)


Back cover

"Emotion Recognition Using Speech Features" provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions.  The content of this book is important for designing and developing  natural and sophisticated speech systems.
In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features includes discussion of:
- Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information;
- Exploiting complementary evidence obtained from excitation sources, vocal tract systems and prosodic features in order to enhance emotion recognition performance;
- Proposed multi-stage and hybrid models for improving emotion recognition performance.
This Brief is intended for researchers working on speech-based products, such as those at mobile phone, automobile, and entertainment companies, as well as researchers involved in basic and applied speech processing research.
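
As a rough illustration of the statistical-modeling approach the back cover alludes to, the following sketch fits one Gaussian mixture model per emotion on MFCC frames and classifies an utterance by the highest average log-likelihood. The emotion set, corpus layout, MFCC dimensionality, and mixture count are all assumptions made for the example:

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

EMOTIONS = ["anger", "happiness", "neutral", "sadness"]  # illustrative set

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Return an utterance's MFCCs as a (frames, n_mfcc) matrix."""
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_models(files_by_emotion, n_components=16):
    """Fit one GMM per emotion on pooled MFCC frames (EM training)."""
    models = {}
    for emo, paths in files_by_emotion.items():
        X = np.vstack([mfcc_frames(p) for p in paths])
        models[emo] = GaussianMixture(n_components=n_components,
                                      covariance_type="diag").fit(X)
    return models

def classify(path, models):
    """Pick the emotion whose GMM gives the highest mean log-likelihood."""
    X = mfcc_frames(path)
    scores = {emo: gmm.score(X) for emo, gmm in models.items()}
    return max(scores, key=scores.get)

Score-level fusion with separate excitation-source and prosodic classifiers, as the bullets above suggest, could then be layered on top by combining the per-emotion log-likelihoods across the three feature streams.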





Includes supplementary material: sn.pub/extras


