CUED Publications database

Comparison of Gender- and Speaker-Adaptive Emotion Recognition

Sidorov, M. and Ultes, S. and Schmitt, A. (2014) Comparison of Gender- and Speaker-Adaptive Emotion Recognition. In: UNSPECIFIED pp. 3476-3480.

Full text not available from this repository.

Abstract

Deriving the emotion of a human speaker is a hard task, especially if only the audio stream is taken into account. While state-of-the-art approaches already provide good results, adaptive methods have been proposed to further improve recognition accuracy. A recent approach adds characteristics of the speaker, e.g., the speaker's gender. In this contribution, we argue that adding information unique to each speaker, obtained using speaker identification techniques, improves emotion recognition simply by appending this additional information to the feature vector of the statistical classification algorithm. Moreover, we compare this approach to emotion recognition using only the speaker's gender, which is a non-unique speaker attribute. We evaluate adaptive emotion recognition with both gender and speaker information on four corpora of different languages containing acted and non-acted speech. The final results show that adding speaker information significantly outperforms both adding gender information and solely using a generic speaker-independent approach.
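The core idea described in the abstract, appending speaker identity to the classifier's acoustic feature vector, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name, the one-hot encoding choice, and the toy feature values are assumptions for demonstration.

```python
import numpy as np

def augment_with_speaker(features, speaker_id, num_speakers):
    """Append a one-hot speaker encoding to an acoustic feature vector.

    Hypothetical sketch of the paper's approach: the speaker's identity
    (e.g. the output of a speaker-identification front end) is added as
    extra dimensions to the input of the emotion classifier.
    """
    one_hot = np.zeros(num_speakers)
    one_hot[speaker_id] = 1.0
    return np.concatenate([features, one_hot])

# Toy example: a 3-dimensional acoustic feature vector (e.g. pitch,
# energy, and an MFCC statistic) for speaker 1 out of 4 known speakers.
acoustic = np.array([0.2, -1.3, 0.7])
augmented = augment_with_speaker(acoustic, speaker_id=1, num_speakers=4)
print(augmented.shape)  # 3 acoustic dims + 4 speaker dims = (7,)
```

Gender adaptation, the weaker baseline compared in the paper, would correspond to appending a single binary attribute instead of a per-speaker encoding, so many speakers share the same augmented representation.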

Item Type: Conference or Workshop Item (UNSPECIFIED)
Uncontrolled Keywords: adaptive emotion recognition; speaker identification; gender recognition
Subjects: UNSPECIFIED
Divisions: Div F > Machine Intelligence
Depositing User: Cron Job
Date Deposited: 17 Jul 2017 20:02
Last Modified: 22 May 2018 08:05
DOI: