Practical speech emotion recognition based on online learning: from acted data to elicited data (Q460386)

From MaRDI portal

scientific article; zbMATH DE number 6354585

      Statements

      Practical speech emotion recognition based on online learning: from acted data to elicited data (English)
      publication date: 13 October 2014
      Summary: We study cross-database speech emotion recognition based on online learning. How to apply a classifier trained on acted data to naturalistic data, such as elicited data, remains a major challenge in today's speech emotion recognition systems. We introduce three different data sources: first, a basic speech emotion dataset collected from acted speech by professional actors and actresses; second, a speaker-independent dataset containing a large number of speakers; third, an elicited speech dataset collected from a cognitive task. Acoustic features are extracted from the emotional utterances and evaluated using the maximal information coefficient (MIC). A baseline valence and arousal classifier is designed based on Gaussian mixture models. The online training module is implemented using AdaBoost. While the offline recognizer is trained on the acted data, the online testing data include the speaker-independent data and the elicited data. Experimental results show that by introducing the online learning module, our speech emotion recognition system can be better adapted to new data, which is an important characteristic for real-world applications.
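
The entry does not reproduce the authors' code. As a rough illustration of the MIC-based feature evaluation step mentioned in the summary, the sketch below ranks acoustic features by their MIC score against emotion labels. It assumes the minepy package (one common implementation of the maximal information coefficient) and placeholder arrays `features` and `labels`; neither the package choice nor the variable names come from the paper.

```python
# Hypothetical sketch of MIC-based feature screening (not the authors' code).
# Assumes: minepy for the maximal information coefficient, a feature matrix
# of acoustic descriptors, and one emotion label per utterance.
import numpy as np
from minepy import MINE

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 10))   # placeholder: 200 utterances x 10 acoustic features
labels = rng.integers(0, 4, size=200)   # placeholder: 4 valence/arousal quadrants

mine = MINE(alpha=0.6, c=15)            # default MINE parameters from the minepy docs
scores = []
for j in range(features.shape[1]):
    mine.compute_score(features[:, j], labels.astype(float))
    scores.append(mine.mic())

# Keep the features whose MIC with the emotion label is highest.
ranked = np.argsort(scores)[::-1]
print("MIC scores:", np.round(scores, 3))
print("top features:", ranked[:5])
```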
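The pairing of an offline GMM baseline with AdaBoost-based online updating can likewise be sketched as follows. This is a minimal illustration using scikit-learn under simplifying assumptions (binary valence labels, refitting the booster on an accumulated pool as a stand-in for an incremental boosting update); the datasets and the exact update rule are placeholders, not the paper's implementation.

```python
# Minimal sketch of the offline-GMM / online-AdaBoost split described in the
# summary (illustration only; the paper's actual update rule may differ).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
X_acted = rng.normal(size=(300, 5))          # placeholder acted-speech features
y_acted = rng.integers(0, 2, size=300)       # placeholder binary valence labels

# Offline baseline: one Gaussian mixture per class; classify an utterance by
# the mixture under which it has the highest log-likelihood.
gmms = {c: GaussianMixture(n_components=4, random_state=0).fit(X_acted[y_acted == c])
        for c in (0, 1)}

def gmm_predict(X):
    loglik = np.column_stack([gmms[c].score_samples(X) for c in (0, 1)])
    return loglik.argmax(axis=1)

# Online module: as labeled elicited / speaker-independent data arrives,
# refit an AdaBoost ensemble on the accumulated pool.
X_pool, y_pool = X_acted, y_acted
booster = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_pool, y_pool)

for _ in range(3):                            # three simulated online batches
    X_new = rng.normal(loc=0.5, size=(40, 5)) # placeholder elicited-speech batch
    y_new = rng.integers(0, 2, size=40)
    X_pool = np.vstack([X_pool, X_new])
    y_pool = np.concatenate([y_pool, y_new])
    booster.fit(X_pool, y_pool)               # refit on the grown pool

print("GMM baseline acc on new batch:", (gmm_predict(X_new) == y_new).mean())
print("AdaBoost acc on new batch:   ", (booster.predict(X_new) == y_new).mean())
```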

      Identifiers

      zbMATH DE number: 6354585