Practical speech emotion recognition based on online learning: from acted data to elicited data (Q460386)

From MaRDI portal
Property / full work available at URL: https://doi.org/10.1155/2013/265819
Property / OpenAlex ID: W2107081756
Property / cites work: Vocal communication of emotion: A review of research paradigms
Property / cites work: Detecting Novel Associations in Large Data Sets
Property / cites work: A decision-theoretic generalization of on-line learning and an application to boosting


Language: English
Label: Practical speech emotion recognition based on online learning: from acted data to elicited data
Description: scientific article

    Statements

    Practical speech emotion recognition based on online learning: from acted data to elicited data (English)
    Publication date: 13 October 2014
    Summary: We study cross-database speech emotion recognition based on online learning. How to apply a classifier trained on acted data to naturalistic data, such as elicited data, remains a major challenge for today's speech emotion recognition systems. We introduce three different data sources: first, a basic speech emotion dataset collected from acted speech by professional actors and actresses; second, a speaker-independent dataset containing a large number of speakers; and third, an elicited speech dataset collected from a cognitive task. Acoustic features are extracted from the emotional utterances and evaluated using the maximal information coefficient (MIC). A baseline valence and arousal classifier is designed based on Gaussian mixture models. The online training module is implemented using AdaBoost. While the offline recognizer is trained on the acted data, the online testing data include the speaker-independent data and the elicited data. Experimental results show that by introducing the online learning module, our speech emotion recognition system adapts better to new data, which is an important characteristic in real-world applications.
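    The summary outlines a three-stage pipeline: MIC-based feature evaluation, a GMM valence/arousal baseline trained offline on acted data, and AdaBoost-based adaptation to new data. The Python sketch below illustrates that flow under loud assumptions: it is not the authors' code, the data are random placeholders, scikit-learn's mutual_info_classif stands in for a true MIC scorer (e.g. the minepy implementation of Reshef et al.), and AdaBoostClassifier is a generic stand-in for the paper's online boosting module.

```python
# Hypothetical sketch of the pipeline described in the summary: rank
# acoustic features, train an offline GMM valence baseline on acted
# data, then adapt to new (elicited) data with AdaBoost.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Placeholder data: rows are utterances, columns are acoustic features
# (e.g. pitch, energy, MFCC statistics); labels are valence classes.
X_acted = rng.normal(size=(200, 40))      # acted (offline) training data
y_acted = rng.integers(0, 2, size=200)    # 0 = negative, 1 = positive valence
X_elicited = rng.normal(size=(100, 40))   # elicited (online) data
y_elicited = rng.integers(0, 2, size=100)

# 1. Feature evaluation. The paper uses the maximal information
#    coefficient (MIC); mutual information is used here as a stand-in.
scores = mutual_info_classif(X_acted, y_acted, random_state=0)
top = np.argsort(scores)[::-1][:20]       # keep the 20 best-ranked features

# 2. Offline baseline: one Gaussian mixture model per valence class,
#    classifying by maximum per-class log-likelihood.
gmms = {c: GaussianMixture(n_components=4, random_state=0)
            .fit(X_acted[y_acted == c][:, top])
        for c in (0, 1)}

def gmm_predict(X):
    ll = np.column_stack([gmms[c].score_samples(X) for c in (0, 1)])
    return ll.argmax(axis=1)

# 3. Adaptation: boost weak learners on the new (elicited) data.
booster = AdaBoostClassifier(n_estimators=50, random_state=0)
booster.fit(X_elicited[:, top], y_elicited)

print("GMM baseline on elicited data:",
      (gmm_predict(X_elicited[:, top]) == y_elicited).mean())
print("After AdaBoost adaptation:",
      booster.score(X_elicited[:, top], y_elicited))
```

    Classifying by per-class GMM log-likelihood mirrors the baseline the summary describes; a real system would extract pitch, energy, and spectral features from speech with an acoustic toolkit rather than use random placeholder data.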