Automated Recognition of Paralinguistic Signals in Spoken Dialogue Systems: Ways of Improvement

Issue
Journal of Siberian Federal University. Mathematics & Physics. 2015, 8 (2)
Authors
Sidorov, Maxim; Schmitt, Alexander; Semenkin, Eugene S.
Contact information
Sidorov, Maxim: Institute of Communications Engineering, Ulm University, Albert-Einstein-Allee 43, 89081 Ulm, Germany; Schmitt, Alexander: Institute of Communications Engineering, Ulm University, Albert-Einstein-Allee 43, 89081 Ulm, Germany; Semenkin, Eugene S.: Institute of Computer Science and Telecommunications, Siberian State Aerospace University, Krasnoyarskiy Rabochiy 31, 660014 Krasnoyarsk, Russia
Keywords
recognition of paralinguistic signals; machine learning algorithms; speaker-adaptive emotion recognition; multimodal approach
Abstract

The ability of artificial systems to recognize paralinguistic signals, such as emotions, depression, or openness, is useful in various applications. However, the performance of such recognizers is not yet perfect. In this study we consider several directions which can significantly improve the performance of such systems. Firstly, we propose building speaker- or gender-specific emotion models, so that the emotion recognition (ER) procedure is preceded by a gender or speaker identification step; the resulting speaker or gender information is either included directly in the feature vector or used to create a separate emotion recognition model for each gender or speaker. Secondly, since feature selection is an important part of any classification problem, we propose feature selection techniques based on a genetic algorithm and on the information gain criterion; both methods achieve higher performance than baseline methods without any feature selection. Finally, we suggest analysing not only audio signals but also combined audio-visual cues. The early fusion method (feature-based fusion) is used in our investigations to combine the different modalities into a multimodal approach, which outperforms the single modalities on the considered corpora. The suggested methods have been evaluated on a number of emotional databases in three languages (English, German and Japanese), in both acted and non-acted settings. The results of the numerical experiments are also reported in the study.
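
A minimal sketch of the first direction (gender-dependent emotion modelling) in Python with scikit-learn, assuming precomputed acoustic feature matrices: a gender identifier is trained first, and a separate emotion model is trained per gender. The classifiers, matrix names, and helper functions below are illustrative assumptions, not the authors' exact configuration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def train_gender_dependent_er(X, gender, emotion):
    """Train a gender identifier plus one emotion model per gender class."""
    gender_clf = LogisticRegression(max_iter=1000).fit(X, gender)
    emotion_models = {
        g: SVC().fit(X[gender == g], emotion[gender == g])
        for g in np.unique(gender)
    }
    return gender_clf, emotion_models

def predict_emotion(x, gender_clf, emotion_models):
    """Identify the gender first, then apply that gender's emotion model."""
    x = x.reshape(1, -1)
    g = gender_clf.predict(x)[0]
    return emotion_models[g].predict(x)[0]

# The alternative mentioned in the abstract: append the identified gender
# label directly to the feature vector and train a single emotion model.
# X_augmented = np.hstack([X, gender.reshape(-1, 1)])

The same scheme extends to speaker-specific models by replacing the gender labels with speaker identities.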
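For the second direction, the sketch below shows both feature selection variants, assuming a feature matrix X and emotion labels y. The population size, mutation rate, number of generations, and inner classifier are illustrative choices, not the settings reported in the paper.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_by_information_gain(X, y, k=100):
    """Keep the k features sharing the most mutual information with y."""
    return SelectKBest(score_func=mutual_info_classif, k=k).fit_transform(X, y)

def select_by_ga(X, y, pop_size=20, generations=10, seed=0):
    """Genetic algorithm over binary feature masks; fitness is CV accuracy."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5              # random initial masks

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                   # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            children.append(child ^ (rng.random(n) < 0.01))  # bit-flip mutation
        pop = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(m) for m in pop])
    return pop[scores.argmax()]                        # best feature mask found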
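Finally, early (feature-level) fusion reduces to concatenating the per-utterance audio and visual feature vectors before classification. In this sketch the random matrices merely stand in for real acoustic and facial features; their dimensions are arbitrary.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(200, 384))   # placeholder acoustic features
X_video = rng.normal(size=(200, 50))    # placeholder visual features
emotion = rng.integers(0, 4, size=200)  # placeholder emotion labels

X_fused = np.hstack([X_audio, X_video]) # early (feature-based) fusion
clf = SVC().fit(X_fused, emotion)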

Pages
208–216
Paper at repository of SibFU
https://elib.sfu-kras.ru/handle/2311/16808