Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features

dc.contributor.authorKakuba, Samuel
dc.contributor.authorPoulose, Alwin
dc.contributor.authorHan, Dong Seog
dc.date.accessioned2023-02-01T09:00:17Z
dc.date.available2023-02-01T09:00:17Z
dc.date.issued2022
dc.description.abstractThe detection and classification of emotional states in speech involves the analysis of audio signals and text transcriptions. There are complex relationships among the features extracted at different time intervals, and these must be analyzed to infer the emotions in speech. These relationships can be represented as spatial, temporal and semantic tendency features. In addition to the emotional features present in each modality, the text modality carries semantic and grammatical tendencies of the uttered sentences. In deep learning-based models, spatial and temporal features have typically been extracted sequentially, using convolutional neural networks (CNN) followed by recurrent neural networks (RNN), an approach that may be weak not only at detecting the separate spatial-temporal feature representations but also at capturing the semantic tendencies in speech. In this paper, we propose a deep learning-based model, the concurrent spatial-temporal and grammatical (CoSTGA) model, that concurrently learns spatial, temporal and semantic representations in the local feature learning block (LFLB), which are fused into a latent vector that forms the input to the global feature learning block (GFLB). We also investigate the performance of multi-level feature fusion compared to single-level fusion using the multi-level transformer encoder model (MLTED), which we also propose in this paper. The proposed CoSTGA model applies multi-level fusion: first at the LFLB level, where similar features (spatial or temporal) are separately extracted from a modality, and second at the GFLB level, where the spatial-temporal features are fused with the semantic tendency features. The proposed CoSTGA model uses a combination of dilated causal convolutions (DCC), bidirectional long short-term memory (BiLSTM), transformer encoders (TE), and multi-head and self-attention mechanisms. Acoustic and lexical features were extracted from the interactive emotional dyadic motion capture (IEMOCAP) dataset. The proposed model achieves weighted and unweighted accuracies of 75.50% and 75.82%, and recall and F1 scores of 75.32% and 75.57%, respectively. These results imply that spatial-temporal features learned concurrently with semantic tendencies in a multi-level approach improve the model's effectiveness and robustness.en_US
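
The architecture described in the abstract (concurrent DCC, BiLSTM and attention branches in the LFLB, followed by a second level of fusion with the lexical branch in the GFLB) can be sketched in code. The following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: layer sizes, feature dimensions, the number of emotion classes and the exact fusion operators are assumptions chosen for illustration.

# Hedged sketch (not the authors' code): a minimal PyTorch layout of a CoSTGA-style
# model as described in the abstract. Hidden sizes, input dimensions and the
# 4-class output are assumptions made for illustration only.
import torch
import torch.nn as nn


class LocalFeatureLearningBlock(nn.Module):
    """Concurrently learns spatial (dilated causal conv), temporal (BiLSTM)
    and attention-weighted representations from one modality."""

    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        # Dilated causal convolution: pad only on the left so no frame sees the future.
        self.causal_pad = nn.ConstantPad1d((4, 0), 0.0)  # (kernel_size - 1) * dilation
        self.dcc = nn.Conv1d(in_dim, hidden, kernel_size=3, dilation=2)
        self.bilstm = nn.LSTM(in_dim, hidden // 2, batch_first=True, bidirectional=True)
        self.self_attn = nn.MultiheadAttention(in_dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(in_dim, hidden)

    def forward(self, x):                                   # x: (batch, time, in_dim)
        spatial = self.dcc(self.causal_pad(x.transpose(1, 2))).transpose(1, 2)
        temporal, _ = self.bilstm(x)
        attended, _ = self.self_attn(x, x, x)
        attended = self.proj(attended)
        # First-level fusion: concatenate the concurrently learned features.
        return torch.cat([spatial, temporal, attended], dim=-1)  # (batch, time, 3*hidden)


class GlobalFeatureLearningBlock(nn.Module):
    """Fuses the latent vectors of the two modalities and classifies emotions."""

    def __init__(self, latent_dim: int, num_classes: int = 4):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=2 * latent_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(2 * latent_dim, num_classes)

    def forward(self, acoustic_latent, lexical_latent):
        # Second-level fusion: spatial-temporal features joined with semantic tendencies.
        fused = torch.cat([acoustic_latent, lexical_latent], dim=-1)
        encoded = self.encoder(fused)
        return self.classifier(encoded.mean(dim=1))          # pool over time, then classify


# Usage with assumed feature sizes: 40-dim acoustic frames, 40-dim word embeddings.
if __name__ == "__main__":
    acoustic = torch.randn(8, 100, 40)
    lexical = torch.randn(8, 100, 40)
    lflb_a, lflb_l = LocalFeatureLearningBlock(40), LocalFeatureLearningBlock(40)
    gflb = GlobalFeatureLearningBlock(latent_dim=3 * 64)
    logits = gflb(lflb_a(acoustic), lflb_l(lexical))
    print(logits.shape)  # torch.Size([8, 4])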
dc.description.sponsorshipKabale Universityen_US
dc.identifier.urihttp://hdl.handle.net/20.500.12493/921
dc.language.isoenen_US
dc.publisherIEEEen_US
dc.rightsAttribution-NonCommercial-NoDerivs 3.0 United States*
dc.rights.urihttp://creativecommons.org/licenses/by-nc-nd/3.0/us/*
dc.subjectEmotion recognitionen_US
dc.subjectSpatial featuresen_US
dc.subjectTemporal featuresen_US
dc.subjectSemantic tendency featuresen_US
dc.subjectMulti-head attentionen_US
dc.titleDeep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Featuresen_US
dc.typeArticleen_US

Files

Original bundle
Name: Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features.pdf
Size: 1.32 MB
Format: Adobe Portable Document Format
Description: Deep Learning-Based Speech Emotion Recognition Using Multi-Level Fusion of Concurrent Features

License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission