Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution

dc.contributor.author: Kakuba, Samuel
dc.contributor.author: Poulose, Alwin
dc.date.accessioned: 2023-02-01T08:42:01Z
dc.date.available: 2023-02-01T08:42:01Z
dc.date.issued: 2022-11-21
dc.description.abstract: The success of deep learning in speech emotion recognition has led to its application on resource-constrained devices, in human-to-machine interaction applications such as social living assistance, authentication, health monitoring, and alertness systems. To ensure a good user experience, robust, accurate, and computationally efficient deep learning models are necessary. Recurrent neural networks (RNNs) such as long short-term memory (LSTM), gated recurrent units (GRU), and their variants, which operate sequentially, are often used to learn the time-series sequences of the signal and to analyze long-term dependencies and the contexts of utterances in the speech signal. However, because of their sequential operation, they converge slowly, train sluggishly while consuming substantial memory, and suffer from the vanishing gradient problem. In addition, they do not consider spatial cues that may exist in the speech signal. We therefore propose an attention-based multi-learning model with dilated convolution (ABMD) that uses residual dilated causal convolution (RDCC) blocks and dilated convolution (DC) layers with multi-head attention. The proposed ABMD model achieves comparable performance while capturing globally contextualized long-term dependencies between features in parallel, using a large receptive field with only a modest increase in parameters relative to the number of layers, and it considers spatial cues among the speech features. Spectral and voice quality features extracted from the raw speech signals are used as inputs. The proposed ABMD model obtained recognition accuracies and F1 scores of 93.75% and 92.50% on the SAVEE dataset, 85.89% and 85.34% on the RAVDESS dataset, and 95.93% and 95.83% on the EMODB dataset. The model's robustness, in terms of the confusion ratios of the individual discrete emotions, also improved when validated on the same datasets, especially for happiness, which is often confused with emotions that lie on the same dimensional plane.
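The abstract's core architectural claim is that stacked residual dilated causal convolutions give a large receptive field with few added parameters, while multi-head attention captures global context over the feature sequence in parallel. Below is a minimal PyTorch sketch of that idea; the class names (RDCCBlock, ABMDSketch), channel width, kernel size, dilation schedule, head count, and number of emotion classes are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn


class RDCCBlock(nn.Module):
    """Residual block of one dilated causal 1-D convolution over a feature sequence."""

    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Left-pad so the convolution is causal: the output at time t
        # depends only on inputs at time t and earlier.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        out = nn.functional.pad(x, (self.pad, 0))   # causal left padding
        out = self.relu(self.conv(out))
        return x + out                              # residual connection


class ABMDSketch(nn.Module):
    """Stacked RDCC blocks (exponentially growing dilations) + multi-head attention."""

    def __init__(self, channels: int = 64, num_blocks: int = 4,
                 num_heads: int = 4, num_emotions: int = 7):
        super().__init__()
        # Dilations 1, 2, 4, 8, ... widen the receptive field exponentially
        # while each block adds only one convolution's worth of parameters.
        self.blocks = nn.Sequential(
            *[RDCCBlock(channels, dilation=2 ** i) for i in range(num_blocks)]
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.classifier = nn.Linear(channels, num_emotions)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, time, channels) frames of spectral / voice-quality features
        x = self.blocks(features.transpose(1, 2)).transpose(1, 2)
        context, _ = self.attn(x, x, x)              # global dependencies, in parallel
        return self.classifier(context.mean(dim=1))  # pool over time, then classify


if __name__ == "__main__":
    model = ABMDSketch()
    frames = torch.randn(2, 100, 64)  # 2 utterances, 100 frames, 64 features each
    print(model(frames).shape)        # torch.Size([2, 7])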
dc.description.sponsorship: Kabale University
dc.identifier.uri: http://hdl.handle.net/20.500.12493/920
dc.language.iso: en
dc.publisher: IEEE
dc.rights: Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject: Emotion recognition
dc.subject: Residual dilated causal convolution
dc.subject: Multi-head attention
dc.title: Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
dc.type: Article

Files

Original bundle
Name: Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution.pdf
Size: 1.67 MB
Format: Adobe Portable Document Format
Description: Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
License bundle
Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission