Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
dc.contributor.author | Kakuba, Samuel | |
dc.contributor.author | Poulose, Alwin | |
dc.date.accessioned | 2023-02-01T08:42:01Z | |
dc.date.available | 2023-02-01T08:42:01Z | |
dc.date.issued | 2022-11-21 | |
dc.description.abstract | The success of deep learning in speech emotion recognition has led to its application in resource-constrained devices. It has been applied in human-to-machine interaction applications such as social living assistance, authentication, health monitoring and alertness systems. To ensure a good user experience, robust, accurate and computationally efficient deep learning models are necessary. Recurrent neural networks (RNNs) such as long short-term memory (LSTM), gated recurrent units (GRU) and their variants, which operate sequentially, are often used to learn the time-series structure of the signal, analyze long-term dependencies and capture the context of the utterances in the speech signal. However, because of their sequential operation, they converge slowly, train sluggishly, consume large amounts of memory and suffer from the vanishing gradient problem. In addition, they do not consider spatial cues that may exist in the speech signal. Therefore, we propose an attention-based multi-learning model (ABMD) that uses residual dilated causal convolution (RDCC) blocks and dilated convolution (DC) layers with multi-head attention. The proposed ABMD model achieves comparable performance while capturing globally contextualized long-term dependencies between features in parallel through a large receptive field, with only a small increase in the number of parameters relative to the number of layers, and it considers spatial cues among the speech features. Spectral and voice quality features extracted from the raw speech signals are used as inputs. The proposed ABMD model obtained a recognition accuracy and F1 score of 93.75% and 92.50% on the SAVEE dataset, 85.89% and 85.34% on the RAVDESS dataset, and 95.93% and 95.83% on the EMODB dataset. The model's robustness, measured by the confusion ratio of the individual discrete emotions, especially happiness, which is often confused with emotions that lie on the same dimensional plane, also improved when validated on the same datasets. | en_US |
dc.description.sponsorship | Kabale University | en_US |
dc.identifier.uri | http://hdl.handle.net/20.500.12493/920 | |
dc.language.iso | en | en_US |
dc.publisher | IEEE | en_US |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 United States | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/us/ | * |
dc.subject | Emotion recognition | en_US |
dc.subject | Residual dilated causal convolution | en_US |
dc.subject | Multi-head attention | en_US |
dc.title | Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution | en_US |
dc.type | Article | en_US |
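For illustration, the following is a minimal sketch, assuming PyTorch, of the kind of architecture the abstract describes: residual dilated causal convolution (RDCC) blocks whose dilation grows exponentially with depth, followed by multi-head self-attention over the frame sequence. All class names, channel sizes, kernel widths, head counts and the seven-emotion output below are illustrative assumptions, not the authors' published implementation.

    # Minimal sketch (not the authors' code) of an RDCC + multi-head attention model.
    # Hyperparameters are placeholders, not values from the paper.
    import torch
    import torch.nn as nn

    class RDCCBlock(nn.Module):
        """Residual block of dilated causal 1-D convolutions over a feature sequence."""
        def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
            super().__init__()
            # Left-pad so the convolution is causal: output at time t sees only inputs <= t.
            self.pad = (kernel_size - 1) * dilation
            self.conv1 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.conv2 = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
            self.relu = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, time)
            residual = x
            out = self.relu(self.conv1(nn.functional.pad(x, (self.pad, 0))))
            out = self.relu(self.conv2(nn.functional.pad(out, (self.pad, 0))))
            return self.relu(out + residual)  # residual connection

    class ABMDSketch(nn.Module):
        """Stack of RDCC blocks with doubling dilation, then multi-head self-attention."""
        def __init__(self, n_features: int = 40, channels: int = 64,
                     n_blocks: int = 4, n_heads: int = 4, n_emotions: int = 7):
            super().__init__()
            self.proj = nn.Conv1d(n_features, channels, kernel_size=1)
            # Dilation doubles per block (1, 2, 4, 8, ...), enlarging the receptive field
            # exponentially while parameters grow only linearly with the number of blocks.
            self.blocks = nn.Sequential(
                *[RDCCBlock(channels, dilation=2 ** i) for i in range(n_blocks)]
            )
            self.attn = nn.MultiheadAttention(channels, n_heads, batch_first=True)
            self.classifier = nn.Linear(channels, n_emotions)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, n_features), e.g. frame-wise spectral/voice-quality features
            h = self.blocks(self.proj(x.transpose(1, 2)))  # (batch, channels, time)
            h = h.transpose(1, 2)                          # (batch, time, channels)
            h, _ = self.attn(h, h, h)                      # self-attention over time, in parallel
            return self.classifier(h.mean(dim=1))          # utterance-level emotion logits

    if __name__ == "__main__":
        model = ABMDSketch()
        dummy = torch.randn(2, 100, 40)  # 2 utterances, 100 frames, 40 features
        print(model(dummy).shape)        # torch.Size([2, 7])

Doubling the dilation per block grows the receptive field exponentially while the parameter count grows only linearly with depth, and both the convolutions and the attention operate over all frames in parallel, which is the trade-off the abstract highlights relative to sequential RNN models.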
Files
Original bundle
- Name: Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution.pdf
- Size: 1.67 MB
- Format: Adobe Portable Document Format
- Description: Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
License bundle
- Name: license.txt
- Size: 1.71 KB
- Format: Item-specific license agreed upon to submission