Recognition of emotions provoked by auditory stimuli using EEG signal based on sparse representation-based classification

Emotions are important for the proper interpretation of actions as well as of relationships among humans. Recognizing emotions through the electroencephalogram (EEG) makes it possible to identify emotional states without traditional instruments such as questionnaires. Automatic emotion recognition reflects an individual's emotional state without clinical examinations or visits, and it plays a very important role in completing the Brain-Computer Interface (BCI) puzzle. One major challenge in this regard is to select and extract the proper characteristics/features of the EEG signal, in order to create an acceptable distinction between different emotional states. Another challenge is to select an appropriate classification algorithm to distinguish and correctly label the signals associated with each emotional state. In this paper, we propose Sparse Representation-based Classification (SRC), which addresses both of these challenges by operating directly on the EEG signal samples (with no need for feature extraction/selection) and then classifying the emotional classes using class dictionaries that are learned to sparsely represent the data of each emotional state. The proposed method is tested on two databases: the first was recorded experimentally in our biomedical signal processing laboratory, where the subjects were stimulated by auditory stimuli, and the second was obtained from Shanghai University, China, where the subjects were stimulated by visual stimuli. The results show that the proposed method achieves more than 80% accuracy in recognizing the two emotions, positive and negative, and suggest that it classifies emotions with a higher degree of success while avoiding the complexity of feature selection/extraction.
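The SRC scheme summarized above, in which each emotional class has its own dictionary and a test sample is assigned to the class whose dictionary reconstructs it with the smallest residual, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the toy `omp` greedy solver, the dictionary shapes, and the class labels are all placeholders for whatever dictionary-learning and sparse-coding procedure the paper actually uses.

```python
import numpy as np


def omp(D, y, k):
    """Toy orthogonal matching pursuit: greedily pick up to k atoms of D
    and return a sparse coefficient vector x with y ~= D @ x."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # re-fit coefficients on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x


def src_classify(y, dictionaries, k=3):
    """Assign y to the class whose dictionary atoms explain it best.

    dictionaries: {label: array of shape (n_features, n_atoms)},
    e.g. one dictionary per emotional state (positive/negative)."""
    D = np.hstack(list(dictionaries.values()))
    x = omp(D, y, k)
    best_label, best_residual = None, np.inf
    start = 0
    for label, Dc in dictionaries.items():
        n_atoms = Dc.shape[1]
        # reconstruct y using only this class's coefficients
        r = np.linalg.norm(y - Dc @ x[start:start + n_atoms])
        start += n_atoms
        if r < best_residual:
            best_label, best_residual = label, r
    return best_label


# Hypothetical usage: random dictionaries standing in for learned ones.
rng = np.random.default_rng(0)
dicts = {
    "positive": rng.normal(size=(16, 8)),
    "negative": rng.normal(size=(16, 8)),
}
label = src_classify(dicts["positive"][:, 0], dicts)
```

A sample lying in the span of one class's atoms yields a near-zero residual for that class and a large residual for the other, which is the distinction SRC exploits in place of explicit feature extraction.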