Hand gesture recognition based on electromyography (EMG) signals is a challenging approach to developing natural and intuitive human-computer interfaces. In this paper, we propose a hand gesture recognition system that uses deep learning techniques, specifically a convolutional neural network (CNN) and a long short-term memory (LSTM) network merged into a single architecture, the CNN+LSTM model. The CNN extracts relevant features from the EMG signals, while the LSTM captures the temporal dynamics of the gestures. This fusion combines the strengths of both networks and is central to improving recognition accuracy. The proposed system was trained and evaluated on two publicly available datasets. The first, DualMyo, contains EMG signals recorded from one subject performing 8 different hand gestures, with each gesture class recorded 110 times. The second dataset was collected from 36 subjects performing 8 different hand gestures. Results demonstrate that the proposed system achieves outstanding performance, with an average recognition accuracy of approximately 99% on DualMyo and about 97% on the second dataset. To address testing time, we introduce a second model that cascades CNN and max-pooling layers, reducing testing time to roughly 1/20 of the first model's without significantly compromising recognition accuracy, and ultimately achieving the shortest testing time with good accuracy among related methods. Experimental results demonstrate the efficacy of this approach, making it suitable for real-time applications in gesture-controlled systems.
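To make the CNN+LSTM fusion concrete, the following is a minimal sketch of such an architecture in PyTorch. It is not the authors' exact configuration: the channel count (8, matching a Myo armband), window length (200 samples), layer widths, and kernel sizes are illustrative assumptions. It shows only the structural idea stated above, 1-D convolutions extracting features from raw EMG windows, followed by an LSTM modeling their temporal order.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN+LSTM fusion: 1-D convolutions extract features
    from multi-channel EMG windows; an LSTM captures their temporal
    dynamics; a linear head outputs gesture class scores.
    Layer sizes are assumptions, not the paper's exact settings."""
    def __init__(self, n_channels=8, n_classes=8, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, channels, time)
        f = self.cnn(x)            # (batch, 64, time // 4)
        f = f.permute(0, 2, 1)     # (batch, time // 4, 64) for the LSTM
        _, (h, _) = self.lstm(f)   # h[-1]: last hidden state per window
        return self.head(h[-1])    # gesture logits

# Example: a batch of 16 windows, 8 EMG channels, 200 samples each.
model = CNNLSTM()
logits = model(torch.randn(16, 8, 200))  # -> (16, 8) class scores
```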