
1 Introduction

Actions carried out by people in daily life require movements of their hands and fingers, which are controlled by muscles of the forearm, biceps, and triceps. People with hand amputations or deficiencies cannot perform many of these activities. For this reason, most research efforts have focused on restoring hand and finger function through prosthetic devices [4, 5, 7]. Notable active commercial prostheses include the i-limb [2], bebionic [1], and Michelangelo [3].

Using a hand prosthesis requires an extensive training process so that the patient achieves optimal control before day-to-day use. During training, patients expend considerable mental and physical effort to control a hand prosthesis with many degrees of freedom using a reduced number of sEMG signals. Some studies show that many amputees do not use their prostheses because they cannot control them efficiently or because the prosthesis does not offer an adequate human-machine interface.

In this work, we propose a real-time human-machine interface based on augmented reality and embedded systems for the acquisition, conditioning, and classification of sEMG signals. The proposed system uses a pattern recognition algorithm based on a multi-layer neural network that classifies the motion intentions generated by patients, which are then reproduced by a virtual robotic hand. The virtual prosthesis is configured to perform four types of actions: rest, open hand, power grip, and tripod grip, corresponding to the four classes recognized by the neural network.

2 Virtual Training Platform

A schematic diagram of the proposed system is presented in Fig. 1. It consists of an inertial measurement unit (IMU), which senses the rotation and acceleration of the whole system along the x, y, and z axes. This information is used to control the position of the prosthesis in the virtual environment and as a feature for classification purposes. The system also employs four independent channels for acquiring and conditioning the sEMG signals. Both IMU and sEMG data are processed by an Arduino Mega, which is responsible for signal sampling through its Analog-to-Digital Converter (ADC), feature extraction, and classification of the patient's movement intention.

Fig. 1. Schematic diagram of the virtual hand training platform.

As can be seen in Fig. 1, the system establishes wireless communication between the Arduino and the virtual environment by means of the Bluetooth protocol. This channel is used to transmit both the classification results and the inertial measurements from the patient's arm to the virtual environment. A Universal Asynchronous Receiver-Transmitter (UART) manages the data exchange between the Arduino and a desktop computer, the latter running the virtual environment (a Unity application).
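As an illustration of the desktop side of this link, the hedged Python sketch below reads packets from the Bluetooth serial port with pyserial. The port name and the comma-separated packet layout are assumptions, since the paper does not specify the transmission format.

```python
# Hypothetical receiver for the Arduino -> PC link over Bluetooth/UART.
# Port name and packet format are assumptions, not from the paper.
import serial

with serial.Serial('/dev/rfcomm0', 9600, timeout=1) as port:
    while True:
        line = port.readline().decode('ascii', errors='ignore').strip()
        if not line:
            continue
        # assumed CSV packet: <class_id>,<ax>,<ay>,<az>
        fields = line.split(',')
        gesture = int(fields[0])                      # classified movement
        ax, ay, az = (float(v) for v in fields[1:4])  # inertial data
        # ...forward gesture and orientation to the Unity scene here
```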

The proposed system is shown in Fig. 2. Patient muscle activity is acquired using four sEMG electrodes attached to a velcro strap (sEMG bracelet). During the experiments, the bracelet is placed on the forearm as shown in Fig. 2. The figure also shows the IMU, the Arduino, and the sEMG signal conditioning circuit.

Fig. 2. Proposed system of acquisition, conditioning and processing.

3 EMG Conditioner

All sEMG signals are passed through an instrumentation amplifier with a gain of 10, and the resulting signals are band-pass filtered with lower and upper cutoff frequencies of 10 Hz and 500 Hz, respectively. To reduce the electromagnetic noise induced by the electrical grid, the signals then pass through a notch filter centered at 60 Hz. Finally, the signals are amplified six times, and their offset level is compensated by a combination of amplifiers and voltage followers before being input to the Arduino through its 10-bit ADC. Following the Nyquist theorem, each sEMG signal is sampled at 1 kHz to register frequency components up to 500 Hz.
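The conditioning chain above is implemented in analog hardware; for offline experimentation, an approximately equivalent digital chain can be sketched with SciPy's standard filter-design routines. The filter orders and notch quality factor below are assumptions, not values from the paper.

```python
# Digital approximation of the analog conditioning chain, for offline use.
import numpy as np
from scipy import signal

FS = 1000  # sampling rate (Hz), as in the paper

# 10-500 Hz band-pass. Since 500 Hz is the Nyquist frequency at 1 kHz,
# the digital upper edge is pulled slightly below it.
b_bp, a_bp = signal.butter(2, [10, 499], btype='bandpass', fs=FS)

# 60 Hz notch to suppress power-line interference (Q is an assumption).
b_n, a_n = signal.iirnotch(60, Q=30, fs=FS)

def condition(raw):
    """Apply band-pass then notch filtering to one raw sEMG channel."""
    return signal.filtfilt(b_n, a_n, signal.filtfilt(b_bp, a_bp, raw))
```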

Fig. 3. Traces of MSR and ASS features and triaxial acceleration associated with three movements (open, close, and tripod grip).

3.1 Data Processing

The feature extraction method combines two time-domain parameters, the Absolute value of the Summation of the Square Roots (ASS) and the Mean value of the Square Roots (MSR), computed over an analysis window of length k, where \(x_n\) denotes the data within the window. The ASS computation consists of three steps applied to the data in the analysis window: the square root of every value is first computed, the resulting values are summed, and lastly the absolute value of the sum is taken [6], see Eq. 1. The MSR feature is the mean of the square roots of all values in the window [6], see Eq. 2.

$$\begin{aligned} ASS = \left| \sum _{n=1}^{k}\left( x_n\right) ^{1/2} \right| \end{aligned}$$
(1)
$$\begin{aligned} MSR = \frac{1}{k} \sum _{n=1}^{k}\left( x_n\right) ^{1/2} \end{aligned}$$
(2)
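As a minimal NumPy sketch, Eqs. 1 and 2 can be computed per analysis window as follows. Taking the square root of the magnitude of each (possibly negative) sEMG sample is our assumption, since the paper does not state how negative samples are handled.

```python
# ASS and MSR features of Eqs. 1 and 2 for one analysis window x of length k.
import numpy as np

def ass(x):
    """Absolute value of the summation of square roots (Eq. 1)."""
    return np.abs(np.sum(np.sqrt(np.abs(x))))

def msr(x):
    """Mean value of the square roots (Eq. 2)."""
    return np.mean(np.sqrt(np.abs(x)))
```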

The performance of the ASS and MSR features was examined in other work using sEMG recordings from eight amputees (four transradial and four transhumeral) and four different metrics [6]. The results of that study suggest that these time-domain features can improve the overall performance of sEMG pattern recognition control strategies for multifunctional myoelectric prostheses.

The feature vectors are extracted using a sliding analysis window of 125 ms, with 50 ms increments, for both the training and testing processes; in addition, the mean of the acceleration data is synchronized with the sEMG signals in each time window. The ASS and MSR features of each sEMG channel are concatenated with the average acceleration data, resulting in 11 coefficients (4 channels × 2 features + 3 acceleration components). Figure 3 shows the features extracted from the sEMG signals.
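The sliding-window extraction can be sketched as follows, reusing the `ass` and `msr` functions above. The array shapes are assumptions; the 125-sample window corresponds to 125 ms at the 1 kHz sampling rate.

```python
# Assembly of the 11-coefficient feature vector: a 125 ms window slid in
# 50 ms steps over the four sEMG channels, concatenated with the mean
# triaxial acceleration of the same window.
import numpy as np

FS = 1000   # sampling rate (Hz)
WIN = 125   # 125 ms analysis window, in samples
STEP = 50   # 50 ms window increment, in samples

def feature_vectors(emg, acc):
    """emg: (n_samples, 4) sEMG data; acc: (n_samples, 3) accelerations."""
    vectors = []
    for start in range(0, len(emg) - WIN + 1, STEP):
        e = emg[start:start + WIN]
        a = acc[start:start + WIN]
        feats = [f(e[:, ch]) for ch in range(4) for f in (ass, msr)]
        feats.extend(a.mean(axis=0))  # mean x, y, z acceleration
        vectors.append(feats)         # 4 ch x 2 features + 3 acc = 11
    return np.array(vectors)
```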

After computing the signal features as described in the previous section, we apply them to a multi-layer perceptron neural network trained with the Levenberg-Marquardt (LM) algorithm. The extracted features are the inputs to a three-layer LM neural network with 10 nodes in the input layer, 5 nodes in the hidden layer, and 1 node in the output layer (this output represents the estimated joint angle). We chose the network's architecture and size empirically, aiming at the maximum possible reduction of the final mean squared error (MSE).

For neural network training, we used the same initial weight values for all three network layers (null weights for all neurons). The maximum number of iterations was set to 200 and the stopping criterion was an MSE of \(10^{-10}\). One hundred feature vectors were used for the training process and the same number for the testing process. These features represent the rest, open hand, power grip, and tripod grip classes for each channel.
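A minimal sketch of this setup is shown below, fitting the 10-5-1 network with SciPy's MINPACK Levenberg-Marquardt solver. The tanh hidden activation and linear output are assumptions, and small random initial weights replace the paper's null weights (all-zero weights would give a zero Jacobian with tanh units); the data are placeholders for the 100 training feature vectors.

```python
# Sketch of the paper's 10-5-1 perceptron fitted with Levenberg-Marquardt.
# Note the paper reports 10 input nodes although the feature vector has 11
# coefficients; we follow the reported layer sizes here.
import numpy as np
from scipy.optimize import least_squares

N_IN, N_HID, N_OUT = 10, 5, 1  # layer sizes reported in the paper

def unpack(theta):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_OUT * N_HID].reshape(N_OUT, N_HID); i += N_OUT * N_HID
    b2 = theta[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)      # hidden layer (assumed tanh)
    return (h @ W2.T + b2).ravel()  # single linear output node

def residuals(theta, X, y):
    return forward(theta, X) - y

n_params = N_HID * N_IN + N_HID + N_OUT * N_HID + N_OUT
rng = np.random.default_rng(0)
theta0 = 0.1 * rng.standard_normal(n_params)

# Placeholder data standing in for the 100 training feature vectors.
X_train = rng.standard_normal((100, N_IN))
y_train = rng.standard_normal(100)

# method='lm' selects Levenberg-Marquardt; max_nfev caps function
# evaluations (standing in for the 200-iteration limit) and ftol mirrors
# the 1e-10 stopping criterion.
fit = least_squares(residuals, theta0, args=(X_train, y_train),
                    method='lm', max_nfev=200, ftol=1e-10)
mse = np.mean(fit.fun ** 2)  # final mean squared error
print(f"final MSE: {mse:.3e}")
```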

4 Virtual Environment

This module is responsible for recreating in the virtual environment an articulated hand prosthesis that performs a set of preset actions, each triggered on the user's demand through sEMG signals. In the virtual environment, the hand prosthesis can animate movements such as open hand, close hand, tripod grip, and rest, as well as orient the hand along the x, y, and z axes to reach the object. Figure 4 shows the virtual environment developed in the Unity 3D software.

Fig. 4. Virtual environment for patient training.

5 Experimental Protocol

The patient was instrumented with a 4-channel sEMG bracelet located on her residual arm. The patient gave written informed consent prior to participation. She was asked to perform voluntary contractions to identify muscles with myoelectric activity. During these exercises, channels 1 and 4 were the most active.

Having identified the channels in the previous phase, the subject went through training sessions to modulate the intensity of contraction. The sEMG signals of the voluntary contractions were visualized in a LabVIEW program; these images act as feedback so that the subject can check and qualify the information. The aim of this training phase is for the subject to become familiar with the appropriate muscles for generating valid sEMG signals.

Once the viable muscles have been identified and the person is able to contract them, emulating a normal action as if the upper limb were intact, the sEMG signal can be used as input to the processing system. The subject, sitting comfortably in a chair, was asked to imagine grasping and releasing the object that appears in the virtual environment. In this way, data for rest, open, close, and tripod grip are recorded for 3 s each. From the collected data, the training and test feature vectors are obtained to train the neural network off-line. The experimental setup is shown in Fig. 5.

Fig. 5. Experimental setup.

In this study, two metrics were adopted to evaluate the performance of the proposed virtual training platform. The first metric evaluates the performance of the ASS and MSR features and the inertial data in gesture classification on the testing dataset: the classification accuracy, defined as the ratio between the number of correct classifications and the total number of testing samples.
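Expressed compactly, with \(N_{correct}\) the number of correctly classified samples and \(N_{total}\) the total number of testing samples:

$$\begin{aligned} Accuracy = \frac{N_{correct}}{N_{total}} \times 100\% \end{aligned}$$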

The second metric evaluates the success rate of the gestures performed to grasp and release the object in the virtual environment.

6 Results

The classification accuracy obtained using the ASS and MSR features together with the mean acceleration along the x, y, and z axes is shown in Table 1.

Table 1. Classification accuracy for each movement.

The gestures identified from the patient allowed controlling the open, close, and tripod grip movements of the virtual hand. These movements were simulated separately as well as simultaneously. Each movement was tested over 20 trials. Table 2 summarizes the success rate for each movement over the 20 trials. The overall success rate was found to be 86.6%.

Table 2. Success rate for each movement.

7 Conclusion

In this work, we presented a virtual hand-training platform that reproduces gestures obtained from a neural classifier. The platform also allows orienting a virtual hand in a virtual environment developed in Unity to locate and grasp an object.

This platform can be used by patients as a myoelectric prosthesis trainer, facilitating the learning process and the study of the patient's suitability for managing their prosthetic device.

An average accuracy of 86.6% was obtained for the classification of three movements, bearing in mind that the results correspond to a single patient. We maintain the hypothesis that the system allows training with little variation in the success rate, provided the capture and processing conditions of the sEMG signal are maintained, and that in the future it can become a training tool for people with amputations.