1 Introduction

Brain-Computer Interfaces (BCIs) are communication and control systems allowing users to interact with their environment using their brain activity alone [27]. BCIs based on ElectroEncephaloGraphy (EEG, i.e., the recording of neurons’ electrical activity on the scalp) are increasingly popular because they offer high temporal resolution while being non-invasive, portable and inexpensive compared to BCIs based on other brain sensing techniques (e.g., functional Magnetic Resonance Imaging). In particular, sensorimotor rhythms (SMRs), i.e., oscillations in brain activity recorded from cortical somatosensory and motor areas (detectable in the 8–30 Hz frequency band) [22], are the most frequently used. BCIs based on SMRs allow users to send commands to the system without moving, simply by performing so-called “Motor-Imagery” (MI) tasks. Indeed, these rhythms are modulated not only while performing a movement, but also while preparing or imagining it. Thus, for instance, some applications enable users to make a wheelchair turn left by performing a specific MI task, such as imagining a left-hand movement [18]. BCIs based on this paradigm are known as Motor-Imagery based BCIs (MI-BCIs). MI-BCIs are not time-locked, which means that users can send commands to the system without waiting for and focusing on stimuli. As no stimulus is required, users can focus their visual attention on another task or on the environment. Consequently, MI-BCIs are attractive for many interactive applications.

However, MI-BCIs have had limited uptake outside laboratories [27] due to imperfect classification algorithms and the difficulty for users to learn to control MI-BCI based systems. Indeed, previous studies have shown that around 20 % of users are unable to control an MI-BCI (so-called “BCI illiteracy/deficiency”), while the remaining 80 % have relatively modest performances [2]: their mental control commands (MI tasks) are correctly recognised by the system less than 75 % of the time on average.

One crucially important, yet often overlooked, aspect of MI-BCI performance is the user’s ability to acquire the skills necessary to control the system. Indeed, in order to improve the system’s performance, i.e., its capacity to correctly recognise the MI tasks the user is performing, the user should be able to (1) generate specific and stable brain activity patterns when performing an MI task, and (2) make these patterns clearly distinguishable from those of the other MI tasks. Studies have shown that feedback design is important to foster this skill acquisition, and thus to improve users’ performance [16]. To date, most MI-BCI studies have used visual feedback to inform the user about the MI task recognised by the system. Yet, such visual feedback is difficult to assimilate when it is integrated with the visual layout of the primary interactive application that it supports [9]. Indeed, the visual channel is often overtaxed in interactive environments [15], so integrating the visual feedback into the application increases the number of visual search tasks. This is a typical branching condition [1], in which users perform poorly at searching for visual objects [11].

On the other hand, tactile feedback, although popular in other areas of HCI, has not received much attention for MI-BCIs despite its advantages, such as: (a) freeing the visual channel in order to reduce cognitive workload [15], (b) maintaining a certain amount of privacy, as it is more difficult for bystanders to perceive than visual or auditory feedback, and (c) being usable in a wide range of interactive tasks, such as gaming. Using tactile feedback separates the application channel (visual) from the MI-BCI feedback channel (tactile), thus potentially improving the branching condition of the application. This should consequently increase the user’s performance and the system’s efficiency.

In this paper we explore the benefits of providing continuously updated tactile feedback (i.e., updated at 4 Hz, see Sect. 3.1) to improve MI-BCI users’ performance (i.e., their ability to perform MI tasks that are correctly recognised by the system) in an environment containing visual distracters (Fig. 1). Indeed, BCIs are inherently developed to support interaction. Yet, most MI-BCI studies test their feedback efficiency (1) in a laboratory context, i.e., with no distracters, and (2) with no side task, while in real applications such as games users would have to multitask. Thus, the efficiency of such feedback cannot be guaranteed in an interaction and multitasking context. This is why we study our tactile feedback’s efficiency by comparing it to an equivalent visual feedback, (1) in a context including visual distracters (to mimic an interaction environment) and (2) with an added counting task (to evaluate the cognitive resources needed to process each kind of feedback). Our tactile system takes the form of wearable gloves that integrate five vibrotactile actuators for each hand to provide continuous tactile feedback to the user regarding the BCI output. This expands the user’s feedback bandwidth while reducing the visual cognitive load. Through a first user study we calibrated the device to the users’ feedback preferences while matching the visual feedback’s fidelity to the tactile feedback (so that both can be rigorously compared). In a second study we compared the tactile and visual feedback in an environment containing distracters and found that users obtained significantly better MI-BCI performances with tactile feedback. Our results suggest that tactile feedback is a powerful modality for MI-BCIs in an HCI context.

Fig. 1.

A participant receives tactile feedback while controlling the MI-BCI. Vibrations from the motor on the right hand represent the current feedback provided by the MI-BCI system.

The main contributions of this paper are: (1) design and implementation of a glove that provides continuously updated vibrotactile feedback to the user’s hands with a fidelity comparable to standard visual feedback; (2) evaluation of our tactile feedback glove in an environment including visual distracters and in a multitasking context: our results suggest that users have better MI-BCI performances and better scores at counting distracters with continuous tactile feedback than with visual feedback.

2 Related Work

2.1 Visual Feedback for MI-BCIs

Usually in MI-BCI training protocols, visual feedback is provided (Fig. 2). It gives the user information about the classifier output (for more details, see Sect. 5.5): the label (i.e., which MI task was recognised by the classifier, e.g., left-hand MI) and the confidence value (e.g., the probability estimate for the selected MI task). Standard MI-BCI training protocols display this feedback visually as an extending bar [22]. The direction of the bar depends on the classifier label (e.g., left if a left-hand MI is recognised) and its length is proportional to the confidence value. The visual appearance of the bar has been varied in many studies, but the principle has not changed [20]. While simple to implement and very intuitive, visual feedback is often boring [16] and may result in decreased motivation and a poor user experience. In a bid to maintain motivation, some researchers have designed gamified MI-BCI training protocols: [17] proposed two simple games based on the ball-basket paradigm and on a spacecraft avoiding bombs. Other studies, reviewed in [14], went even further and integrated virtual reality into MI-BCI training protocols. The gamification of the protocols appeared to be efficient, since these studies revealed better user performance compared to standard protocols.

Fig. 2.

Timing and visualisation of a standard MI-BCI training protocol

However, these protocols have two weak points: (1) they propose feedback that is specific to the MI-BCI training protocol, raising the question of whether the learned skills transfer to other MI-BCI tasks; (2) all of them rely on the visual channel, which is often overtaxed in interaction situations. Moreover, using visual feedback that is independent from the interactive application (e.g., the game or navigation task) forces the user to split their attention between different visual information sources (the game and the MI-BCI feedback), thus demanding more cognitive resources [15].

This led the BCI community to investigate other feedback modalities. Several studies explored auditory feedback, in which the different classifier output values were represented as variations of frequency [6], intensity [17] or pitch [12, 21]. Auditory feedback has been shown to be efficient for patients in a locked-in state, as they often suffer from visual impairment and sensory loss [13]. However, as the auditory channel is also heavily solicited in interaction contexts, auditory feedback does not seem to be more relevant than visual feedback.

We argue that haptic feedback could present many advantages. First, the tactile channel is usually less overtaxed than the visual and auditory ones in interaction situations. Thus, multimodality could ease information processing by avoiding the cognitive overload caused by overtaxing the visual channel [25]. Second, tactile feedback is more personal than visual and auditory feedback, as it is difficult for other people to perceive (which can be an asset, e.g., in multiplayer games).

2.2 Tactile Feedback for MI-BCIs

Tactile feedback for MI-BCIs has so far been used mainly in a medical context. For instance, [26] explored lingual electro-tactile stimulation, as the tongue provides excellent spatial resolution and its sensitivity is preserved in the case of spinal cord injuries, while [8, 23] focused on proprioceptive feedback (i.e., information about the limbs’ position and about the force exerted while performing a movement) and showed that proprioceptive feedback increases BCI performance, indicating that these alternative forms of feedback are very promising for patients. However, these kinds of tactile feedback are quite invasive and expensive. Thus, they do not seem relevant for HCI applications targeting the general public.

A few studies have nonetheless explored tactile feedback for general-public applications. Most of the studies in which haptic feedback was chosen to inform the user about the classifier output used vibrotactile feedback, with either variations of the vibration pattern (e.g., different motor activation rhythms according to the classifier output) [4] or variations of the spatial location of the stimulation [5, 17].

Results show benefits when vibrotactile feedback is coupled with visual feedback, but only when the vibrotactile feedback maps the “stimulus” location (i.e., the MI task the participant has to perform). This relationship is known as “control-display mapping” [24]. For example, when a right-hand MI is recognised, tactile feedback provided to the right side of the body will be more efficient (i.e., associated with better performance and user experience) than tactile feedback provided to the left side. Results also show similar performances between visual and tactile feedback, and participants reported that tactile feedback was more natural than visual feedback, although negative feedback due to a misclassification of the mental task can be annoying. Nevertheless, [5, 15] suggest that although it is disturbing, negative vibrotactile feedback has no impact on classification (i.e., it does not affect the brain patterns used by the system to recognise the MI tasks). A few studies have already attempted to use continuous vibrotactile feedback [5, 9, 15]. For instance, [5] proposed continuous tactile feedback in one of their studies. However, their setup differs from ours: feedback was provided on the neck (as opposed to the palm of the hand), it was only updated every 2 s (as opposed to every 0.250 s) and, more importantly, it was not tested in a BCI control context. In [9], a comparison between visual and tactile feedback was proposed, and the findings showed that they are associated with equivalent performances in a BCI context. In [15], visual and tactile feedback were compared in the context of a visual attention task performed using a BCI; in that study, tactile feedback was associated with better performances than visual feedback. Unfortunately, these studies present some limitations. First, the samples are small: 6–7 subjects. Second, and most importantly, as they used within-subject comparisons in which the conditions were not counterbalanced (the visual feedback was always tested before the tactile feedback), one cannot rule out that these results are due to an order effect. Finally, although the feedback was tested in the presence of distracters [15], this was not a multitasking context, as the visual attention task and the MI-BCI control task were performed sequentially. In this paper, we propose to overcome these limitations with a larger sample (18 participants), a counterbalanced between-subjects design and an MI-BCI control task combined with a counting task requiring supplementary cognitive resources.

3 Design of Visual and Vibrotactile Feedback

The main goal of our work is to compare the standard visual feedback with an equivalent tactile feedback in a context of multitasking and in an environment containing distracters in order to mimic possible interaction situations in which MI-BCIs could be used, e.g., a video game. Thus, in this section we first explain how we designed our vibrotactile and corresponding visual feedback. Then we describe the developed hardware prototype and the design of the glove for providing this tactile feedback at the hand, as well as the mapping between visual and tactile stimuli.

3.1 Temporally Continuous Tactile Feedback

As pointed out earlier, the MI-BCI classifier output, which is usually provided as feedback to the user, combines the label of the recognised MI task and the classifier’s confidence in the recognition of this task. These two elements can be represented as a single value in the range [−0.5, 0.5]. Negative values correspond to the recognition of a left-hand MI while positive values correspond to the recognition of a right-hand MI. The closer the value is to an end of the range, the higher the classifier’s confidence (e.g., for right-hand MI, the value 0.45 represents a higher confidence than 0.16). Our goal was to represent this output via the tactile channel as closely as possible to the standard visual feedback (in which the output is represented as a bar varying in length and direction).

The MI-BCI system relies on left- and right-hand MI. Thus, we decided to provide tactile feedback to the hands in order to maintain the control-display mapping [24] between the intended user actions (MI) and the sensory information perceived by the user (the tactile feedback). Indeed, control-display mapping has been shown to be necessary for tactile feedback to be efficient [24]. The large surface of the palm (average width: 74 mm for women, 84 mm for men) makes it possible to create a tactile display suitable for representing the MI-BCI classifier output (Fig. 2). Considering the two-point threshold of the palm (~ 8 mm [7]), the width of the actuators (8 mm), and the fact that we wanted our design to be suitable for most users (and thus narrower than the average palm width of 74 mm), we determined that we could fit at most 5 motors on each hand. Thus, we divided the classifier output range [−0.5, 0.5] into 10 discrete levels, with 5 levels on the left and 5 levels on the right hand. Vibrations on the left/right palm corresponded to the recognition of a left/right hand MI by the classifier, respectively. With the palms facing upwards, vibrations near the thumbs corresponded to high confidence levels (close to |0.5|) while vibrations near the little finger corresponded to low confidence levels (close to 0).
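To make this mapping concrete, here is a minimal Python sketch of how a classifier output in [−0.5, 0.5] could be converted into the hand and motor level described above. The helper name and the exact rounding rule are our own illustrative choices; the original implementation is not specified in the paper.

```python
def output_to_motor(y: float) -> tuple[str, int]:
    """Map a classifier output y in [-0.5, 0.5] to (hand, motor level).

    Levels run from 1 (lowest confidence, motor near the little finger)
    to 5 (highest confidence, motor near the thumb); each level spans
    0.1 of the output range, as in the 10-level discretisation above.
    """
    hand = "left" if y < 0 else "right"
    level = min(int(abs(y) / 0.1) + 1, 5)
    return hand, level


# Examples: 0.45 -> ("right", 5), high confidence; -0.16 -> ("left", 2), low confidence.
assert output_to_motor(0.45) == ("right", 5)
assert output_to_motor(-0.16) == ("left", 2)
```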

Standard MI-BCI update rates, i.e., 16 Hz (62.5 ms), can be difficult to achieve with tactile feedback as a stimulus should be provided for at least 200 ms to be easily recognisable over the tactile channel [7]. Consequently, we chose an update rate of 4 Hz (every 250 ms), to ensure a perceivable tactile feedback.

3.2 Visual Feedback

Standard visual feedback corresponds to a continuous bar varying in length and direction. To make the visual and tactile feedback as similar as possible, and because the tactile feedback is spatially discretised (the classifier output range [−0.5, 0.5] is divided into 10 discrete levels), we discretised the standard bar in the same way (Fig. 3). Thus, the feedback was displayed as a red cursor on a cross, with 5 ticks on the left and 5 ticks on the right side. The cursor was on the left/right side of the cross when a left/right hand MI was recognised, respectively. Moreover, cursor positions towards the extremities of the cross represented high confidence values. Finally, we also reduced the standard update rate from 16 Hz to 4 Hz so that it matched the tactile feedback update rate.

Fig. 3.

Visual feedback, with the current feedback symbolising the recognition of a right-hand MI at level 3 out of 5.

3.3 Hardware Design

To provide the user with tactile feedback, we designed a glove for the left and the right hand in which 5 vibrotactile actuators were embedded (Fig. 4). The actuators were cylindrical vibration motors (model 307-100 by Precision Microdrives, Fig. 4, left). Each motor is 8.0 mm wide and 25 mm long. The motors were connected to a custom-built motor shield and were controlled by pulse-width modulation using an Arduino Due. The ten motors were powered from an external supply (2 V).

Fig. 4.

Left: A vibration motor. Right: Our gloves with 10 embedded motors (5 per hand). In the tactile feedback condition, individual motors are activated to represent the classifier output.
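The motors are driven by pulse-width modulation from an Arduino Due, but the paper does not describe the host-side protocol. The following Python sketch (using pyserial) illustrates one plausible way a host application could drive the ten motors; the serial port name, baud rate and two-byte message format are assumptions, not the authors’ implementation.

```python
import serial  # pyserial

# Assumed protocol: the Arduino Due sketch reads two bytes per command,
# <motor index 0-9><PWM duty 0-255>, and updates the corresponding
# pulse-width-modulated output driving that vibration motor.
PORT = "/dev/ttyACM0"   # placeholder port name
BAUD = 115200           # placeholder baud rate


def set_motor(link: serial.Serial, motor_index: int, duty: int) -> None:
    """Activate one of the ten vibration motors at the given PWM duty cycle."""
    link.write(bytes([motor_index, duty]))


if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        # e.g. drive the third motor of the right hand at a medium duty cycle
        set_motor(link, 7, 128)
```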

4 STUDY I – Determining the Intensity and the Pattern of Activation of the Motors for the Vibrotactile Feedback

Some previous studies have explored continuously updated feedback for MI-BCIs [5, 9, 15], but little work has been carried out to evaluate the optimal parameters for this feedback modality. For instance, should the vibration pattern be encoded as a localised vibration from a single motor, or as simultaneous vibrations of multiple neighbouring motors, to represent a specific classifier output? Another question concerns the intensity of the tactile stimulus. Indeed, the vibration should be strong enough to be perceived but not too intense, as it could distract the user and be uncomfortable. In the following sections, we describe the user study conducted to investigate these issues.

4.1 Experimental Design

The aim of this study was to determine the pattern and intensity of vibration that provide the user with the most pleasant and distinct feedback. We investigated two designs of vibration patterns for representing the classifier output. One design implemented localised vibration, i.e., only one of the vibration motors was active at a given time (e.g., the third motor of the right hand if a right-hand MI was recognised with a confidence value in [0.2, 0.3]). The other design implemented simultaneous vibration of neighbouring motors. The latter pattern entailed activating all motors of the hand corresponding to the recognised MI task whose index was smaller than or equal to the current classifier level (e.g., the first, second and third motors of the right hand, from left to right, if a right-hand MI was recognised with a confidence value in [0.2, 0.3]). The rationale behind these two designs was (1) to maintain the spatial mapping between the visual and tactile feedback and (2) to indicate the relative change in the classifier’s output.
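As an illustration of the two designs, here is a small Python sketch (the helper name and index convention are ours) that returns the set of motors to drive on the relevant hand for a given classifier level.

```python
def active_motors(level: int, pattern: str) -> set[int]:
    """Motors (indices 1..5 on one hand) to drive for a given classifier level.

    'localised' drives a single motor; 'simultaneous' drives all motors whose
    index is smaller than or equal to the current level, as in Study I.
    """
    if pattern == "localised":
        return {level}
    if pattern == "simultaneous":
        return set(range(1, level + 1))
    raise ValueError(f"unknown pattern: {pattern}")


# A right-hand MI recognised with a confidence value in [0.2, 0.3] maps to level 3:
assert active_motors(3, "localised") == {3}
assert active_motors(3, "simultaneous") == {1, 2, 3}
```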

Our first informal test of the motors (2 V) revealed a strong, unpleasant tactile stimulus (the normalised vibration amplitude of the motor was 3G relative to a 100 g mass). In order to design more subtle tactile stimuli, we adjusted the voltage used to control the motors (via pulse-width modulation), which implicitly changed the motors’ vibration frequency and amplitude. The experiment followed a 2 × 4 within-participant design with the factors:

  • Pattern: localised vs. simultaneous vibration;

  • Intensity: [0.1, 0.3, 0.5, 1] G with corresponding frequencies of [10, 40, 60, 85] Hz.

4.2 Participants

Ten volunteers (4 women; age 28.8 ± 8.2 years) from the local university participated in this study. Some participants had previous experience with vibrotactile feedback, but none of them had participated in this experiment before.

4.3 Task and Procedure

At the beginning of the study the experimenter informed the participants about the goal of the study and asked them to sign a consent form which was approved by the University’s Ethics Committee. The participants were then asked to put on the gloves and to place their hands on the table in front of them in a supine position (palms facing upwards, as in Fig. 4-right).

We designed 8 vibration sequences which simulated vibrotactile feedback. As in a real scenario, these sequences were provided for 4 s, during which 16 tactile stimuli appeared (4 Hz update rate). We varied the factors Pattern and Intensity to compare:

  a. Localised vs. simultaneous vibration with the same intensity level (4 possibilities);

  b. Localised vibration at 2 different intensity levels (6 possibilities);

  c. Simultaneous vibration at 2 different intensity levels (6 possibilities).

We considered both presentation orders for the patterns in (a), i.e., first localised then simultaneous and vice versa, and for the intensities in (b, c), i.e., first intensity 1 then intensity 2 and vice versa. Overall, we tested (4 + 6 + 6) × 2 = 32 combinations. We randomly assigned one of the eight sequences to each combination (so that a given sequence was not associated with the same combination across participants). For each combination, we asked the participants for their favourite feedback, i.e., localised or simultaneous for (a), and intensity 1 or intensity 2 for (b, c). We then evaluated the quality of the different patterns and intensities according to the number of times they were selected as the favourite. This paradigm allowed us to find the best pattern × intensity association, i.e., the one most often chosen as the favourite.
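The following short Python sketch enumerates these paired comparisons and confirms the (4 + 6 + 6) × 2 = 32 count; it only illustrates the counting and is not part of the original experimental software.

```python
from itertools import permutations

INTENSITIES = [0.1, 0.3, 0.5, 1.0]  # in G

pairs = []
# (a) localised vs. simultaneous at the same intensity, in both orders: 4 * 2 = 8
for g in INTENSITIES:
    pairs.append((("localised", g), ("simultaneous", g)))
    pairs.append((("simultaneous", g), ("localised", g)))
# (b, c) two different intensities within one pattern, in both orders: 6 * 2 per pattern
for pattern in ("localised", "simultaneous"):
    for g1, g2 in permutations(INTENSITIES, 2):
        pairs.append(((pattern, g1), (pattern, g2)))

print(len(pairs))  # 8 + 12 + 12 = 32 paired comparisons
```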

4.4 Results

Figure 5 reveals a Pattern × Intensity interaction [F(3,72) = 8.785, p < 0.001, η² = 0.268], a main effect of Pattern [F(1,72) = 10.184, p < 0.005, η² = 0.124], and a main effect of Intensity [F(3,72) = 6.071, p < 0.005, η² = 0.202]. Participants preferred the localised vibration over the simultaneous vibrations. Moreover, they preferred the lowest intensity in the case of simultaneous vibrations (the other intensities being perceived as too strong). For the localised vibration, however, the lowest intensity (0.1G, 10 Hz) was barely noticeable and did not allow the participants to clearly perceive the tactile feedback, while the highest intensity (1G, 85 Hz) produced a very strong and uncomfortable sensation. Thus, they preferred the middle intensities (0.3–0.5G).

Fig. 5.

Average number of times that a pattern was preferred as a function of its intensity.

4.5 Discussion

The results of this study suggest that the participants preferred a localised vibration at the palm, with only one vibration motor being active at a given time. Our findings also suggest that either 0.3G (40 Hz) or 0.5G (60 Hz) is appropriate for providing tactile feedback at the palm using the developed tactile feedback system.

These findings provide first guidelines on how to design tactile feedback for stimulating the palm in an MI-BCI context. In addition, these results can inform the design of feedback for other interactive tasks in HCI which require a similar presentation of feedback to the user.

5 STUDY II – Comparing Visual and Tactile Feedback in a Multitasking Context

5.1 Training Environment

BCIs are developed to be used in interactive applications (e.g., video games or navigation tasks), i.e., in contexts including distracters and requiring multitasking abilities. Thus, it seems irrelevant to test the efficiency of a feedback modality outside this kind of context, i.e., in laboratory conditions with only an MI-BCI task. This is why we designed a training environment including visual distracters and asked the participants to perform a counting task while they were performing the MI-BCI task (Fig. 6, right).

Fig. 6.

Two feedback types representing the recognition of a right-hand MI, at level 3 out of 5. Left: Visual feedback was displayed as a red circle moving along the axis and vibrotactile feedback at the palm was encoded as a vibration of the corresponding motor; Right: Environment visualisation with all elements: an enemy (top right), the spacecraft (centre), and visual feedback (lower centre, below the spacecraft); three distracters: missile (top left), cloud (top centre), and rabbit (bottom centre) (Color figure online).

By adding these elements, we were able to compare the cognitive workload required to process each kind of feedback in an interactive situation and to evaluate how cognitive multitasking (branching [1]) influences the efficiency of each feedback.

In order to include the distracters and the counting task alongside the MI-BCI task in a consistent environment, we modified the standard MI-BCI training protocol [22]. The standard arrows pointing left or right, which inform the user that a left- or right-hand MI has to be performed, were replaced by a spacecraft whose goal was to protect its planet by destroying bombs coming from the left or the right (controlled by performing left- or right-hand MI, respectively) (Fig. 6).

Besides, the distracters appeared randomly in the form of (1) a missile, launched vertically from a tank, (2) a rabbit crossing from left to right, or (3) a cloud crossing from right to left (Fig. 6). Each distracter appeared for a similar amount of time (approximately 2.5 s).

5.2 Participants

Eighteen healthy volunteers (5 women; age 27.6 ± 4.8 years) participated in the study. Some of them had previously experienced vibrotactile feedback; however, none of them had previous experience with MI-BCIs.

5.3 Experimental Design

After completing the informed consent form and being informed about the course of the experiment, the participants were randomly assigned to one of two groups: visual or tactile feedback. Consequently, nine participants were provided with visual feedback and the other nine with vibrotactile feedback during the whole experiment. The experiment was divided into 6 runs, each lasting 7 min. The first run was used to train the MI-BCI classifier. The remaining 5 runs were used for user training and data recording. Each run was composed of 40 trials: 20 left-hand MI and 20 right-hand MI trials, randomly distributed.

Thus, the experiment followed a 2 × 5 design with the factors:

  • Feedback Condition: visual or tactile;

  • Run: 1–5.

During the experiment, the participants had to control a spacecraft (shown at the centre of the screen in Fig. 6) by performing left- or right-hand MI tasks to make it move left or right, respectively. The goal of this spacecraft was to protect the planet against falling bombs. Thus, when a bomb was falling on the left/right side of the screen, participants had to perform a left/right-hand MI in order to make the spacecraft move left/right, face the bomb and destroy it. The application was developed in C# using Microsoft XNA 4.0.

Each trial lasted around 8 s and had the same structure, described hereafter and depicted in Fig. 7. At the beginning, the spacecraft was in the middle of the screen for 3 s. Then the instruction was given to the participant in the form of a bomb appearing either at the top left or the top right of the screen and moving vertically towards the planet (at a speed of one pixel/frame). This instruction informed the participant about the command to perform: a right-hand or a left-hand MI, in order to move the spacecraft to the right or to the left, respectively, to face and destroy the enemy. 1.25 s after the appearance of the bomb, the MI-BCI classifier output was provided to the participant continuously for a duration of 3.75 s, either in the form of a moving cursor on a visual cross at the lower centre of the screen, or as vibrotactile feedback at the palm. At the end of the feedback period, the mean classifier output was calculated and the spacecraft moved to the left or to the right based on this value (to the left if the mean classifier output was in [−0.5, 0) and to the right if it was in (0, 0.5]). If the correct MI task was recognised, the spacecraft aligned its position with the bomb and shot it down. Otherwise, the bomb sped up (20 pixels/frame) and exploded when it reached the planet.

Fig. 7.

Timing of a trial.
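To illustrate the decision rule applied at the end of the feedback period, here is a minimal Python sketch (the function name is ours) that averages the 15 classifier outputs of a trial and derives the spacecraft’s move.

```python
import numpy as np


def end_of_trial_move(window_outputs: list[float]) -> tuple[str, float]:
    """Average the classifier outputs of one trial and pick the spacecraft move.

    The feedback period lasts 3.75 s at 4 Hz, i.e. 15 classifier outputs in
    [-0.5, 0.5]; a negative mean moves the spacecraft left, a positive mean
    moves it right.
    """
    mean_output = float(np.mean(window_outputs))
    return ("left" if mean_output < 0 else "right"), mean_output


# Example: a trial in which right-hand MI was mostly recognised with moderate confidence.
move, value = end_of_trial_move([0.2, 0.3, 0.1, -0.05, 0.25] * 3)
print(move, round(value, 3))  # right 0.16
```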

Furthermore, as explained in the previous section, during each trial one or more distracters appeared between the moment the enemy was displayed and the moment the spacecraft started to move in order to destroy it. Each distracter type appeared at most once during each trial. In each run, which consisted of 40 trials, each distracter type appeared at least 15 times and at most 25 times. At the beginning of each run the participants were asked to count how many distracters of a specified type appeared, and to report this number at the end of the run.
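The paper does not specify how the distracter schedule was drawn; the following Python sketch shows one simple way to generate a per-run schedule that satisfies the stated constraints (each type at most once per trial, and between 15 and 25 occurrences over the 40 trials).

```python
import random

N_TRIALS = 40
DISTRACTER_TYPES = ("missile", "rabbit", "cloud")


def distracter_schedule(rng: random.Random) -> dict[str, set[int]]:
    """Return, for each distracter type, the set of trial indices in which it appears."""
    schedule = {}
    for distracter in DISTRACTER_TYPES:
        count = rng.randint(15, 25)  # 15 to 25 occurrences per run
        schedule[distracter] = set(rng.sample(range(N_TRIALS), count))  # at most once per trial
    return schedule


print({d: len(trials) for d, trials in distracter_schedule(random.Random(0)).items()})
```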

5.4 Score Calculation

At the end of the trial, the score was updated according to the following formula:

$$ \text{New Score} = \text{Current Score} + \text{Class Label} \times \text{Classifier Output} \times 200 $$

The Class Label was −1 if a left-hand MI was recognised and +1 if a right-hand MI was recognised. The Classifier Output was the mean classifier output value calculated at the end of the trial: in [−0.5, 0) if a left-hand MI was recognised, and in (0, 0.5] if a right-hand MI was recognised. Therefore, after each trial, the score was increased or decreased by at most 100 points: to obtain 100 points in one trial, the mean classifier output of the trial had to be 0.5, which means that the classifier output had to be 0.5 for each of the 15 time windows (the feedback being updated at 4 Hz for 3.75 s). A value of 0.5 thus means that the classifier was 100 % sure that the participant was performing a right-hand MI for each of the 15 time windows, which never happens in MI-BCI. Besides, when the mean classifier output is positive, it means that the trial has been correctly classified. Thus, to take an extreme case, a score of 40/4000 at the end of the run (e.g., 1/100 at each of the 40 trials of the run) could be associated with a classification accuracy of 100 % (as each mean classifier output was positive, all the trials were correctly classified). The MI score corresponded to the sum of the scores obtained in each trial. Furthermore, at the end of each run, the participant was asked to report the number of distracters (rabbits, clouds or rockets) he had counted. If this number was correct, the participant was rewarded with 200 points added to the MI score. If the error was ±1, the score remained unchanged. Otherwise, 200 points were subtracted from the MI score. The final score corresponded to the sum of the MI scores for the 40 trials of the run and the counting task score. While arbitrary, this metric gave a significant weight to both the MI task and the counting task, which allowed us to evaluate the relevance of the feedback for both aspects.
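The scoring rule can be summarised in a few lines of Python; the helper names are ours, but the constants follow the description above.

```python
def trial_score(class_label: int, mean_classifier_output: float) -> float:
    """Points for one trial: Class Label * Classifier Output * 200, in [-100, 100].

    class_label is -1 for a left-hand MI and +1 for a right-hand MI;
    mean_classifier_output is the trial-averaged classifier output in [-0.5, 0.5].
    """
    return class_label * mean_classifier_output * 200


def counting_bonus(reported: int, actual: int) -> int:
    """Reward or penalty for the distracter counting task at the end of a run."""
    error = abs(reported - actual)
    if error == 0:
        return 200      # correct count
    if error == 1:
        return 0        # off by one: no change
    return -200         # otherwise: penalty


def final_score(trials: list[tuple[int, float]], reported: int, actual: int) -> float:
    """Final score of a run: sum of the 40 trial scores plus the counting bonus."""
    mi_score = sum(trial_score(label, output) for label, output in trials)
    return mi_score + counting_bonus(reported, actual)
```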

5.5 EEG Recordings and Signal Processing

The EEG was recorded with a BrainVision actiCHamp amplifier (Brain Products) using 30 scalp electrodes (F3, Fz, F4, FT7, FC5, FC3, FCz, FC4, FC6, FT8, C5, C3, C1, Cz, C2, C4, C6, CP3, CPz, CP4, P5, P3, P1, Pz, P2, P4, P6, PO7, PO8, 10–20 system), referenced to the right mastoid and grounded at AFz. These electrodes cover the sensorimotor cortex, where EEG variations due to MI can be measured. EEG data were sampled at 256 Hz. First, EEG signals were band-pass filtered in 8–30 Hz (the band containing the SMRs) [22]. At the end of the first run, which served to train the classifier, a Common Spatial Pattern (CSP) algorithm [19] was applied to each user’s collected data to find 6 spatial filters whose resulting EEG power was maximally different between the two MI tasks. The power of the spatially filtered EEG signals (computed on a 1 s time window, sliding by 250 ms between consecutive windows) was used to train a linear Support Vector Machine (SVM) [19]. The SVM was then used online to differentiate between left- and right-hand MI during the 5 user-training runs. The SVM classifier provided a probability value in [0, 1] indicating which of the two classes the signal belonged to. For convenience, we subtracted 0.5 from the classifier output so that negative values, in [−0.5, 0), corresponded to left-hand MI recognition and positive values, in (0, 0.5], to right-hand MI recognition.
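As a reference, the processing chain described above (band-pass filtering to 8–30 Hz, CSP spatial filtering, band-power features over 1 s windows, and a linear SVM whose probability output is shifted by 0.5) can be sketched with MNE-Python and scikit-learn. The original toolchain is not specified in the paper, so this is an illustrative re-implementation under those assumptions, not the authors’ code.

```python
import numpy as np
from mne.decoding import CSP
from mne.filter import filter_data
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

SFREQ = 256  # Hz, as in the recording described above


def bandpass(eeg: np.ndarray) -> np.ndarray:
    """Band-pass filter continuous EEG (n_channels x n_samples, float64) to 8-30 Hz."""
    return filter_data(eeg, SFREQ, 8.0, 30.0)


def make_classifier():
    """6 CSP spatial filters (log band power) followed by a linear SVM with probabilities."""
    return make_pipeline(CSP(n_components=6, log=True),
                         SVC(kernel="linear", probability=True))


def feedback_value(clf, window: np.ndarray) -> float:
    """Map one 1 s EEG window (n_channels x n_samples) to a value in [-0.5, 0.5]."""
    p_right = clf.predict_proba(window[np.newaxis])[0, 1]  # probability of right-hand MI
    return p_right - 0.5  # negative -> left-hand MI, positive -> right-hand MI


# Calibration on the first run (hypothetical shapes): X is
# (n_trials, 30 channels, n_samples) of band-passed EEG and y contains
# 0 for left-hand MI and 1 for right-hand MI.
# clf = make_classifier().fit(X, y)
```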

5.6 Results

The main measurements of interest are (1) the final score (the sum of the MI task and counting task scores), (2) the MI score alone, and (3) the absolute value of the difference between the counted and the actual number of distracters. These measures were analysed using three two-factor (independent) ANOVAs. We performed two-way ANOVAs so that we could analyse the interaction between the two factors. However, given the low number of participants per condition (8 and 9), it was not possible to test the prerequisites of this analysis. Thus, we computed effect sizes to assess the robustness of our results. Analyses were performed on 17 participants: 8 in the visual condition and 9 in the tactile condition. The data of one outlier participant were removed, as his final score (1628.8 ± 630.5) differed considerably from his group’s mean final score (183.0 ± 559.5).
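The kind of two-factor ANOVA with eta-squared effect sizes reported below could, for instance, be computed as in the following Python sketch using statsmodels; the file name and column names ('score', 'feedback', 'run') are placeholders, and the statistical software actually used is not specified in the paper.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format table: one row per participant and run, with columns
# 'score' (final score), 'feedback' (visual/tactile) and 'run' (1-5).
df = pd.read_csv("scores.csv")

model = ols("score ~ C(feedback) * C(run)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
# Eta-squared: each effect's sum of squares divided by the total sum of squares.
table["eta_sq"] = table["sum_sq"] / table["sum_sq"].sum()
print(table)
```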

The two-factor ANOVA on the final score shows a main effect of the Feedback Condition (visual vs. tactile) [F(1,15) = 6.327, p < 0.05, η² = 0.291], a main effect of the Run [F(1,15) = 3.961, p < 0.01, η² = 0.457], but no Run × Feedback Condition interaction [F(1,15) = 1.476, p = 0.243, η² = 0.09]. The Feedback Condition effect is due to participants in the tactile feedback group obtaining significantly better results than participants in the visual feedback group. Furthermore, concerning the Run main effect, post hoc analysis shows a significant increase in performance between Run 1 and Run 5 (p < 0.005) (Fig. 8), which reveals a learning effect for the motor-imagery task, consistent with the large effect size.

Fig. 8.

Average of the final scores (with standard error): sum of the MI task score and the distracter counting task score (reward and penalty).

The two-factor ANOVA on the MI scores (Fig. 9, left) shows strong tendencies towards a Run main effect [F(1,15) = 3.961, p = 0.065, η² = 0.209] and towards a Feedback Condition effect [F(1,15) = 4.063, p = 0.062, η² = 0.213], as well as no interaction between these two factors [F(1,15) = 1.207, p = 0.289, η² = 0.074]. These results indicate a strong tendency towards a better MI score with tactile feedback than with visual feedback and a tendency towards an improved MI score across the Runs.

Fig. 9.

Left: Average of the MI scores (with standard error) without the counting task (reward and penalty). Right: Average of the distracter errors (difference between the counted and the actual number) for the counting task as a function of Run number and Feedback Condition.

The two-factor ANOVA on the counting task (Fig. 9, right) shows a main effect of the Run [F(1,15) = 9.806, p < 0.01] but no main effect of the Feedback Condition [F(1,15) = 2.860, p = 0.111] and no Run × Condition interaction [F(1,15) = 0.000, p = 0.990]. Thus, the participants improved their performance on the counting task across the Runs. Indeed, post hoc analysis shows a significant difference between Run 1 and Run 4 (p < 0.001) and between Run 1 and Run 5 (p < 0.005).

5.7 User-Experience Results

As no adapted BCI user-experience questionnaire exists, we proposed a customised one, designed to measure four dimensions of usability: learnability/memorability (LM), efficiency/effectiveness (EE), safety and satisfaction. One-factor ANOVAs did not reveal any difference between the visual and tactile feedback conditions: LM \( [\bar{X}_{visual} = 60.47 \pm 10.52, \bar{X}_{tactile} = 56.53 \pm 13.46; F(1,17) = 0.444, p = 0.515] \), EE \( [\bar{X}_{visual} = 67.86 \pm 13.72, \bar{X}_{tactile} = 56.19 \pm 19.95; F(1,17) = 1.921, p = 0.186] \), Satisfaction \( [\bar{X}_{visual} = 67.50 \pm 13.42, \bar{X}_{tactile} = 58.70 \pm 20.39; F(1,17) = 1.071, p = 0.317] \), Safety \( [\bar{X}_{visual} = 61.25 \pm 18.08, \bar{X}_{tactile} = 55.56 \pm 23.51; F(1,17) = 0.307, p = 0.588] \).

5.8 Discussion

Results and Ecological Validity: While the participants did not find the MI-BCI training easier or more satisfying with the tactile feedback (cf. the user-experience questionnaire), the results suggest that continuous tactile feedback can significantly improve users’ MI-BCI performances compared to an equivalent visual feedback (same timing and update rate) in an interactive context. We believe that testing these equivalent feedback modalities in a multitasking context increases the ecological validity of the results. Branching Effect: Results suggest that vibrotactile feedback can support branching tasks better than visual feedback in this context. Learning Effect: Results reveal a learning effect for the MI tasks with both feedback modalities; both feedback types seem to support learning equally well in the investigated task.

6 Discussion

Our study suggests that it is possible to provide MI-BCI users with relevant continuous vibrotactile feedback while they are performing MI tasks, and that this tactile feedback can improve BCI control reliability in a multitasking context (compared to an equivalent visual feedback). It suggests that providing feedback through a modality other than vision, but with the same content, has advantages: it tends to improve users’ BCI control and frees the visual channel, and thus cognitive resources, to perform other tasks. For some BCI users, it may be difficult to pay attention to both the visual feedback and the MI task; naive BCI users often report this issue informally. This difficulty is drastically increased when the MI has to be performed in a multitasking environment (e.g., a game). Indeed, in this kind of environment, the visual channel is overtaxed de facto. Thus, providing visual feedback about the MI-BCI tasks being performed, in addition to environment-related information, forces users to split their attention and use more cognitive resources. Besides, receiving continuous tactile feedback consistent with the motor imagery tasks being performed is probably more natural and intuitive than visual feedback.

Furthermore, from the point of view of MI-BCI based applications, our study showed that continuous tactile feedback enabled users to perform better at multitasking. Indeed, most interactive applications (e.g., games) rely heavily on the visual modality, and users are often asked to split their visual attention between different tasks and events. As such, our study suggests that further increasing the visual workload by adding visual BCI feedback is actually detrimental to BCI reliability. Yet, although BCI-based gaming has been rather extensively studied [14], all the BCI/game studies used only the visual modality. Our study suggests that continuous vibrotactile feedback is more appropriate. In the future, we could imagine designing BCI-based game control pads that provide tactile feedback. Since real life contains many visual distractions (as is the case in video games), we can expect our tactile feedback to improve BCI performance even outside a gaming context.

Overall, tactile feedback for BCIs has so far been studied mostly with discrete feedback and targeted at patients. Only a few studies have explored continuous tactile feedback for BCIs [5, 9, 15]. Their results suggested that, at best, tactile feedback was as good as visual feedback [5] for MI-BCI control. In our study, when using continuous tactile feedback with the same content as the visual one, and in a multitasking context, a different picture emerged: tactile feedback seems to improve user performance, both for MI-BCI control and for the side task (counting the distracters). However, in our study, tactile feedback was continuously updated (at 4 Hz) but not spatially continuous. This is due to the technical difficulty of providing a good motion illusion to the users. One study [5] addressed this point and obtained encouraging results using motion illusion for MI-BCI feedback. This difficulty, added to the fact that we could only embed 5 motors per glove, explains why we had to divide the classifier output into 10 intervals, and thus provide feedback that is not spatially continuous. Overcoming this technical issue would lead to spatially and temporally continuous feedback, which would increase the level of precision. However, no study has yet determined which level of precision is associated with the best MI-BCI performance. One could argue that higher precision allows users to increase their performance, but also that too much information could increase the workload, and thus decrease performance.

To summarise, our study reinforces the idea that tactile feedback combined with MI-BCIs has the potential to enrich a wide range of interactive applications for the general public, such as gaming. However, using tactile feedback for interactive applications in general, and gaming settings in particular, requires the designer’s attention and creativity.

7 Conclusion and Future Work

Our results showed that it is possible to provide MI-BCI users with an intuitive and efficient continuously updated vibrotactile feedback. A first user study allowed us to determine the parameters of this tactile feedback:

  • Tactile feedback location: we chose the palms of the hands for their high spatial acuity and their consistency with the MI tasks (left- and right-hand movements) [24].

  • Tactile feedback update rate: we used a 4 Hz update rate so that each feedback update is well perceived by the user [7].

  • Pattern of vibration: our first user study suggested that tactile feedback based on localised stimulation (one motor at a time) is more pleasant and distinguishable than simultaneous vibrations.

  • Intensity of vibration: our first study suggested that vibration intensities between 0.3G (40 Hz) and 0.5G (60 Hz) were best: lower intensities did not allow users to perceive the feedback clearly, whereas higher intensities were uncomfortable.

This tactile feedback was associated with better MI-BCI performances and better scores on the counting task than visual feedback in a multitasking context, thus suggesting that it could be an effective means of supporting users in a wide range of interactive applications.

In the future, several elements should be considered in order to increase the validity of this study. First, more participants should be included. Moreover, as long-term use of continuous tactile feedback could result in palm desensitisation and thus a decrease in performance, it would be important to determine when the feedback is useful or not, so that performance can be optimised. Finally, in this study only the form of the feedback was investigated. Yet, much work remains to be done on the feedback content so that it becomes truly relevant; among other properties, it should be explanatory, supportive and meaningful [16].