
1 Introduction

A Brain-Computer Interface (BCI) measures central nervous system (CNS) activity and translates it into an artificial output that replaces, restores, enhances, supplements, or improves the natural CNS output [1]. Traditionally, BCI has been used primarily in clinical research focused on developing assistive technologies for people with disabilities. In recent years, interest in BCI research has grown within the HCI community, where the main focus is to design, develop, and evaluate BCI applications that assist healthy users in their daily lives. Accordingly, there have been discussions on the importance of User Experience (UX) evaluation in BCI research. Through UX evaluation, these applications can be improved and adapted to users' needs and preferences. Cooperative Brain-Robot Interaction (cBRI) can likewise benefit from UX data. BRI is the study of how humans interact with robots (physical and simulated) via cognitive (non-muscular) communication; cBRI is the study of how two or more users can collaborate to control machines cognitively. As with entertainment applications, UX evaluation is needed for cBRI to investigate how users feel while controlling robots. Previous cBRI work has focused mainly on objective data collected during studies. Although this approach helps in understanding a system's functionality, it may exclude important information on how users perceive the system's performance. In addition, UX evaluation could have an impact on the system's performance [2].

This paper explores how UX evaluation and affective measurement can benefit BRI research. It explains how methods such as interviews and questionnaires can contribute to the enhancement of BRI systems. However, interviews and questionnaires can fail to capture useful information because participants may at times be unaware of how they truly feel. Neurophysiological measurements (affective analysis) may therefore help in interpreting how users feel while performing BRI tasks: they provide objective physiological data that can be used to gauge a user's emotional state. This paper focuses on interpreting engagement data to complement both the robot data and the UX data. Engagement information has proven beneficial in recent BCI studies [3, 4]. Users who are more engaged may be more focused on BRI tasks; as a result, engagement levels could relate to situational awareness. Analyzing engagement data alongside other measurements could provide insight into how affective data can supplement research similar to this work in the future. Furthermore, this paper discusses how robotic objective data, subjective data, and affective data may be used together to provide a more holistic evaluation of the UX of cBRI. An example of evaluating solo versus cooperative BRI is included to show the usefulness of combining affective, subjective, and objective data. The final sections of this paper provide recommendations for evaluating the UX of BRI applications in the future.

2 Related Work

Recently there have been efforts to investigate how multiple users interact and/or perform when using BCI devices together. Much of this work is aimed at healthy users. Although no UX evaluation for BRI has been done to date, previous research suggests that cooperative brain control could cause less fatigue and cognitive load than solo brain control [5]. Much of the work in this area has been done on gaming. Nijholt and Gürkök surveyed research on multi-user brain-computer applications [6]. In particular, they looked into gaming applications that incorporated a multi-brain setup. After investigating various existing multi-brain applications, the authors concluded that even though much work remains in this research area, multiparty brain gaming has the potential to provide a challenging, engaging, and enjoyable experience for players. Their survey also mentions recent cooperative control research; although a system similar to full cooperative control of a robot is mentioned, there is no evidence that this topic has been thoroughly investigated. Hjelm et al. provided an early example of multi-brain interaction [7]. In their work, two players controlled a ball through their state of relaxation, the objective being to place the ball in the opponent's goalmouth. This research showed how one can compete and relax at the same time. Gürkök et al. researched multi-player BCI UX [8]. In this work, UX was reported via observational analysis of the social interaction that occurred while pairs of players played a collaborative BCI game. To learn more about how users viewed BCI control, participants compared BCI with using a mouse to complete the same tasks. According to the article, users collaborated less when using the BCI in an effort to retain control of the robot. It was concluded that the UX of games that use BCI for direct input depends on advances in classical BCI. Bonnet et al. evaluated a multi-player BCI game in both a collaborative and a competitive mode [9]. In this study, users attempted to move a ball to either the left or the right side of the screen; the work provides a comparison of solo and multiplayer motor-imagery-based BCI gaming. Eckstein et al. investigated whether a collection of brain signals could together make better decisions [10]: in their work, twenty humans made perceptual decisions together via brain signals. Obbink et al. looked into the social interaction aspect of multi-brain BCI gaming [11]. According to their report, although users felt they collaborated better using a point-and-click device, there were also promising results for the use of multi-brain BCI. Nijholt et al. investigated multi-party social interaction [12], discussing the importance of researching ways healthy BCI users can use BCI technology collaboratively. Poli et al. discuss how cooperative BCI can be used to assist with space navigation [13]; they reported that results from cooperative control were statistically significantly better than those from solo control.

There is a substantial body of work on using BCI to control robots. Yet, although present in other areas of BCI, there has been little investigation of the user's overall experience while performing BRI tasks. Most of the previous work mentioned above relied only on task performance data when evaluating BRI systems. This paper discusses a novel concept: using subjective UX data, neurophysiological measurements, and objective data together as a metric of success.

3 UX Impact on BRI

Currently there is a limited amount of research on BRI systems with a focus on HCI, and even less on applying UX evaluation to BRI systems. Multiple factors contribute to this. One factor is that these types of systems are still mostly in a proof-of-concept phase [15]. As a result, the research has taken place mainly in labs that concentrate on optimizing system performance; these projects focus mostly on detection, performance, and speed. This research has helped BCI progress, but as the fields of BCI and HCI begin to merge, new findings about the impact of UX on BRI are being discovered. An example is recent studies suggesting a relation between motivation and task performance [16]. This finding came from only a few studies; more focus on evaluating the UX of BRI could lead to further discoveries that enhance future BRI applications. Additionally, BCI for control has been a key topic in the medical domain from the beginning. In these implementations, system performance weighs heavily on practitioners, which often results in UX evaluation being overlooked or ignored. One possible reason is that participants in the target population may not have the capacity to provide reliable subjective data. A possible solution is to adapt methods used in other areas to assess UX for non-healthy users. The introduction of off-the-shelf non-invasive BCI devices has enabled researchers and developers outside the medical domain to work in the field, resulting in more applications targeting healthy users. The UX of these new applications matters greatly to this target population. BCI studies that have investigated UX mention user acceptance, system performance, and user enjoyment as the main reasons why evaluating UX is vital [2]. User acceptance can play a key role in BRI systems: users who are frustrated with a system's design could perform cognitive tasks poorly due to system-level issues. One BCI study reported that users performed worse with both positively and negatively biased (inaccurate) feedback [17]. To address this, a user-centered design process that involves users iteratively throughout development should be used; this in turn could increase user acceptance and reduce factors that hinder UX. To assess user acceptance, tools such as the System Usability Scale (SUS) and the NASA Task Load Index (TLX) can be used to assess usability and cognitive workload, respectively, as the sketch below illustrates for SUS scoring.
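As an illustration, SUS responses can be scored with the standard formula: odd-numbered items contribute the response minus one, even-numbered items contribute five minus the response, and the sum is scaled by 2.5. The following is a minimal JavaScript sketch of that scoring; the function name and example responses are illustrative, not taken from any particular study.

```javascript
// Minimal sketch: scoring a standard 10-item SUS questionnaire.
// `responses` holds ten integers in [1, 5], one per item, in order.
// Odd-numbered items (positively worded) contribute (response - 1);
// even-numbered items (negatively worded) contribute (5 - response).
// The summed contributions are scaled by 2.5 to give a 0-100 score.
function scoreSUS(responses) {
  if (responses.length !== 10) {
    throw new Error("SUS requires exactly 10 item responses");
  }
  const sum = responses.reduce((acc, r, i) => {
    // i is zero-based, so even indices are the odd-numbered items.
    return acc + (i % 2 === 0 ? r - 1 : 5 - r);
  }, 0);
  return sum * 2.5;
}

console.log(scoreSUS([4, 2, 4, 2, 5, 1, 4, 2, 4, 2])); // 80
```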

Many current BCI systems for healthy users are marketed as entertainment applications, where user enjoyment is crucial to success. UX evaluation provides an opportunity to assess this through various methods, one of which is surveys. The Game Experience Questionnaire (GEQ) is an example of a tool that has previously been used to gain insight into users' level of enjoyment with BCI applications [18]. Based on trends within BCI, BRI could become popular for entertainment purposes. Given this possibility, addressing the impact of UX on BRI will be important for future applications. There are currently many limitations to using BCI for control, but investigating the impact of UX could provide clues to ways of addressing some of them. Although UX is not commonly assessed during BRI studies, previous research suggests that UX can influence objective performance measures in these kinds of systems [2].

For example, by collecting subjective data one can gain insights into possible confounding factors, such as the BCI device being uncomfortable and therefore distracting the user. Other distractions could also occur, such as the user being unclear about directions or uncomfortable with the experimenter mounting the device. These are just some examples of latent issues that might surface only if researchers investigate the more subjective side of their experiments.

Collecting UX data is important to determine user acceptance, system performance, and user enjoyment, but it can also be used to help further validate the objective data.

4 Approach

To investigate the use of affective and objective data in BRI task evaluation, a simulation environment was developed. Both solo and cooperative control were tested using this environment. While the robot was being cognitively controlled, affective data was collected to measure engagement levels. This data was then used to gain insights into the relationship between neurophysiological measurements and objective performance measures.

4.1 Non-invasive Emotiv BCI Apparatus

The Emotiv EPOC (Fig. 1) is a wireless, non-invasive EEG data acquisition and processing device that connects to a computer via Bluetooth. It has 14 electrodes (AF3, AF4, F7, F3, FC5, T7, P7, O1, O2, P8, T8, FC6, F4, F8) and 2 references (at the P3/P4 locations) to obtain the EEG signals. These channels follow the international 10–20 system, the standard naming and positioning scheme for EEG electrodes. The device uses sequential sampling at a rate of 2048 Hz. It was chosen over alternatives for its portability and its adaptability as a wearable computing system, and because it has been widely used by other HCI researchers both as an input device and to study user state, which demonstrates its adaptability and accuracy across different tasks.

Fig. 1. EEG Emotiv EPOC device

Using the BCI begins with mounting the device. Once it is mounted, the Emotiv Control Panel can be used to get visual feedback on the signal quality of the electrodes: green, yellow, red, and black electrodes represent good, fair, bad, and no signal, respectively. Before a robot can be cognitively controlled in the simulation, training must be completed. Training is managed in the Control Panel; the training phase consists of visualizing a movement over a period of time. In this work, push and right were the two trained commands. The push command translated into a move-forward command; the right command updated the robot's current angle, causing it to rotate clockwise. Once training is complete, the robot can be moved cognitively using the trained commands. EmoKey, another component of the Emotiv software suite, was used to map cognitive commands to keystrokes: forward commands were mapped to the 'w' keystroke and right to the 'd' keystroke. When the simulation application detected these keystrokes, the robot performed the corresponding action, as in the sketch below.
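Because EmoKey emits ordinary keystrokes, the simulation can treat cognitive commands exactly like keyboard input. The following is a minimal sketch of such a handler, assuming the 'w'/'d' mapping described above; the robot object and its method names are illustrative, not the original implementation.

```javascript
// Minimal sketch: receiving EmoKey-mapped keystrokes in the browser.
// The `robot` object and its methods are illustrative placeholders.
document.addEventListener("keydown", (event) => {
  switch (event.key) {
    case "w": // trained "push" command, mapped by EmoKey
      robot.moveForward();
      break;
    case "d": // trained "right" command, mapped by EmoKey
      robot.rotateClockwise();
      break;
  }
});
```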

4.2 Simulation Design

The simulation was developed using HTML5 and JavaScript. The yellow canvas serves as the main stage, as shown in Fig. 2. The red block represents a top-down view of the simulated robot, the black bars are obstacles in the environment, and the green square is the target. Cognitive commands sent to the robot from the BCI device moved it forward or rotated it clockwise. The position of the robot is determined by the equations shown in Fig. 3. The first equation calculates the robot's next position: x and y hold the robot's current coordinates, and the new x and y values appear on the left side of the equation. The equations also include a variable f, which controls how far the robot travels per step; to maintain the same speed throughout a session, f is set to a constant. In the x equation, f is multiplied by the sine of the robot's current angle; in the y equation, f is multiplied by the cosine of that angle. The current angle is converted to radians to compute the 2D coordinates. This is a well-known formula commonly used in 2D simulations. When the robot receives a move-forward command, the x and y coordinates are updated; when it receives a rotate command, the current angle is updated. Due to the current limitations of BCI, the setup was kept simple so that the robot could be navigated through the simulation environment successfully. A sketch of this update step follows the figures below.

Fig. 2. Simulation environment

Fig. 3. Equation to determine the robot's position

5 Results

Two tests were performed to collect affective and objective data while a robot was being cognitively controlled. The first test consisted of a solo BRI task, in which one user was responsible for both the push and rotate commands. The second test consisted of two users cooperatively controlling the robot, with the two commands divided between them: one user was responsible for the forward command and the other controlled the robot's rotation. In both cases, the goal of the task was to move the robot to the green square shown in Fig. 2. Once a command was sent, the robot moved and then stopped to wait for further commands. Each time the robot ran into a wall, an error was logged. The task ended once the robot reached the green square. Task completion time and engagement levels were collected; engagement was computed using the well-known formula shown in Fig. 4 [14].

Fig. 4. Engagement formula

Fig. 5. Solo and cooperative engagement levels

6 Discussion

Prior to completing the solo task, the user trained the push and right commands. Engagement and objective data collection were synchronized so that user state, robot position, and commands could be analyzed together (a sketch of such logging is given below). During solo control, it took 351 s to navigate the robot to the target, and the average engagement level during this task was 0.15213. As shown in Fig. 6, the completion time for this task was greater than for the cooperative task, as expected. The average time the robot stayed idle during this task was 2.92 s. Figure 5 shows engagement levels for the solo task during the initial 133 s. As shown by the green line, there were a few drops in engagement during this task. Although engagement increased at times, the graph shows that, on average, engagement for the solo task remained lower than for cooperative control during the initial seconds of the task. There could be multiple reasons for this. One may be that the user only needed to focus on self-motivated movements; during the solo task there was no need to collaborate, which could also have influenced engagement levels.
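A minimal sketch of such synchronized logging, assuming a single shared clock in the browser; the record fields are illustrative, not the original system's schema.

```javascript
// Minimal sketch: logging user state and robot state against a shared
// clock so engagement, position, and commands can be analyzed together.
const log = [];

function record(engagement, robot, command) {
  log.push({
    t: performance.now() / 1000,  // seconds on a shared clock
    engagement: engagement,       // current engagement index
    x: robot.x,                   // robot position
    y: robot.y,
    angleDeg: robot.angleDeg,     // robot heading
    command: command,             // e.g. "forward", "rotate", or null when idle
  });
}
```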

Fig. 6. Task completion times

The two users in the cooperative task trained one command each. The cooperative task took 133 s to complete. The average engagement for the user controlling the forward command was 0.043891; for the user controlling the rotate command it was 0.137043. The robot stayed idle for 1.012 s on average, less than during the solo task, probably because each user only had to attend to one command, allowing the robot to be moved more often (a robot that stays idle for long periods will tend to have a slower completion time). There was, however, a cost associated with moving the robot more frequently: as shown in Fig. 7, the cooperative task had more errors. Errors were classified as any collision of the robot with the black walls shown in Fig. 2; a sketch of such an error check is given below. These errors probably occurred because the users were at times unsure of each other's intentions. Feedback in the form of a dynamic cognitive gauge could be used to communicate each user's intended command and help avoid collisions with the walls. Figure 5 shows that the cooperative forward user and the solo user had similar levels of engagement, which would be expected given the constant need for the forward command.
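A minimal sketch of such an error check, assuming the robot and walls are axis-aligned rectangles {x, y, w, h} as the top-down view suggests; the shapes and counting scheme are assumptions for illustration.

```javascript
// Minimal sketch: counting an error whenever the robot's bounding box
// overlaps a wall's bounding box (axis-aligned rectangle test).
function collides(a, b) {
  return a.x < b.x + b.w && a.x + a.w > b.x &&
         a.y < b.y + b.h && a.y + a.h > b.y;
}

let errorCount = 0;

function checkForError(robot, walls) {
  if (walls.some((wall) => collides(robot, wall))) {
    errorCount += 1; // logged as an error, as in the study
  }
}
```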

Fig. 7. Task errors

7 Conclusion

Although more errors occurred during the cooperative test, additional training could have reduced them. The engagement data show that a cooperative system can result in different experiences for collaborating users. To address this, further research should investigate ways to provide an equal experience for collaborating users: cooperative cognitive systems that do not balance the experience could degrade performance for one user, which could influence the system as a whole. Analyzing the affective data gives insight into this issue and serves as an example of the usefulness of neurophysiological data; going forward, this approach could uncover further details about similar systems. One key next step is to extend this work with more participants. Subjective data will also be collected to investigate the relationship between neurophysiological, subjective, and objective data. Further investigation could give more insight into how UX evaluation can benefit BRI research.