1 Introduction

The interaction between human brains and machines is an emerging area in computer science. Bringing aspects of human-centered computing into this area, especially where a Brain-Computer Interface (BCI) is used as a mechanism for control, adds a hands-free dimension to computing. The research presented in this paper is, to our knowledge, the first study demonstrating able-bodied users painting with their brains in an immersive Virtual Reality (VR) environment as a medium for creative expression. The immersive brain painting environment allows users to paint on a virtual canvas without requiring physical movement [1,2,3].

Creative expression is considered to this day a uniquely human ability and skill. Art has taken many forms, such as sculpting and painting. We introduce a new interactive method of creating art using only the brain in an immersive environment. Brain Painting has been shown to improve the quality of life (QOL) of patients with ALS by giving them a way to express themselves and to affect society through art exhibitions [1]. Although there is currently no known cure for ALS, such outlets can help mitigate the physical and psychological impairments of those living with the disease.

In order to understand whether our novel VR Brain Painting application elicits a more positive experience, we tested our system on participants without ALS. This allowed us to validate the system, as well as identify improvements, before running trials directly with the ALS community. Although further research is needed to fully validate the effect of immersion, trials with healthy participants accelerate the research process, due to the larger participant pool, and yield insights into aspects of user experience that both populations share. If the user experience and usability are good, we expect that participants will be able to express themselves naturally through their brain paintings. Since the focus in BCI research has mostly been on the reliability of applications, no standardized methods to assess the user experience of BCIs exist at the moment. Therefore, in this study we use standardized questionnaires used in Affective Computing and Human-Computer Interaction research to collect data about our participants’ experience. We show that Brain Painting in an immersive virtual environment can make users feel more “present” [6] while they express themselves creatively, and that this leads to a more positive user experience. We do this by comparing their emotional state before and after using traditional Brain Painting and VR Brain Painting, as measured by the Positive and Negative Affect Schedule (PANAS) [4], which gives a score for positive and negative affect between 10 and 50 points each. We also measure cognitive workload with the NASA Task Load Index (TLX) [5] and felt presence with the Presence Questionnaire [6]. We hypothesize that the immersive VR Brain Painting will allow the user to feel more present while using the application and to have a better experience as measured by the PANAS survey.

2 Related Work

The world’s first brain-computer interfaces (BCIs) enabling creative expression in paralyzed patients were introduced by a group at the University of Tübingen [1]. They investigated the efficacy and user-friendliness of P300-based Brain Painting, an application developed to paint pictures using brain activity alone. The application used two screens for the painter: one displayed the P300 matrix while the other showed the painting canvas. The standard P300 speller matrix, proposed by Donchin et al., was adapted to contain symbols indicating different colors, objects, grid sizes, object sizes, transparency, zoom, and cursor movement. They demonstrated the usability of the P300 in painting applications and reported qualitative results about peoples’ experiences. Their user study with three ALS patients and 10 healthy participants achieved an average accuracy above 70% and a bit transfer rate of 4.41 bits/min during the brain painting trials, which reflects the accuracies and transfer rates of modern P300 applications. The surveys given in the study measured each participant’s motivation and mood prior to the study, but no comparative before-and-after measures were taken to gauge the benefits of using the device.

This study was followed by an extended version of the first Brain Painting application, with a larger selection of functions and a longer study [3]. The authors conducted a more extensive user study with two ALS patients over 27 home-use days across 3.5 months to demonstrate how P300-based Brain Painting could be integrated into everyday life to promote QOL. Since their results are based on only two end-users, they cannot be generalized to the population of potential users, but the high satisfaction achieved with their participants demonstrates the benefit of adopting user-centered design in BCI development. This was a long-term study, and they used the QUEST 2.0 [7] along with a self-reported satisfaction score between 1 and 10 to assess the satisfaction and usability of the device over the course of the study. As our experiment measures the direct effects of a single session of Brain Painting, we opted for before-and-after PANAS surveys instead of self-reported satisfaction scores; and since our study involves healthy participants, for whom the devices are not considered assistive, we left out the QUEST 2.0 survey.

Although Brain Painting in Virtual Reality has not been explored yet, there have been many other attempts to bring BCI into virtual reality through conventional paradigms such as Motor Imagery, P300, SSVEP, and even hybrid P300/SSVEP, with much success [8,9,10,11]. These have been used in applications for navigation [11], object control [10], and even movement [9], thanks to recent technological advancements in VR headsets and the design of BCI devices. Surprisingly, none of these advancements have been applied to the Brain Painting space; in this research, we present the first glimpse of P300 Brain Painting in Virtual Reality.

Our approach differs from past studies in two significant ways: (1) this research focuses on the direct effects of Brain Painting on a person’s affective state, and (2) it is the first evaluation of Brain Painting in an immersive VR environment with a VR headset.

3 Methodology

3.1 Study Design

A within-subject study was performed to evaluate the participants’ experiences according to changes in their affective state during the use of Brain Painting, and to test whether the new environment affected the participants’ measured cognitive workload (NASA TLX) or felt presence (Presence Questionnaire), along with their overall experience.

A total of eight participants took part in this study. All were students at the University of South Florida, Tampa, Florida, USA, with an age range of 18–31. All of them reported having used a BCI before. This information was recorded by the pre-experiment survey, which asked questions regarding demographics and experience with brain-computer interface devices, and included a pre-experiment PANAS survey to measure each participant’s affective state before the study. This pre-experiment PANAS survey serves as the baseline against which the scores at the end of both the immersive and non-immersive tasks are compared. After each task, the participants were also asked to fill out surveys about their experience: the post-task PANAS survey, the NASA TLX, and the Presence Questionnaire.

Study Procedure

Each study session lasted approximately two hours and consisted of 8 parts: (1) pre-experiment surveys, (2) setup and training session of the first application, (3) testing of the first application, (4) post-survey of the first application, (5) training session of the second application, (6) testing of the second application, (7) post-survey of the second application, and finally, (8) post-experiment surveys.

After the pre-experiment surveys, the next part of the study consisted of a mounting session in which the participant was fitted with the G.Tec Nautilus device and the Oculus Rift. After mounting the devices, they were asked to train the P300 classifier by focusing on randomly flashing symbols on the screen, generating the EEG data used to train a Linear Discriminant Analysis (LDA) classifier. During a series of 10 repetitions of the 12 row and column flashes (chosen based on previous studies, with training time in mind [10]), P300 event-related potentials (ERPs) are elicited in the user whenever the flashed row or column contains the chosen symbol. The LDA classifier was then trained to detect these ERPs, which it translates into commands for the paint utility (see the sketch below). After training was completed (approximately 30 min), the participant went through a short familiarization session in which they were asked to make any five selections from the painting interface using the P300 stimuli. After this pre-testing phase, the participant was given a session of 10 more commands to use the application; their task was to use the Brain Painting application to draw whatever they wanted. This process, intended to gauge the effects of using the application in an uncontrolled recreational setting, is similar to other studies where the session is done with ALS participants at home [1, 2]. Each participant tested both the immersive and non-immersive versions of the application, and they were split equally into two groups: one group used the non-immersive application first, then the immersive; the other used the immersive application first, then the non-immersive. This counterbalancing was done to reduce ordering bias in the results. At the end of each task, the participants were asked to fill out surveys about their experience.
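As a rough illustration of this calibration step, the following sketch fits an LDA classifier on labeled post-flash EEG epochs, with scikit-learn standing in for the OpenVibe pipeline described in Sect. 3.2. The array shapes, function name, and use of scikit-learn are our own assumptions for illustration, not the actual implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_p300_classifier(epochs: np.ndarray, labels: np.ndarray):
    """Fit an LDA on post-flash EEG epochs (illustrative shapes).

    epochs: (n_flashes, n_channels, n_samples) EEG segments after each flash.
    labels: (n_flashes,) 1 if the flashed row/column contained the attended
            symbol (target, P300 expected), 0 otherwise.
    """
    X = epochs.reshape(len(epochs), -1)  # one feature vector per flash
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf
```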

3.2 System Design

For our brain-painting control interface, we use 16 dry-electrode channels with the G.Tec Nautilus electroencephalography (EEG) device. We chose this device because it is lightweight, wireless, does not require electrode gel, and can collect high-quality EEG data. These features allow for easy integration with the Oculus Rift head-mounted display, which is used to present the canvas and brush to the user while painting. Electrodes were placed at positions Cz, CPz, P1, P3, P5, P7, Pz, P2, P4, P6, P8, PO3, PO7, POz, PO6, and PO4 according to the 10–20 international system. These positions are located over the parietal lobe, where the P300 response is most prominent [12].

For this study, two different applications were made (Fig. 1). One of the applications has the non-immersive interface, which consists of the paint canvas and the symbol matrix of visual stimuli on a computer monitor, without a VR headset. The other has the immersive interface, which also consists of the paint canvas and symbol matrix of visual stimuli, but displayed within the Oculus Rift VR headset that the user is wearing.

Fig. 1. Brain Painting Interface – A user wearing the G.Tec Nautilus EEG headset performing the two tasks. On the left, the non-immersive interface (a) and its symbol matrix of visual stimuli (c) are shown. On the right, the immersive interface (b) and its symbol matrix of visual stimuli (d) are shown, with the user wearing an Oculus Rift VR headset.

User Interface

The P300-based brain-computer interface for abstract painting in an immersive VR environment was developed using the Unity Game Engine. The immersive environment is divided into two components comprising the total field of view seen in the Oculus Rift VR headset. The first component incorporates the visual stimulus: a 6-by-6 grid containing symbols that represent actions in the painting utility (i.e., movement, changing color, or switching brushes). The second component consists of a canvas where the painting takes place. A cursor, placed on the canvas, responds to the option selected from the visual stimulus panel. The options include moving in eight directions (up, down, left, right, and the four diagonals) to draw lines connecting the cursor’s current position to its next position on the canvas; a simple sketch of this mapping follows. Other features of the application allow printing shapes on the canvas, changing the color of the lines and shapes, and changing the size of the brush (Fig. 2).
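As an illustration of how a selected movement symbol could translate into a cursor move, consider the sketch below. The symbol names, step size, and coordinate convention are hypothetical; the actual interface implements this logic in Unity.

```python
# Hypothetical symbol-to-direction mapping for the canvas cursor; the
# actual symbol labels and step size in the Unity interface may differ.
DIRECTIONS = {
    "UP": (0, 1), "DOWN": (0, -1), "LEFT": (-1, 0), "RIGHT": (1, 0),
    "UP_LEFT": (-1, 1), "UP_RIGHT": (1, 1),
    "DOWN_LEFT": (-1, -1), "DOWN_RIGHT": (1, -1),
}

def apply_move(cursor, symbol, step=10):
    """Return the new cursor position; the application then draws a line
    from the old position to the new one (drawing itself omitted here)."""
    dx, dy = DIRECTIONS[symbol]
    return (cursor[0] + dx * step, cursor[1] + dy * step)
```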

Fig. 2. Immersive P300 Brain Painting user interface designed for the Oculus Rift.

Control Interface

The control interface of our application was achieved via OpenVibe, a software platform used to design, develop, and test brain-computer interfaces. We used OpenVibe as a mechanism to acquire and classify the EEG data coming from the G.Tec Nautilus, as well as an application driver to manage the synchronization between our user interface in Unity and data acquisition from the G.Tec device. As a base for the acquisition, training, and online task of our application, we used the built-in P300 Speller within OpenVibe. We extended the application by creating a virtual environment in Unity on top of the existing OpenVibe application and established a communication link between our user interface and OpenVibe via LabStreamingLayer (LSL); a minimal sketch of such a link follows.
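For illustration, a minimal consumer of such an LSL stream might look like the Python/pylsl sketch below. The real interface receives the data in Unity through the LSL4Unity library; the stream name "P300Result" and the sample layout here are placeholder assumptions, not OpenVibe's actual defaults.

```python
from pylsl import StreamInlet, resolve_byprop

def handle_selection(sample):
    # Placeholder: map the received selection (e.g., row/column indices)
    # to a painting command, as the Unity interface does.
    print("selection:", sample)

# Resolve the (hypothetical) classification-result stream by name.
streams = resolve_byprop("name", "P300Result", timeout=5.0)
if not streams:
    raise RuntimeError("no LSL stream named 'P300Result' found")
inlet = StreamInlet(streams[0])

while True:
    sample, timestamp = inlet.pull_sample()
    handle_selection(sample)
```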

Figure 3 shows the OpenVibe online task scenario. Identical to the filtering process used when training the LDA classifier, the online task reads and filters data coming from the acquisition client using channels 1–16 (in the 1–20 Hz frequency band, with a fourth-order digital Butterworth filter). The data is then decimated by a factor of five and cut into signal segments, averaging over the epochs; a sketch of this chain follows. After the data is preprocessed, it is passed into the classifier trained in the acquisition step. The result of the classification is passed into Unity, along with the interface variables, via the LSL blocks. The interface variables include information about the state of the P300 Speller. Our user interface in Unity, using the LSL4Unity library (https://github.com/xfleckx/LSL4Unity), receives the LSL data and mirrors the instance of the P300 Speller running in OpenVibe.
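The sketch below restates this filtering chain in Python with SciPy, using the parameter values from the text (fourth-order Butterworth band-pass at 1–20 Hz, decimation by five, averaging over epochs). The sampling rate, array shapes, and function names are our own assumptions; the actual processing runs inside OpenVibe's boxes.

```python
import numpy as np
from scipy.signal import butter, sosfilt, decimate

FS = 250  # assumed sampling rate of the G.Tec Nautilus (Hz)

# Fourth-order Butterworth band-pass, 1-20 Hz, as described in the text.
SOS = butter(4, [1, 20], btype="bandpass", fs=FS, output="sos")

def preprocess(raw: np.ndarray) -> np.ndarray:
    """raw: (n_channels, n_samples) EEG from the acquisition client."""
    filtered = sosfilt(SOS, raw, axis=-1)  # causal filtering, as online
    return decimate(filtered, 5, axis=-1)  # decimation by a factor of five

def classify_selection(epochs: np.ndarray, clf) -> np.ndarray:
    """epochs: (n_repetitions, n_channels, n_samples) preprocessed segments
    for one row/column; average over repetitions, then score with the LDA."""
    avg = epochs.mean(axis=0).reshape(1, -1)
    return clf.decision_function(avg)
```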

Fig. 3. OpenVibe Online Task Scenario – This contains the coding blocks used to gather and classify data during the 3D Brain Painting online task. (The template is similar to: http://openvibe.inria.fr/openvibe-p300-speller/.)

4 Results and Discussion

Data was collected from a total of 8 participants; therefore, individual data will be reported descriptively (Tables 1, 2, and 3). Our post-task PANAS results (Table 1) showed that 7 of the 8 participants had a higher positive score after using the immersive Brain Painting interface than after the non-immersive interface, and 6 of the 8 had a higher or equal negative score when using the non-immersive interface. The pre-PANAS scores were higher than the post-PANAS scores. One reason the scores may have decreased after the tasks is fatigue: training and testing took over an hour for both tasks combined, which could lead to boredom and irritation. The small sample size can also account for the large variation in the results.

Table 1. Average PANAS affect scores after immersive and non-immersive tasks.
Table 2. Average Score for NASA TLX after immersive and non-immersive tasks.
Table 3. Average score for presence questionnaire after immersive and non-immersive tasks.

The NASA TLX (Table 2) showed a mean measured workload of 49.25 (Standard Error (SE) 5.84) and 50.5 (SE 4.90) across all participants performing the task in non-immersive and immersive Brain Painting, respectively. In contrast to the difference in positive affect for the immersive environment measured in the PANAS surveys, the workload was essentially the same between tasks. The similarity in measured cognitive workload can be attributed to the similar interfaces and identical mechanism for selecting a command.

Our post-task Presence Questionnaire scores were slightly higher in the immersive task than in the non-immersive task in almost all categories (Table 3). Also, six of the eight participants said they would use the brain painting device recreationally, showing their enthusiasm for the novel system.

5 Conclusion and Future Work

This paper discussed a novel method that allows users to paint with their brain in an immersive virtual environment, together with its early user testing. It is meant to be the first step towards the study of immersive environments in brain painting applications from an HCI perspective. The evaluation was done through surveys that measured the affective state of participants before and after each task, along with their cognitive workload and felt presence. We wanted to see which aspects of the participants’ experience improved with the immersive interface. The proposed system did not seem to improve the measured cognitive workload, probably due to the complex control aspects of both interfaces. Nevertheless, users felt more present in the immersive environment than in the non-immersive interface. Although this is only pilot work, this application has great potential to improve the quality of life of patients with ALS and could give the physically impaired a new mechanism to express themselves without physical movement.