1 Introduction

1.1 Motivation

Future long-duration spaceflight missions will require new education and training methods, as well as new tools to assist procedure execution. Skills that are currently taught before flight will need to be provided as just-in-time training due to expanding mission requirements [4]. Deep space missions, such as those to an asteroid or Mars, will introduce a time delay in communications that will require an increase in crew autonomy from the current ground-based mission control architecture [7]. We have designed and prototyped a mobile procedure viewer using augmented reality (AR) with the goals of increasing crew autonomy, decreasing training time, and reducing procedure execution errors. The prototype integrates with a future space habitat outfitted with an Internet of Things (IoT) sensor network. These emerging technologies can be used to integrate and display information about the astronauts’ current task, the state of the habitat, the location of their tools, and more, in situ, as well as to provide the real-time error detection and supervision traditionally fulfilled by mission control [2, 3].

The training requirements for future missions are currently unknown and will require extensive research to adequately define for operational use. NASA’s Human Research Program (HRP) has identified key research gaps essential to successful human exploration beyond low Earth orbit. Among these gaps is the need to “identify effective methods and tools that can be used to train for long-duration, long-distance space missions [8]”. Deep space missions can take advantage of long transit times to train astronauts onboard the spacecraft. Skills that are currently taught on the ground before a mission can be taught in transit or supplied as just-in-time training when required. HRP has identified an additional research gap to “develop guidelines for effective onboard training systems that provide training traditionally assumed for pre-flight [9]”. Tasks traditionally trained before flight typically require either a demonstration by an expert or supervision to ensure no errors are made. We have found that our augmented reality/IoT procedure assistant can fill these roles in future onboard training systems. Through user testing conducted at NASA Ames Research Center, we show that our prototype can effectively train new users on a procedure.

1.2 Related Work

Research investigating the effects of virtual and augmented reality training has previously shown improved performance compared to conventional training. This research has primarily focused on surgery, assembly, and maintenance skills, all of which are analogous to the types of tasks astronauts are required to accomplish during the completion of procedures. Surgical training has received particular attention from researchers due to the inherent high-risk nature of the task, as well as the expense and limited time of expert instructors [5, 6, 11, 12].

The use of virtual reality (VR) to simulate surgical tasks was first proposed by Satava in the early 1990s [11]. Satava used an off-the-shelf VR head-mounted display (HMD) and a “DataGlove”, which essentially acted as a joystick, for the user to interact with the scene. Satava described five areas that must be addressed to provide a realistic simulation:

  • Fidelity: the graphics must have an acceptable level of resolution for the task

  • Object properties: the objects in the scene must behave with sufficient reality

  • Interactivity: the user must be able to interact with the virtual scene

  • Sensory input: the user must receive appropriate sensory feedback

  • Reactivity: the objects must behave appropriately when the user interacts with them

At that point in time, computers were only capable of meeting these standards at the most basic level. Despite this, Satava noted recent rapid advances in computing power, and that VR training could be particularly useful “in this era of animal-rights sensitivity and of fear of exposure to blood-borne diseases such as AIDS and hepatitis [11]”.

Less than ten years later, Seymour et al. showed that virtual reality training could improve operating room performance [12]. They presented the results of a double-blind study demonstrating that “virtual reality training transfers technical skills to the operating room (OR) environment [12]”. In the study, surgical residents were split into either a non-VR-trained control group or a VR-trained group and trained to perform laparoscopic cholecystectomy. The authors showed that, while overall task performance time did not significantly decrease, the VR-trained group made significantly fewer errors. This result indicated that students could be trained to perform better without any risk to patients; the authors concluded that the study “validated the transfer of training skills from VR to OR” and “sets the stage for more sophisticated uses of VR in assessment, training, error reduction, and certification of surgeons [12]”.

Around the same time, Boud et al. found significantly improved assembly times for subjects who used VR or AR training over conventional 2D engineering drawings [1]. In their task, subjects had to assemble a water pump after receiving instructions from “conventional” paper drawings, in VR, or in AR. Subjects in the VR and AR conditions wore HMDs to see their environment. Subjects performed significantly better in both VR and AR than when trained with the conventional drawings. Additionally, subjects who received training with the AR system performed significantly better than the subjects trained with VR.

More recently, Webel et al. developed an augmented reality training platform for assembly and maintenance skills [13]. The authors combined an augmented reality video aid with a vibrotactile bracelet to assist training. The video aid was displayed on a tablet computer that combined predefined augmented reality cues with a video feed of the real world. The bracelet had six vibration segments, which could be activated independently, allowing for both translational and rotational “channels” to guide the user. Webel et al. ran a study with two groups of subjects to determine whether training with the AR system was more effective than traditional training. To measure the effects of the training, the authors investigated both task completion time and the number of “unsolved errors”. They found that overall task completion times were not significantly different between the control and AR groups, but that the AR group had significantly fewer unsolved errors. This result is consistent with other studies using VR, including the above-mentioned Seymour study [12].

This research has shown that measuring human performance and the effects of training can be challenging. Even in experiments with seemingly improved and subjectively preferred training techniques, the benefits rarely appear in every performance measure: overall task completion times generally do not change as a result of training in VR or AR, but the number of task errors has been shown to decrease [11, 12]. Until recently, the computational limits of the hardware used in these VR and AR experiments reduced their effectiveness. Recent advances in hardware allow for fully mobile, head-mounted augmented reality solutions, which may ultimately prove to be more useful.

2 Technical Description

Modern spacecraft procedures are typically viewed on paper or computer tablets during spaceflight (see Fig. 1). While procedures can be viewed on tablets, they are static and essentially no more than digital paper. We have designed and created a prototype system that targets future missions from both a technical and a human-computer interaction (HCI) perspective (see Fig. 2). We have developed software and hardware to support this aim, with the goal of creating guidelines for an improved user experience. Our work starts from the “Internet of Things” to determine what a future space habitat will be able to know about itself and the humans residing in it. Compared to a computer vision approach, IoT allows us to have each object broadcast information about itself (e.g., Door A is open) or about where it is (e.g., Module C is installed in Rack 4).
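To make this concrete, the following is a minimal sketch of the kind of state message an IoT-equipped object might publish to a central habitat server. The field names and values are hypothetical illustrations, not the format used in the prototype.

  # Hypothetical state messages published by IoT-equipped objects.
  # Field names and values are illustrative only.
  door_a_state = {
      "object_id": "door-A",
      "kind": "door",
      "state": "open",           # the object reports its own state
      "timestamp": 1523894400,   # seconds since epoch
  }

  module_c_state = {
      "object_id": "module-C",
      "kind": "payload-module",
      "installed_in": "rack-4",  # the object reports where it is installed
      "timestamp": 1523894405,
  }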

Fig. 1. Astronaut Lee Archambault, STS-119 commander, looks over procedures during an International Space Station assembly mission (S119-E-006141).

The prototype presented here makes use of state-of-the-art technology. While it is not expected that current technology will be used for future missions, upcoming hardware should have a smaller form factor, better computational power, and longer battery life. More importantly, this prototype can be used to provide essential insight to design future HCI requirements. With this in mind, this section outlines the two main hardware components used for this prototype and the communication technique used to network them.

Fig. 2. User’s view of procedure with cardboard mockup of science payload hardware. An animation of the current step guides the user, while procedure text is placed above the hardware.

2.1 Augmented Reality Headset

The prototype takes advantage of the Microsoft HoloLens to display information to the user. The HoloLens is an augmented reality headset capable of displaying semi-transparent “holograms” to the wearer. These holograms can be fixed to the user’s viewpoint or to some aspect of the environment. This allows fixed-in-place holograms to provide relevant information near specific hardware or locations, as well as procedure instructions that stay in view and follow the user’s movements. Users can interact with the prototype holograms using voice commands, hand gestures, or a combination of the two. The voice commands, which take advantage of the HoloLens’ voice recognition system, are particularly useful when the user is already using both hands and still needs to interact with the prototype.

A number of alternate augmented reality devices were considered before the beginning of this work. In particular, there exist several augmented reality options for phones and tablets. While these devices are also capable of providing similar augmented reality experiences, they require either:

  • The user to mount or hold a device in place

  • The use of markers for the device’s camera to locate the appropriate location for augmented information

In contrast, the HoloLens can be used while both hands are occupied, and it uses computer vision to identify features already present in the environment to enable location-based holograms.

One limitation of the HoloLens, however, is its small field of view. While this decreases the number of options available for displaying information to the user, it can also be used to focus the user on specific areas of interest. Additionally, as information can be displayed relative to the user’s head, essential information never needs to leave the field of view.

2.2 Internet of Things (IoT) Sensors

We developed individual IoT sensors based on the ESP8266 chipset. The battery-powered chips could be mounted onto every tool or object in the space habitat, providing information on location, orientation, movement, or any other sensed quantity. In our prototype, each chip is outfitted with an accelerometer and an LED and communicates back and forth with a central server.

In addition to information from their own sensors, we exploit each chip’s WiFi transceiver to provide a rough estimate of the proximity between objects. The ESP8266 chips were placed in a dual WiFi mode, connecting to one WiFi network for information transfer while advertising their own access point. This was done so that we could obtain the signal strength of each chip relative to the others, as the chips are only able to obtain an RSSI (Received Signal Strength Indication) from an advertising packet. Each chip can then continuously scan for WiFi networks and report the RSSI to every other chip in the environment.
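The sketch below illustrates this dual WiFi configuration in MicroPython-style firmware. The network credentials, node naming convention, server address, and reporting format are assumptions for illustration and are not the firmware used in the prototype.

  # Sketch of the dual WiFi mode on an ESP8266 (MicroPython).
  # Credentials, node names, and the server address are placeholders.
  import json
  import socket
  import time
  import network

  NODE_ID = "iot-node-01"            # hypothetical SSID advertised by this chip
  SERVER = ("192.168.4.100", 5005)   # hypothetical central server address

  # Station interface: join the shared network used for data transfer.
  sta = network.WLAN(network.STA_IF)
  sta.active(True)
  sta.connect("habitat-net", "habitat-password")

  # Access point interface: advertise our own SSID so other chips can
  # measure our signal strength from their scan results.
  ap = network.WLAN(network.AP_IF)
  ap.active(True)
  ap.config(essid=NODE_ID)

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

  while True:
      # Scan for the other chips' access points and collect RSSI readings.
      readings = {}
      for ssid, _bssid, _channel, rssi, _auth, _hidden in sta.scan():
          name = ssid.decode()
          if name.startswith("iot-node-"):
              readings[name] = rssi
      # Report the RSSI map back to the central server.
      sock.sendto(json.dumps({"node": NODE_ID, "rssi": readings}).encode(), SERVER)
      time.sleep(5)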

Fig. 3. Wristband for tool and object proximity detection via WiFi signal strength.

We created a special configuration of the IoT sensor to be worn on the user’s wrist to determine when an object is picked up (see Fig. 3). With RSSI streaming between the wristband and the other IoT sensors, we can detect proximity between the user and a tool or object. The server combines this proximity information with the accelerometer data from the IoT-equipped object to perform pick-up detection: a pick-up is registered when an object reports a spike in acceleration while receiving a strong radio signal from the wristband. We have found this to be a reliable method for determining which object the user has picked up and is interacting with.
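The detection rule itself can be expressed in a few lines. Below is a minimal server-side sketch, assuming the objects stream acceleration magnitudes and the wristband streams per-object RSSI values; the threshold values are illustrative, not the tuned values from the prototype.

  # Server-side pick-up detection sketch. Thresholds are illustrative
  # and would need tuning for real hardware and environments.
  ACCEL_SPIKE_THRESHOLD_G = 1.5   # acceleration magnitude suggesting movement
  RSSI_NEAR_THRESHOLD_DBM = -50   # wristband signal strength suggesting proximity


  def detect_pickup(accel_magnitude_g, wristband_rssi_dbm):
      """Return True when an object reports an acceleration spike while the
      user's wristband measures a strong signal from that object."""
      moved = accel_magnitude_g > ACCEL_SPIKE_THRESHOLD_G
      near_user = wristband_rssi_dbm > RSSI_NEAR_THRESHOLD_DBM
      return moved and near_user


  # Example: a tool reports a 2.1 g spike while the wristband sees it at
  # -42 dBm, so the server flags it as picked up.
  if detect_pickup(2.1, -42):
      print("tool picked up by the user")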

3 Prototype

Our work on developing a prototype was divided into two main parts. The first part focused on guiding the astronaut to a tool or location in the space station (Path Visualization). The second part provided assistance and supervision to an astronaut performing a typical procedure on board the station (Procedure Execution). Our use of the term assistance here describes any visual, audio, or other cues that help the astronaut complete a step in the procedure. Supervision refers to the system’s ability to monitor the astronaut’s performance of the procedure in situ using IoT sensors, allowing it to catch or prevent errors. Technical limitations and time pressures led us to simulate some of the interactions (the “Wizard of Oz” prototyping technique). For both parts, we followed a standard user-centered design and rapid prototyping process: a brainstorm, storyboard, prototype, and user test design cycle. We present here our final prototype for each part of the project, followed by a summary of the findings from the prototyping and user testing to guide future developments.

3.1 Path Visualization

The focus of this section of the project was to create a guidance system that would help astronauts find their desired tool or destination on the space station. Lost and misplaced items on the space station waste a significant amount of valuable crew time, and items replaced incorrectly can cause hazards for the astronauts. We simulated the task of locating a missing tool by requiring users to retrieve tools from another room in the building. The guidance led users to the room and then, within the room, guided them to the precise location of the tool. This broke the task down into two parts: guidance to the room (through hallways or other rooms) and guidance within the room (to the storage location of the tool).

To begin the task, users first navigated a user interface in the HoloLens to select which tools they wanted to locate. The UI screen could be navigated via native HoloLens gestures or voice (see Fig. 4). After selecting which tools to find, users were provided guidance to the room containing the tools. This was accomplished with a hovering 3D rectangular line (at approximately chest level) with chevrons indicating direction, as well as additional context placed directly on the line, such as the room number and the tool being collected (see Fig. 5). Upon entering the room, the user saw a holographic line that included an arrow pointing towards the location of the tool. Once the user picked up the IoT-outfitted tool, the pick-up detection technique allowed the augmented reality display to react automatically, providing navigation guidance to the next tool or to the return path without any input from the user.

3.2 Procedure Execution

The goal of the Procedure Execution part of the project was to provide astronauts with assistance in completing a procedure beyond their current text-based instructions. For our simulation, we simplified an existing International Space Station procedure for the rodent habitat scientific payload. The goal of the procedure was to transfer the rodents from their transport habitat on a visiting spacecraft to the rodent habitat onboard the International Space Station. To protect the crew, the procedure requires the astronauts to configure the hardware in various ways to ensure that the rodents are not released into the space station. The payload habitat hardware was simulated using a cardboard prototype.

The prototype guided the user through the procedure by providing a holographic animation for each step and a UI displaying the text of the step. For example, one of the first steps was to open an access door; during this step, the prototype provided a holographic animation of the door opening overlaid on top of the actual hardware. After the user completed a step, the next step was presented, in both text and animation, without any required input from the user. This ensured that the user knew they had installed an item correctly before moving on to the next step in the procedure. The animations were also useful for complex steps involving many actions, as time could be saved by synthesizing several steps into one cohesive animation.
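A sketch of how this sensor-driven advancement could work on the server side is shown below. The step texts and event labels are hypothetical; only the pattern of advancing the displayed step once the IoT sensors confirm the expected action is illustrated.

  # Sketch of sensor-driven procedure advancement. Step texts and event
  # labels are hypothetical; only the auto-advance pattern is illustrated.
  PROCEDURE = [
      {"text": "Open access door",        "expected_event": "door_opened"},
      {"text": "Install transfer module", "expected_event": "module_installed"},
      {"text": "Close access door",       "expected_event": "door_closed"},
  ]


  class ProcedureRunner:
      def __init__(self, steps):
          self.steps = steps
          self.current = 0

      def current_step(self):
          if self.current < len(self.steps):
              return self.steps[self.current]["text"]
          return None  # procedure complete

      def on_sensor_event(self, event):
          """Advance only when the sensors report the action the current step
          expects; other events are ignored (or could be flagged as errors)."""
          step = self.steps[self.current] if self.current < len(self.steps) else None
          if step is not None and event == step["expected_event"]:
              self.current += 1
          return self.current_step()


  runner = ProcedureRunner(PROCEDURE)
  print(runner.on_sensor_event("door_opened"))  # -> Install transfer module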

4 User Testing Results

4.1 Warm up

Our user testing exercises were initially broken up into two parts: path visualization and procedure execution. After the first round of user tests, however, we found that participants needed more experience interacting with holograms and paper prototypes. We created a new exercise, which we called the “warm up”, to familiarize participants with using our selectors, pinching hand gestures, and voice commands to select items. The warm up consisted of a smart phone and a paper-prototyped music player interface. We asked participants to use both voice and hand gestures to pause, play, and skip songs while we “Wizard of Oz’d” the actual playing and pausing of music on a cell phone behind them. This warm up exercise was successful in familiarizing participants with our selectors and with interacting with holograms, an entirely new interaction for most users. Participants said they preferred voice commands over hand gestures when selecting items but tended to use both. One participant noted, “Voice is more comfortable because I wasn’t sure about the depth of my gestures”. Once the procedure began, however, participants switched to using only hand gestures to select options within the interface.

Fig. 4. A user wearing the HoloLens and making a gesture.

4.2 Arrow Spacing (Path Visualization)

Our next insight came during the path visualization portion of the prototype, where we wanted to guide users to their desired tool using holographic arrows and a guiding line. We found that a constant arrow spacing along a linear path gave users an easy-to-follow guidance system for locating their desired tool. Through testing, we found that there should be a chevron arrow indicating direction every fifteen feet, as well as at every corner. At this spacing, the user always had at least one arrow in their field of view without the display becoming cluttered or obstructive. One participant told us, “The arrows were positioned just right so I knew exactly where to turn”. This user feedback, coupled with our understanding of the HoloLens’ field of view limitations, indicated that the spacing was appropriate for the task and AR hardware.
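As an illustration of this placement rule, the sketch below generates chevron positions every fifteen feet along a polyline path and always adds one at each corner. The waypoint coordinates in the example are hypothetical.

  # Sketch of chevron placement: one arrow every 15 ft along the path,
  # plus one at every corner (segment end point). Coordinates are hypothetical.
  import math

  ARROW_SPACING_FT = 15.0


  def chevron_positions(waypoints):
      """Return (x, y) positions for direction chevrons along a polyline path."""
      positions = []
      for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
          seg_len = math.hypot(x1 - x0, y1 - y0)
          # Place arrows at fixed spacing along this segment...
          d = ARROW_SPACING_FT
          while d < seg_len:
              t = d / seg_len
              positions.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
              d += ARROW_SPACING_FT
          # ...and always place one at the corner (end of the segment).
          positions.append((x1, y1))
      return positions


  # Example: 40 ft down a hallway, then a right turn and 20 ft to the room.
  print(chevron_positions([(0, 0), (40, 0), (40, 20)]))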

Fig. 5. A user’s view of the (a) holographic arrows and (b) line representing a path during a testing session.

Fig. 6. Users were able to identify and acquire objects in a busy office environment during user testing sessions with the use of paper prototypes.

4.3 Precise Location of Tool

Through our user testing, we sought to determine how to enable a user to find a tool in a precise location, such as a small hatch or compartment. This was especially challenging when the tool was lost or outside the HoloLens’ field of view. We found that a single line on the floor leading to a tool’s exact location was a successful method of marking the destination, as users could follow the line to the precise location (see Fig. 6). We arrived at this insight after observing many users’ inability to locate the tool when it was outside the field of view. By following the holographic line with their eyes, users could easily see which drawer the tool was in, decreasing cognitive load.

4.4 Summary

After each user test, we interviewed the user about some of the decisions they made and why. This allowed us to identify leverage points and ways we might modify the next iteration to improve the user experience and the efficiency of the prototype. For each iteration, we conducted five qualitative user tests to gather sufficient data before making changes to the prototype [10]. Through multiple iterations of user testing and prototyping, we developed insights and best practices that reduced cognitive load and saved time for our users. We feel that augmented reality and embedded IoT sensors are an effective pairing of tools to increase productivity and reduce procedure execution time.

5 Conclusions

User testing at NASA Ames led to several best practices and guidelines for augmented reality assisted procedure execution. A paper-prototyped “augmented reality” warm up exercise proved useful, as most users have no experience or context for interacting with augmented reality. This allowed us both to train users in how to interact with augmented reality and to iterate quickly through designs. The ability to prototype augmented reality software using paper prototypes avoided the need for lengthy software development, and the results of these paper prototypes translated directly into the final augmented reality solution.

In addition to the warm up exercise, we also investigated several techniques to guide users to a destination and to locate a tool. We found that holographic arrows floating along a guidance path at fifteen-foot intervals were useful for providing directional context. This spacing allowed for effective guidance, as users could always find the next arrow “marker” to continue along their path to the destination. Once users were sufficiently close to their destination, we found that a semi-transparent line was a non-intrusive way to provide guidance towards a specific item or tool.

It can be challenging to describe the relative orientation of mechanical parts for maintenance or assembly tasks. The use of holograms to provide 3D translation and rotation information was found to be especially useful compared to traditional, text-based procedural steps. As the HoloLens can be operated hands-free, users can continue a procedure while in the middle of a complex assembly. When the holograms were aligned and overlaid with the real-world parts, the 3D animation removed ambiguity from the procedure. Users could watch and follow along with the animation in real time, without any need to interpret written procedures.

We have presented the development of an augmented reality and IoT prototype for just-in-time training. The HoloLens’ ability to persist location-specific holograms provided feedback to the user that could not have been provided by tablet- or phone-based AR solutions. The wristband sensor allowed the prototype to sense the user’s proximity to relevant objects and helped confirm when objects had been picked up. Combined with the sensor data provided by the IoT devices, our prototype enabled novice users to complete our procedure correctly, without any instructor guidance.