1 Introduction

Firefighters face immense stress and physical trauma when responding to a call. Physical dangers include burns, extreme heat, high noise levels, smoke inhalation, heavy equipment, and building collapse. Active structure fires are disorienting, largely unknown, and highly dangerous. Firefighters enter a building with less than 15 min of air in their tanks and the knowledge that, in the right conditions, a fire can spread faster than they can put it out. In these harsh conditions, the radios that firefighters rely on for communication often fall behind their pace of work: the scene demands more and more of their attention, and they no longer have a hand free to operate their radio mouthpiece. Systems deployed in these environments need to be adequately vetted prior to testing in working fires and must be fully accepted and trusted by firefighters in order to be adopted into use.

The research group used the human-centered design process to research and design a tool to enhance communication for firefighters on-scene. This process improves on the traditional development process by bringing a “fail-fast” mentality to support rapid prototyping. Rather than focusing on fully developing a single idea to high fidelity, the human-centered design process makes rapid progress on multiple ideas toward a viable product by quickly eliminating flawed concepts. This is especially important for systems performing in dangerous scenarios, where failure in a single dimension can be enough to render a product useless.

This paper presents a case study involving the design and testing of multi-modal feedback supporting spatial orientation and task completion. Specifically, the project focused on supporting the situational awareness of firefighters within a burning building. Situational awareness is defined as each firefighter’s general awareness of the past and current physical locations of themselves and other actors on the scene. By focusing on the specific context of victim search and rescue within a multi-story building, the team built a case-specific model using Unity3D to simulate a burning building. This model provided a platform to quickly prototype and measure the efficacy of feedback methods across the visual, auditory, and haptic channels. Furthermore, the system housing this model was mobile, enabling researchers to take prototypes to first responders for testing and to ensure solutions could realistically fit into the rescue workflow.

2 Design Constraints of Target Environment

Research revealed that emergency responders operate in three tiers: oversight or management, supply and personnel coordination, and frontline response. This project focused on frontline responders. Thirty-one interviews were conducted with police officers, firefighters, and EMS responders. The driving insights from research were as follows. First, the limitations of radio tend to block useful information and can actually increase the cognitive load on users as radio channels get busy. Second, as responders use the radio less frequently, their understanding of the scene is synchronized less often and communication breakdowns compound. This is a critical flaw, since teamwork and communication are crucial to mission success and the maintenance of personal well-being in these situations. Finally, fire response is extremely time-sensitive; depending on conditions, fires can double in size anywhere from every few minutes to every few seconds.

During work shifts, responders must be on constant alert. If there is an emergency, they have to leave for the scene within minutes. Responders operate in small teams when on the response site; firefighters are always on shift with a team of four. Responders develop close ties to their teammates. This closeness is often essential in a response scenario, where staying in sync with each other’s actions is elemental to a coordinated response. Experience working with one another increases each team member’s ability to understand and predict the actions of other members (Fig. 1).

Fig. 1. A firefighter in full gear exits a fire. Firefighters often wear over 100 lb of gear and must work fires in brief shifts to avoid exhaustion.

When a responder goes into a scene, he receives only limited information from dispatch. Upon arrival at a scene to rescue a trapped victim, he may know only the number of floors the structure has. Upon entering the building, his vision is completely blocked by smoke. The search for victims is conducted blindly and, in higher temperatures, on hands and knees. Firefighters work in pairs, keeping one hand constantly on the wall or their partner while sweeping the floor with their other hand or one of the tools they carry. Firefighters must also note the locations of doors and furniture to maintain their orientation, and they must keep track of time to ensure they don’t run out of oxygen and can clear the building before it becomes structurally unsound. If a firefighter is able to find the victim, he must then blindly recall his way out of a potentially collapsing building.

In order to respond to a scene, firefighters often have to carry heavy equipment such as oxygen tanks or medical devices. While on a scene, firefighters often have their hands occupied by tools such as a hook or axe. When considering new solutions for responders, it is critical for designers to be mindful of the physical limitations of what firefighters can carry in addition to their gear. Furthermore, responders are highly focused on the tasks they have to perform, so tools must be straightforward and require little extra physical or cognitive effort to use.

Research aimed to identify the tasks firefighters must perform and the information they need to complete those tasks successfully. In early prototyping phases, researchers returned to fire stations at multiple points to validate that the information their concepts aimed to present was not only helpful but also acceptable to the target audience. An important discovery from these prototype validation sessions was that firefighters are willing to adopt technology that supplements their knowledge of the environment but resist technology that prescribes or suggests actions or decisions. Firefighters trust their own experience more than an algorithm and reject concepts that attempt to replace their decision-making abilities.

3 Our Solution

In order to address the constraints of the environment, researchers created augmented reality (AR) product concepts, which they rapidly prototyped and tested with users. Researchers then built a virtual reality (VR) environment to test concepts in concert with each other and with the dynamics of emergency situations. The VR environment supported three tasks: blind search, directed search, and exiting the building (Fig. 2).

Fig. 2. A cut-out view of the testing environment.

VR Testing Environment.

The environment included both visual and audio elements to simulate the real-world noise and stressors of a burning building, as captured during research. One major drawback of this setup was the tendency of immersive virtual reality environments to cause motion sickness in participants. To mitigate this, researchers limited the length of testing sessions to under 10 min and had users take frequent breaks.

The environment was a multi-story residential building. The rooms in the building were furnished so that the simulation posed a realistic wayfinding challenge; interviews with firefighters revealed that they map out the rooms in a building using the furniture those rooms contain. To add noise to the visual channel, the rooms filled with smoke and visibility decreased as the fire persisted. To add noise to the audio channel, the simulated fire made realistic sound and researchers piped radio chatter into the simulation.
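
As an illustration of the visibility mechanic, a minimal Unity (C#) sketch might ramp fog density over time as a stand-in for volumetric smoke. The fog mode, rates, and time-to-maximum below are assumptions, not the study’s actual parameters:

```csharp
using UnityEngine;

// Illustrative sketch: ramps scene fog density over time to simulate
// thickening smoke. All numeric values are placeholders, not the
// parameters used in the study described here.
public class SmokeRamp : MonoBehaviour
{
    [SerializeField] private float startDensity = 0.02f;
    [SerializeField] private float maxDensity = 0.35f;
    [SerializeField] private float secondsToMax = 300f; // full smoke after 5 min (assumed)

    private float elapsed;

    private void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Exponential;
        RenderSettings.fogDensity = startDensity;
    }

    private void Update()
    {
        elapsed += Time.deltaTime;
        float t = Mathf.Clamp01(elapsed / secondsToMax);
        // Visibility degrades as the simulated fire persists.
        RenderSettings.fogDensity = Mathf.Lerp(startDensity, maxDensity, t);
    }
}
```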

A timing element was implemented to add stress to the task. The interface displayed a countdown clock indicating when the user’s oxygen would run out. As time went on, the size of the fire and the amount of smoke increased, adding urgency to the user’s experience and hindering their ability to see and complete their task. Additionally, users were required to attend and respond to certain cues in the radio clips.
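
A sketch of the countdown element might look as follows. The 15-minute air budget mirrors the figure cited in the introduction; the HUD label and failure handling are assumptions:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: counts down the user's remaining air and updates
// a HUD text element. The budget and end-of-trial behavior are assumptions.
public class OxygenCountdown : MonoBehaviour
{
    [SerializeField] private Text countdownLabel;       // HUD text element (assumed)
    [SerializeField] private float airSeconds = 15f * 60f;

    private void Update()
    {
        airSeconds = Mathf.Max(0f, airSeconds - Time.deltaTime);
        int minutes = (int)(airSeconds / 60f);
        int seconds = (int)(airSeconds % 60f);
        countdownLabel.text = string.Format("{0:00}:{1:00}", minutes, seconds);

        if (airSeconds <= 0f)
        {
            // Trial failure condition: the task ends when air runs out.
            Debug.Log("Out of air - trial over");
        }
    }
}
```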

The VR prototyping and testing environment was constructed using Unity3D and the Oculus Rift. The Unity environment facilitated an iterative process wherein the AR prototypes and the testing environment could be adjusted quickly and independently. A virtual environment by nature captures performance metrics in extremely controlled and repeatable conditions, so designs could be compared using measures of the speed and accuracy of task completion.
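
One way such metrics could be captured is a simple per-trial event log, which can later be compared across prototype configurations. The file path, column layout, and event names here are assumptions for illustration only:

```csharp
using System.IO;
using UnityEngine;

// Illustrative sketch: appends timestamped task events to a CSV file so
// that completion speed and accuracy can be compared across conditions.
public class TrialLogger : MonoBehaviour
{
    private string logPath;

    private void Awake()
    {
        logPath = Path.Combine(Application.persistentDataPath, "trial_log.csv");
    }

    public void LogEvent(string participantId, string condition, string eventName)
    {
        // One row per event: who, which prototype configuration, what, when.
        string row = string.Format("{0},{1},{2},{3:F2}\n",
            participantId, condition, eventName, Time.time);
        File.AppendAllText(logPath, row);
    }
}

// Hypothetical usage: logger.LogEvent("P07", "HUD+3DA", "victim_found");
```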

Prototype.

The prototype supported testing of two AR tools: a heads-up display (HUD) and 3D audio (3DA) components. The HUD displays a layer of visual information to the user on top of the environment. 3DA uses stereo audio to mimic sounds originating from specific locations in 360° around the user. The prototypes tested how this additional information might be provided in a given environment as well as how it could assist a user in completing a simulated task. Prototypes were constructed to be modular and interchangeable for a variety of testing configurations, as sketched below. Users were also able to configure their own AR display combinations in later trials to create customized AR experiences.
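
A minimal sketch of this modularity, assuming each AR aid is implemented as an independent, toggleable component (the component names are hypothetical):

```csharp
using UnityEngine;

// Illustrative sketch of the modular design: each AR aid is a toggleable
// component, so testing configurations (or user-chosen combinations) can
// be assembled without code changes.
public class ArConfiguration : MonoBehaviour
{
    [SerializeField] private Behaviour hudTrail;   // visual search history (assumed name)
    [SerializeField] private Behaviour hudArrow;   // directed-search arrow (assumed name)
    [SerializeField] private Behaviour audioPing;  // 3D audio guidance (assumed name)

    public void Apply(bool trail, bool arrow, bool ping)
    {
        hudTrail.enabled = trail;
        hudArrow.enabled = arrow;
        audioPing.enabled = ping;
    }
}
```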

Task: Blind Search.

To support blind search, one prototype displayed a visual “tail” indicating where the user had already searched; researchers designed this interface to prevent disorientation and duplicate work. Early rounds of testing with haptic and audio prototypes indicated that the visual channel was the best fit for this information: haptic channels became overloaded, and users reported becoming desensitized to and ignoring those cues, while users who received their search history over the audio channel were confused, and this confusion persisted despite training sessions. The visual channel excelled at supporting this task, and users voiced a desire to know the age of the trail. Testing revealed that color changes were the best way to display the age of the trail (Fig. 3).
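
A sketch of how such a color-aged trail might be built in Unity: breadcrumb markers are dropped along the searched path and re-tinted by age via a gradient. Marker shape, spacing, colors, and the aging window are assumptions:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: drops breadcrumb markers along the searched path and
// tints them by age, reflecting the finding that color change best conveyed
// trail age. All parameters are placeholders.
public class SearchTrail : MonoBehaviour
{
    [SerializeField] private GameObject markerPrefab;
    [SerializeField] private Gradient ageGradient;      // fresh -> old
    [SerializeField] private float dropInterval = 1f;   // seconds between markers (assumed)
    [SerializeField] private float maxAge = 120f;       // fully "old" after 2 min (assumed)

    private struct Marker { public Renderer Renderer; public float Born; }
    private readonly List<Marker> markers = new List<Marker>();
    private float nextDrop;

    private void Update()
    {
        // Periodically drop a marker at the user's current position.
        if (Time.time >= nextDrop)
        {
            nextDrop = Time.time + dropInterval;
            GameObject marker = Instantiate(markerPrefab, transform.position, Quaternion.identity);
            markers.Add(new Marker { Renderer = marker.GetComponent<Renderer>(), Born = Time.time });
        }

        // Re-tint every marker according to how long ago it was dropped.
        foreach (Marker m in markers)
        {
            float age = Mathf.Clamp01((Time.time - m.Born) / maxAge);
            m.Renderer.material.color = ageGradient.Evaluate(age);
        }
    }
}
```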

Fig. 3. A screen capture from an initial prototype displaying the visual search history and the oxygen countdown.

Task: Directed Search.

The next portion of the task involved a directed search for a fallen firefighter. Both the HUD and the 3DA tool proved usable for this task. To assist directed search, the HUD displayed an arrow that pointed the user in the direction of the fallen teammate. An important distinction here is that the arrow pointed in the absolute direction of the target; it did not provide turn-by-turn directions through the building. The audio version of this AR used two different methods to guide the user toward their teammate. One method adjusted the repetition rate of a ping to be more rapid as the user was oriented toward the target. Since users wore an Oculus headset, the interface could track their head orientation; this enabled users to turn their head, hear changes in the ping rate, and check that they were headed in the right direction. The second method adjusted the pitch of the ping to be higher as users approached the target and lower as they moved farther away. The prototype was designed to support swapping the pairings of repetition rate and pitch between distance and direction, which allowed users to customize their interface for this task according to what was most intuitive to them.
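
A sketch of both guidance mechanisms, assuming the headset transform drives orientation and a single spatialized AudioSource produces the ping. The distance cap, interval range, pitch range, and the swapMappings flag are illustrative assumptions:

```csharp
using UnityEngine;

// Illustrative sketch of the directed-search guidance. The HUD arrow points
// in the absolute direction of the target (not turn-by-turn), while the ping's
// repetition rate tracks head orientation and its pitch tracks distance.
// swapMappings mirrors the prototype's ability to exchange the two pairings.
public class DirectedSearchGuide : MonoBehaviour
{
    [SerializeField] private Transform head;        // tracked headset transform
    [SerializeField] private Transform target;      // fallen teammate
    [SerializeField] private Transform hudArrow;    // arrow on the HUD (assumed)
    [SerializeField] private AudioSource ping;      // spatialized ping source
    [SerializeField] private bool swapMappings;

    private float nextPing;

    private void Update()
    {
        Vector3 toTarget = target.position - head.position;

        // HUD arrow: always point in the absolute direction of the target.
        hudArrow.rotation = Quaternion.LookRotation(toTarget);

        // 0 = facing the target, 1 = facing directly away.
        float misalignment = Vector3.Angle(head.forward, toTarget) / 180f;
        float distance01 = Mathf.Clamp01(toTarget.magnitude / 30f); // 30 m cap (assumed)

        // Default pairing: orientation -> repetition rate, distance -> pitch.
        float rateDriver = swapMappings ? distance01 : misalignment;
        float pitchDriver = swapMappings ? misalignment : distance01;

        float interval = Mathf.Lerp(0.2f, 2.0f, rateDriver); // faster when aligned
        ping.pitch = Mathf.Lerp(2.0f, 0.5f, pitchDriver);    // higher when close

        if (Time.time >= nextPing)
        {
            nextPing = Time.time + interval;
            ping.Play();
        }
    }
}
```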

Task: Exit the Building.

During initial concept validation, firefighters expressed concern that over-reliance on any additional technology would cause disorientation if the technology were to fail. The final task was designed to test user performance after AR device removal. Once users had found the victim, the augmented reality displays were removed, and users were required to navigate their way out of the building without any additional technology. This task measured whether users had relied on the technology so heavily that they failed to maintain a working knowledge of their spatial orientation. Ideally, AR will augment a user’s ability to complete a task without becoming a crutch.
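
In implementation terms, this phase transition could be as simple as disabling every AR aid when the victim is found; the object names and event hook below are hypothetical:

```csharp
using UnityEngine;

// Illustrative sketch: once the victim is found, every AR aid is disabled so
// the exit task proceeds without augmentation, testing whether users retained
// their own spatial orientation.
public class ExitPhaseController : MonoBehaviour
{
    [SerializeField] private GameObject[] arAids; // HUD trail, arrow, audio ping (assumed)

    public void OnVictimFound()
    {
        foreach (GameObject aid in arAids)
        {
            aid.SetActive(false);
        }
        Debug.Log("AR removed - exit phase begins");
    }
}
```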

4 User-Centered Methods

Domain Research and Synthesis.

Researchers began with an extensive literature review to gather domain knowledge of augmented cognition and firefighting. Researchers then performed structured interviews with 26 target end-users, domain experts, subject matter experts, and stakeholders.

After using interviews to gather domain knowledge, researchers gathered real-world information through contextual design methods [1]. Researchers participated in 6 ride-alongs with target users to observe their workflow during emergency situations and attended a training session run by a firefighter in order to understand the mindset, mental models, and rationale of emergency response.

Research methods had to be modified to accommodate the extremely dangerous conditions that users encounter. Traditional contextual design methods require closely following users during their tasks, but for safety reasons, the target users’ workspace did not support traditional contextual inquiry. To work around these dangers, researchers used a combination of observation and directed storytelling to have users recount a specific work experience. Researchers observed an emergency response and took note of which firefighter teams were present. Within the next 24 h, researchers contacted the firefighters observed on-scene and had them recount the experience while it was fresh in their memory. This allowed the research team to ask specific questions and get realistic answers from the target users without endangering themselves or distracting the firefighters from their dangerous work.

To synthesize findings, researchers performed a journey mapping exercise. Journey maps plot the flow of information and responsibility during task completion; this journey map focused on the experience of a single firefighting team. Researchers took the map to the firefighters who had been at the scene for feedback, and the exercise revealed distinct phases of emergency response: pre-arrival, arrival, and scene response. Researchers decided to focus on the scene-response phase, as the journey map revealed it exhibited the most communication breakdowns. This phase is characterized by triage, coordination, self-preservation, and high states of stress and physical exertion. Researchers chose on-scene responders as target users, as technology already exists for scene overseers.

Initial Prototyping.

In the first round of synthesis, researchers mapped out the on-scene data firefighters said they needed during interviews. This data was gridded in a matrix against situations in which users indicated they would have wanted more information. Seventeen of these data-situation pairings were translated into storyboards to procure user feedback. Storyboards are small, illustrated stories that describe a problem and a potential solution. They allow researchers to quickly and clearly communicate their concepts to target users to get feedback on the impact and feasibility of the concept. Research shows that presenting rapid concepts like this to users generates valuable and applicable design feedback [2]. Storyboard feedback let researchers probe how the technology would be accepted on the market and allowed them to tighten the scope of the final design. Users were open to receiving additional information while on the scene, but they did not want a device to suggest actions – they did not trust an algorithm to make the right decision or to replace their “gut” instincts. Users were open to a tool that provided additional information to enhance their own experience and decision-making abilities.

Fig. 4. Two team members test an early audio prototype while performing a directed search task.

Fig. 5. A test user performs a search and rescue task in an early version of the digital prototype using the visual history trail HUD.

Fig. 6. Two members of the research team test a concept using a pre-existing digital environment to validate that haptic feedback can be incorporated into a digital interaction.

Once target scenarios were identified through the storyboarding exercise, researchers brainstormed the kinds of information displays that could improve the situation. Using a fail-fast mentality, the research team generated quick-and-dirty prototypes for each product concept to test whether each concept would be interpreted clearly and improve the decision-making abilities of the user. By user-testing each prototype as early in development as possible, the research team ensured that the majority of development time was spent building a useful tool. These mid-level prototypes allowed the research team to hone the final design (Figs. 4, 5 and 6).

5 Conclusion

Emergency response scenes are tense, uncertain, and constantly evolving. Firefighting is characterized by the management of many unknown factors, forcing firefighters to constantly anticipate what could go wrong. Each firefighter needs to keep track of how a scene could evolve and to make the best split-second decision possible. Technology could assist firefighters in keeping track of their scene and reducing their cognitive load.

Firefighters are open to adopting new technology, as they are frustrated with the radio as the main on-scene communication device. Entering into an emergency situation with missing information, conflicting goals, and environmental pressure is no small task. Communication should be an asset in these situations, not a liability. Building a solution for firefighters would make a meaningful impact on public safety and could be applicable beyond the emergency response space.

The digital solution presented in this paper explores the value of delivering location information to firefighters. Team location is a critical piece of information for responders, enabling them to coordinate their actions, ensure the safety of themselves and one another, and speed rescue efforts. Location information is difficult to communicate over the radio, and layering audio and visual information over the responder’s perception could assist in search and rescue tasks.

By using user-centered research and design methods, a team of human-computer interaction researchers quickly gained a deep understanding of the experience and challenges first responders face when responding to an emergency. They used this understanding to generate multiple solution ideas, which they then represented as storyboards and prototypes and tested with their users. This let them quickly and inexpensively identify the concepts that users would find most useful. These concepts were built into prototypes of increasing fidelity, tested, and iterated upon.

The researchers were also able to incorporate their background user research into the construction of a virtual reality testing environment. This let them test their prototypes in a realistic yet controlled environment in which they could gather rich data. The ease of creating prototypes in a digital environment also supported rapid prototyping and iteration. This, combined with a commitment to test with end users throughout the process, ensured the delivery of a highly usable prototype at low cost and within a short timeframe.