
1 Introduction

Improvements to the human interface for presentation, understanding, and collaboration in command and control (C2) systems can yield better situational awareness, decision making, and more effective interaction. Human-autonomy teaming requires a tri-collaboration of human-to-human-to-machine, as humans collaborate with each other and with autonomy simultaneously. Additionally, a new concept in shared visualization is introduced in which multiple classification levels exist within the same virtual space. A shared virtual environment with means to filter levels of classification or function has wide application, yet raises serious concerns about information spillage from one classification level to the next. Exploring the pros and cons of such a multi-level classification approach was well suited to an augmented reality-based interface.

The C2 system selected for evaluation of the human-machine interface was the Intelligent Multi-UxV Planner with Adaptive Collaborative/Control Technologies (IMPACT) [1]. IMPACT was built to demonstrate agility in tactical decision making, mission management, and control with key elements for enabling heterogeneous unmanned vehicle (UxV) teams to successfully manage the “fog of war” with its inherent complex, ambiguous, and time-challenged conditions. Applied research for IMPACT was based on supervisory control and the machine learning of tactics that combine flexible play-calling, bi-directional human-autonomy interaction, “global” cooperative control algorithms, and “local” adaptive/reactive capability.

The Microsoft HoloLens is an augmented reality device with a see-through lens onto the real world, upon which computer graphics are overlaid. The operator of the device can interact via gaze, gesture, and speech while visualizing the virtual C2-based scene in any defined presentation space. This new modality of user interface provides the premise for investigating the effectiveness of such a human-machine interface as an add-on to, or replacement for, existing C2 human-machine interfaces.

2 Background

2.1 Mixed Reality in the Military

Mixed reality has advanced remarkably in the past few years due to the commercialization of the technology. The concepts behind the technology have been around for quite some time, such as the Virtual Fixtures Platform [2] developed at the U.S. Air Force Armstrong Laboratory in the early 1990s. SSC Pacific and the Naval Postgraduate School were involved with virtual reality for a project called CommandVu [3,4,5]. CommandVu was utilized for Marine Corps platoon mission development and training within a virtual environment and a mixed live-fire demonstration at 29 Palms, California. More recently, the Battlespace Exploitation of Mixed Reality (BEMR) Lab [6] has been evaluating, integrating, and exploiting commercial mixed reality technology for adaptation into a virtual battlespace. The goal is to reduce the costs and risks associated with bringing mixed reality technology and capability to the warfighter, and in doing so to increase effectiveness, efficiency, collaboration, innovation, battlespace visualization, speed of response, and the pace of evolution in decision making and situational awareness.

2.2 Microsoft HoloLens

The Microsoft HoloLens is a commercial-off-the-shelf, self-contained holographic computer which enables engagement with digital content and interaction with holograms superimposed on the real world [7]. The device is an adjustable-fit headset that uses a visor to display virtual, augmented, and mixed realities (VAMR) to the end user. The headset consists of a self-contained Windows 10 computer system with multiple sensors for advanced optics and holographic computing. A special holographic processing unit (HPU) was designed specifically for VAMR.

Interaction with the holograms displayed through the HoloLens is based on gaze, gesture, and voice [8]. The hologram elements are overlaid as computer graphics onto the view of the real world. The collaboration aspect allows multiple users to have synchronous shared experiences for presentation, collaboration, and interaction, while the view of real-world surroundings is maintained to allow spatial orientation within the environment and visual cues from users in the same room, each from their own perspective of the shared environment. This collaborative environment is considered an innovation in the realm of a more natural human interface.

2.3 IMPACT

The IMPACT system is a C2 prototype platform for centralized supervisory control of multiple simulated autonomous unmanned vehicles [1, 9, 10]. Research related to IMPACT sought to demonstrate how a single human can provide supervisory control of many unmanned vehicles through the fusion of several autonomous agents and the autonomy associated with the unmanned vehicles, working in concert to achieve missions in an uncertain and changing environment. The underlying goal of the IMPACT initiative was to invert the applied ratio of human operators to autonomous vehicles [11]. The operator manages missions through a “play calling” approach in which the operator is supported by autonomous agents in performing these tasks. This concept was designed and implemented by a tri-service set of research teams from the Air Force Research Lab (AFRL), SPAWAR Systems Center (SSC) Pacific, the Army Research Lab (ARL), and the Naval Research Lab (NRL).

3 Technical Approach

The C2-CST application connects the IMPACT system with one or more augmented reality display systems. The virtual environment represents the information in the IMPACT system as a virtual sand table. The IMPACT system contains a component called the HUB, which permits the exchange and flow of data via standardized formats, thus allowing external applications to communicate with IMPACT. A computer placed between the HoloLens devices and IMPACT runs a Unity3D application along with the Photon server, which functions as the middle tier and brokers communication between IMPACT and the HoloLens.

This section covers the architecture, environment, scene creation, communications via ZeroMQ, the Photon Unity Networking server, the Mixed Reality Toolkit for shared experiences, the classification filtering of visualized data, and the simplified Play Workbook interface to the C2 IMPACT system. The C2-CST project provides a starting point in a new and rich area of human user interfaces for command and control systems. The design for the prototype and experiment is extensible and scalable.

3.1 Architecture

Most visualization-based systems do not have the computing power required both to process C2-related information and to provide real-time visualization of, and human interaction with, that information. With this in mind, the architecture comprises three core components, which as designed and implemented provide the necessary support to achieve the underlying goals and overcome the obstacles for such a system. These three core components are:

  • The IMPACT system, which was designed to reside on one or more computer systems and provides services such as vehicular autonomy, supervisory control, planning algorithms, path determination, resource allocation, unmanned systems sensor control, and a simulated environment for test scenarios.

  • The Unity workstation, a separate computer system with the Unity3D toolset and the Photon Unity Networking server installed. Via the Unity interface, this workstation acts as the communications mediator between the IMPACT system and the HoloLens devices. This communication mediation and added computational capability offload the HoloLens devices and allow them to work best as visualization and interaction tools for the C2 information in the virtual sand table.

  • The HoloLens devices and their associated APIs and frameworks, which support the new approach to visualizing and interacting with C2 information and with other users embedded within the same holographic environment.

The C2-CST architecture permits the processing and updating of C2 information to be handled by a more powerful computing system than the HoloLens provides. The HoloLens can therefore focus on visualization of, and interaction with, the C2 data, which is presented through an advanced holographic interface.

Fig. 1. C2-CST architecture diagram

The C2-CST system communicates bidirectionally with the IMPACT system using the ZeroMQ network communications framework, connecting to IMPACT’s centralized HUB component [12]. With the HUB communications pipeline based on ZeroMQ, the logical choice for interchange of information with C2-CST was to utilize ZeroMQ as the networking communications protocol. ZeroMQ is an asynchronous messaging library for concurrent applications that is based on sockets and provides a message queue, and it can run without a dedicated message broker. This allows for many-to-many connections among the connection endpoints.

The Unity workstation is the second core component of the C2-CST architecture. The component provides a middle-tier connection between IMPACT and the HoloLens interface. The Unity workstation uses Photon Unity Networking (PUN) as a server, a third-party package for Unity3D developed by Exit Games [13] for multiplayer games. The Photon Unity server has a load-balancing API which matches “players” to shared sessions and transfers messages synchronously in real time between these connected “players” (users). The “players” in the case of C2-CST are the individuals using HoloLens devices that share the same holographic environment. The Unity workstation with Photon Unity Networking adds support for the shared environment, which is essential given the intended collaborative use of the C2-CST system (Fig. 1).
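As a rough illustration of this middle-tier role, the Python sketch below models the workstation as a relay that takes C2 updates arriving from the IMPACT side and fans them out to every connected headset. The queues and headset names are hypothetical stand-ins for the ZeroMQ subscription and the Photon session, used only to show the data flow.

```python
# A rough, self-contained sketch of the middle tier's relay role described above.
# The queues stand in for the ZeroMQ feed from the IMPACT HUB and for the
# Photon-based distribution to headsets; they are illustrative only.
import queue

hub_updates = queue.Queue()          # stands in for messages arriving from the HUB
headset_outboxes = {"hololens-1": queue.Queue(),
                    "hololens-2": queue.Queue()}   # stands in for Photon clients

def relay_once():
    """Take one C2 update from the HUB side and fan it out to every headset."""
    update = hub_updates.get()
    for outbox in headset_outboxes.values():
        outbox.put(update)           # per-headset filtering is added in Sect. 3.5

# Example: a vehicle-state update flows from IMPACT through the workstation.
hub_updates.put({"type": "VehicleState", "id": "UAV-2", "lat": 34.05, "lon": -117.18})
relay_once()
```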

3.2 Environment - Scene Creation

The military expression of a “sand table” usually refers to a terrain model used in support of military planning and wargaming. This definition essentially applies to the C2-CST system, but more specifically to C2 planning and execution (Fig. 2).

Fig. 2. C2-CST sand table display within the HoloLens

The holographic environment was developed with the Unity3D toolset, which supported both the interface development and the construction of the holographic scene. For IMPACT, the designated default scenario is a base force protection mission over an assumed Air Force base and surrounding region. The map within IMPACT was recreated in Unity3D for the HoloLens holographic scene. The air base in the scenario is fictitious in its representation for both IMPACT and C2-CST but provides a suitable and realistic representation for the simulated environment. Within Unity3D, a third-party terrain builder tool called World Composer [14] provided the means to generate, to scale, the realistic 3D terrain for the C2-CST “sand table” display.

3.3 C2-CST Communications

The communication flow between IMPACT and the C2-CST system must also pass through the Unity workstation, which acts as a mediator and supports the computational offload for the HoloLens devices (Fig. 3).

Fig. 3. C2-CST connections and communications design

C2-CST communications utilize the ZeroMQ asynchronous messaging library, an open-source network communications framework for distributed messaging [12]. IMPACT’s centralized HUB component distributes all of its messages to each component connected to the HUB. The two messaging patterns used throughout the entirety of IMPACT, and by extension C2-CST, are listed below, followed by a brief code sketch of both patterns:

  • Pub-Sub: The first messaging pattern is the Publish-Subscribe model, in which the HUB publishes every message it receives to a port. All services and components subscribe to the HUB’s IP and port to receive messages and filter the messages they read based on the header information.

  • Push-Pull: The second messaging pattern is the Push-Pull model, which allows all services to push messages to the HUB’s IP and port for distribution to each end user. The HUB receives these messages by pulling from its port and publishes them back out.
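The sketch below, using the pyzmq bindings, shows one way these two patterns can be exercised from a client’s perspective. The HUB endpoints, port numbers, header names, and multipart framing are assumptions for illustration only and do not reflect the actual IMPACT message formats.

```python
# Minimal sketch of the two ZeroMQ patterns described above, using pyzmq.
import zmq

HUB_PUB_ADDRESS = "tcp://impact-hub:5556"    # hypothetical HUB publish endpoint
HUB_PULL_ADDRESS = "tcp://impact-hub:5557"   # hypothetical HUB pull endpoint

context = zmq.Context()

# Pub-Sub: subscribe to everything the HUB publishes and filter by header.
sub = context.socket(zmq.SUB)
sub.connect(HUB_PUB_ADDRESS)
sub.setsockopt_string(zmq.SUBSCRIBE, "")         # receive all messages
WANTED_HEADERS = {"VehicleState", "PlayStatus"}  # hypothetical message types

# Push-Pull: push outbound messages (e.g. play requests) toward the HUB.
push = context.socket(zmq.PUSH)
push.connect(HUB_PULL_ADDRESS)

def poll_once():
    """Read one HUB message and keep it only if its header is of interest."""
    header, body = sub.recv_multipart()
    if header.decode() in WANTED_HEADERS:
        return header.decode(), body
    return None

def send_to_hub(header: str, body: bytes):
    """Push a message to the HUB for redistribution to subscribers."""
    push.send_multipart([header.encode(), body])
```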

3.4 Shared Experiences via Mixed Reality Toolkit

The Mixed Reality Toolkit is an open-source collection of scripts and components intended to accelerate development of applications targeting the Microsoft HoloLens and Windows Mixed Reality headsets [15]. The Mixed Reality Toolkit was developed by the Microsoft Corporation and is necessary for the C2-CST application to have both collaboration and sharing capabilities. The stacked layer consists of the C2-CST application on top of the Mixed Reality Toolkit for Unity, which in turn rests on top of the Windows 10 UWP (Mixed Reality APIs) (Fig. 4).

Fig. 4. Microsoft HoloLens stacked layer API

This toolkit allows multiple HoloLens users to communicate and stay in sync seamlessly in real time, whether they are in the same location or in remote locations. When the first HoloLens joins the Unity workstation scenario via the Photon Unity server, it establishes the ‘anchor’ used for configuration and registration of the virtual sand table. As each subsequent HoloLens joins the session, it is synchronized to the holographic environment using that anchor, allowing all users to share the exact same virtual experience.
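A minimal, data-level sketch of this anchor-based joining behavior is given below. The session store, anchor representation, and headset identifiers are hypothetical; the real implementation relies on the Mixed Reality Toolkit sharing services and the Photon Unity Networking server rather than the structures shown here.

```python
# Data-level sketch of anchor-based synchronization for a shared session.
from dataclasses import dataclass, field

@dataclass
class SharedAnchor:
    """Pose of the virtual sand table relative to the anchoring headset."""
    position: tuple = (0.0, 0.0, 0.0)
    rotation: tuple = (0.0, 0.0, 0.0, 1.0)   # quaternion

@dataclass
class Session:
    anchor: SharedAnchor = None
    participants: list = field(default_factory=list)

    def join(self, headset_id: str) -> SharedAnchor:
        """First headset establishes the anchor; later ones receive it."""
        if self.anchor is None:
            self.anchor = SharedAnchor()     # placed by the first headset
        self.participants.append(headset_id)
        return self.anchor                   # applied so all share one frame

session = Session()
anchor = session.join("hololens-1")       # establishes the sand table anchor
same_anchor = session.join("hololens-2")  # joins the existing shared frame
```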

3.5 Classification Filtering of a Shared Visualization

The classification filtering of a shared visualization is an extension to shared experiences based on the Mixed Reality Toolkit. In the basic shared experience, all HoloLens users in a session share the exact same scene or scenario, and the same set of information is available to each user in that session. Classification filtering changes this from a globally shared set of information to various per-user subsets of information. The goal for C2-CST was to determine whether such an environment could be useful while still protecting against the spillage of classified information from one level to the next.

The C2-CST system handled the classification levels by filtering data for each HoloLens headset via a distinct configuration. This configuration is set on the Unity workstation; when the scenario is run and a HoloLens connects, that particular HoloLens receives a specific filtering configuration, which is displayed in the filtering toolbar. In a real-world system such a configuration would be restricted and purposely assigned based on the end user’s classification. For this development effort, the Unity workstation user can select a different filtering level for each HoloLens that joins the session. In this design there are five filters, sketched in code after the list:

  (1) Complete information access and control
  (2) Ground, air, and sea vehicle information access
  (3) Ground and air vehicle information access
  (4) Ground vehicle access
  (5) No access
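A minimal sketch of how such a per-headset filter might be represented and applied is given below, assuming hypothetical domain names and entity record layout; the real configuration lives on the Unity workstation and in the filtering toolbar.

```python
# Sketch of per-headset classification filtering using the five levels above.
from enum import IntEnum

class FilterLevel(IntEnum):
    FULL_ACCESS = 1       # complete information access and control
    GROUND_AIR_SEA = 2    # information access only
    GROUND_AIR = 3
    GROUND_ONLY = 4
    NO_ACCESS = 5

ALLOWED_DOMAINS = {
    FilterLevel.FULL_ACCESS:    {"ground", "air", "sea"},
    FilterLevel.GROUND_AIR_SEA: {"ground", "air", "sea"},
    FilterLevel.GROUND_AIR:     {"ground", "air"},
    FilterLevel.GROUND_ONLY:    {"ground"},
    FilterLevel.NO_ACCESS:      set(),
}

def filter_entities(entities, level: FilterLevel):
    """Return only the vehicle records a headset at this level may display."""
    allowed = ALLOWED_DOMAINS[level]
    return [e for e in entities if e["domain"] in allowed]

# Example: a headset configured at level 3 sees ground and air tracks only.
tracks = [{"id": "UGV-1", "domain": "ground"},
          {"id": "UAV-2", "domain": "air"},
          {"id": "USV-3", "domain": "sea"}]
print(filter_entities(tracks, FilterLevel.GROUND_AIR))
```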

A simplistic approach was taken to classification filtering, which leaves considerable room for further research, exploration, and experimentation.

3.6 Simplified IMPACT Play Workbook

The complete IMPACT system allows a plethora of “plays” to be called for various tasks such as Point/Area/Line Inspect, Shadow Hostile, or Overwatch. A play workbook was created in IMPACT which contains all the developed plays for a set of autonomous vehicles to create an intended action that correlates with the mission plan or an adaptation to that mission plan. This is a conditional action based on the current state of events and actions taking place, along with the intended mission and operational/tactical goals. From the play workbook, a type of play is selected, generally followed by a target location and its intended target or goal. IMPACT application-level autonomy supports the determination of solution sets for accomplishing the specified play (Fig. 5).

Fig. 5. The full capabilities of a play within the IMPACT system

In the C2-CST version of play-calling, HoloLens users have a very simplified play workbook. Instead of selecting the type of play and then the location, the HoloLens user’s gaze is set to the intended location for the play task to be accomplished; an air tap gesture then brings up the simplified workbook for selection of the desired play. The initial design was primarily intended for point inspect plays consistent with this simplified interaction for play calling (Fig. 6).

Fig. 6. Simplified play calling in C2-CST only allows for point inspects across different vehicle types
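As a rough sketch of what could cross the middle tier when a play is called this way, the following snippet packages a gazed-at location and the selected vehicle type into a point-inspect play request. The field names and JSON framing are hypothetical and do not reproduce the actual IMPACT play message schema.

```python
# Sketch of translating a gaze point plus an air-tap selection into a
# point-inspect play request on the middle tier (illustrative only).
import json

def build_point_inspect_request(lat: float, lon: float, vehicle_type: str) -> bytes:
    """Package the gazed-at location and chosen vehicle type as a play request."""
    request = {
        "play": "PointInspect",       # the simplified workbook only offers point inspects
        "location": {"lat": lat, "lon": lon},
        "vehicleType": vehicle_type,  # e.g. "air", "ground", or "sea"
    }
    return json.dumps(request).encode()

# Example: the user gazes at a point on the sand table and air-taps an air vehicle play.
payload = build_point_inspect_request(34.05, -117.18, "air")
# send_to_hub("PlayRequest", payload)   # forwarded to IMPACT via the HUB (see Sect. 3.3)
```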

The simplified play-calling capability within the C2-CST interface is a prime example of how the IMPACT interface could be extended as a mixed reality interface. As the scenario and environment are shared and users collaborate on intended courses of action, the C2-CST holographic user interface allows not only this collaboration but also the execution of decisions from that collaboration via play calling. The C2 interface changes from a single-user, single interface to a multi-user shared interface where collaborative decisions and actions can take place. The intended human evaluation of C2-CST was also designed to discover the value of executing play calling from a HoloLens, especially in a shared virtual environment.

4 Conclusion and Future Work

The C2-CST project provided an opportunity to examine a new user interface for C2 systems. C2-CST leveraged the Microsoft HoloLens API and associated capabilities to provide a unique user interface based on current state-of-the-art COTS mixed reality technology. The IMPACT application, based on human supervisory control of multiple unmanned autonomous systems and collaboration with in-application autonomous agents, was utilized as the baseline C2 system for the evaluation. Comparison of the HoloLens mixed reality interface to the nominal IMPACT interface provided the fundamental components of the study of human motor processing, perception, and cognition. The HoloLens device offered a new user interface paradigm of mixed reality using gaze, gesture, and voice with a unique holographic environment overlaid on the user’s view of the real-world surroundings.

C2-CST created the unique features of a shared collaborative holographic environment as well as classification-based filtering of the information displayed in that environment. C2-CST leveraged several capabilities to permit an immersive collaborative environment. These capabilities are also available to remote users.

The following describes areas of the C2-CST project that could use further research, development and experimentation:

  • Voice-based Interaction: The IMPACT system allows automated task generation and play calling through a chat box interface, as well as some operator-to-operator communications over that interface. A voice-based interface was not included in the C2-CST project goals; nonetheless, the HoloLens has a speech recognition and speech command API that would allow this additional mode of user interaction. The middle tier of C2-CST could process and translate HoloLens voice input into a format acceptable to the IMPACT HUB for processing in IMPACT. The reverse of this voice communication is also possible. Voice could play a significant role in improving advanced user interfaces for C2 systems and should be considered in future efforts.

  • Comprehensive Human Subject Matter Testing: For any new or revised operational or tactical application, feedback from subject matter experts and end users is essential to understanding the level of utility and the improvement needed, whether for existing systems or for systems still in the early design stages. Human evaluation feedback for new advanced user interfaces has been an overarching purpose of C2-CST.

  • Remote Collaboration: Remote collaboration is built into the HoloLens system. This is very beneficial for C2 systems in cases where collaboration regarding C2 information for mission planning or execution may come from disparate locations, such as a ship at sea and a land-based command center. HoloLens provides the means to allow this remote collaboration via a networked connection, within which remote participants appear as virtual avatars. These HoloLens capabilities are ideal for C2-related remote collaborations.