Collaborative virtual reality platform for visualizing space data and mission planning
This paper presents the system architecture of a collaborative virtual environment in which distributed multidisciplinary teams involved in space exploration activities come together and explore areas of scientific interest of a planet for future missions. The aim is to address the challenges that currently prevent distributed scientific and engineering meetings from exploiting their collaborative potential, as expertise, tools and datasets are at present fragmented. This paper investigates the functional characteristics of a software framework that addresses these challenges, following the design science research methodology in the context of the space industry and research. An implementation of the proposed architecture and a validation process with end users, based on the execution of different use cases, are described. These use cases cover relevant aspects of real science analysis and operations, including planetary data visualization, as the system is intended for use in future European missions. This validation suggests that the system has the potential to enhance the way space scientists will conduct space science research in the future.
Keywords: Distributed meetings · Design science research methodology · Collaborative virtual environments · Telepresence · Space mission planning · Scientific data visualization
Mars has been a major topic for most space agencies around the world, attracting much attention and funding. However, the way in which the interested parties collaborate in mission planning and operational meetings is still far from ideal. At present, these multidisciplinary tasks are carried out by geographically dispersed teams with varying fields of expertise (geologists, atmospheric scientists, engineers, etc.) that collaborate to obtain a particular outcome [17, 20]. This collaboration consists of several physical meetings in which a topic is discussed (e.g. landing site selection or the decision about the rover path on the surface) and the relevant data for each team is gathered, before the teams disperse again to their original locations, where their own tools are used for planning, processing and analyzing the data. During this time, communication between teams is limited to email and videoconferences, thus hindering the collaborative exploration of challenges and potential solutions. This is mainly because these discussions do not take place within an integrated information space that represents the true nature of the planetary conditions, but through disjointed datasets in the form of images and graphs. Typically, there are no further physical interactions until the next meeting, which usually takes place several months later, hence adding delay and cost to the overall mission. Therefore, there is an urgent need for an appropriate platform that can support collaboration among the remote expert teams involved in space mission planning.
This need has been addressed by the European Union funded project CROSS DRIVE, with a consortium consisting of atmospheric scientists, geologists, engineers, computer scientists and industrial partners involved in International Space Station and rover operations.
This paper presents a collaborative mission planning platform developed by the CROSS DRIVE consortium that allows space scientists and engineers to come together to interactively plan future missions within an immersive virtual environment. The vision behind this platform was to simulate the illusion of being “teleported” to Mars to jointly plan future missions, by combining information-rich 3D models of Mars with advanced immersive Virtual Reality (VR) technology. In this simulated environment, team members are able to meet in the same spatial and social context. In this shared context, they can build a common understanding, explore scientific data available within the virtual Mars model, make critical decisions on a safer landing site, conduct important scientific investigations during the mission, test safe rover manipulations, etc. This paper presents the technical architecture of the virtual mission-planning platform that was built to realize this vision. Specifically, it investigates the important functional characteristics of a software framework that can support experts from heterogeneous disciplines in conducting future mission planning exercises for Mars. The paper attempts to answer the following research question: What is the nature of a system architecture that can support collaboration among multidisciplinary teams during planning and operation meetings for space industry and research?
This paper is structured in the following way. Related work is discussed in Section 2. In Section 3, the research method and the approach followed are described. Section 4 provides an overall view of the problem, its relevance and the main research contributions. Section 5 focuses on the design and development of the system architecture, while Section 6 outlines the validation carried out during the whole project. Finally, Section 7 presents our conclusions and future work.
2 Related work
Team meetings play an important role in planning and delivering complex projects, supporting communication among team members and coordinating parallel team activities. For that reason, Computer Supported Cooperative Work (CSCW) has been intensively investigated during the last decades. Several tools and frameworks for developing virtual environments, such as VRJuggler, COVEN, AfreeCA and Cospaces, have been developed to explore virtual meeting environments based on distributed Virtual Reality (VR) technology. Whilst these platforms have successfully demonstrated the potential of constructing distributed platforms for creating virtual meetings for remote teams, they have not given much attention to the industry context, requirements for multi-disciplinary team interaction, task analysis and the richness of the data required for conducting appropriate team activities, especially within the context of space exploration.
Similarly, much research has attempted to explore various spatial metaphors and user embodiment techniques to enhance social interaction in virtual meetings. For example, Benford and Fahlén describe a conference table designed to show the capabilities of the spatial model of interaction (SMoI). Bowers et al. evaluated virtual meetings using conversation analysis to identify turn-taking and participation limitations; even though they used expressionless embodiments, they concluded that embodiments play an important role in social interaction. More recently, Martinez et al. replicated the traditional conference room example, but this time using a model of interaction that overcomes some of the deficiencies of the SMoI. However, all these examples concern unstructured and general-purpose meetings and do not focus on structured meetings in a real industry context.
Given the importance of user embodiments, research in telepresence technologies has tried to improve social interaction in collaborative environments. One of the approaches in that direction is the use of 3D reconstructed video for communication, creating real-time avatars from several video streams. This provides a faithful representation of the user that is able to transmit appearance, attention, action and non-verbal communication.
The technology supporting most of these developments is known as Collaborative Virtual Environments (CVEs). These are complex distributed systems that must face several challenges to become usable products. Examples of these challenges from the point of view of the user experience are described in the literature, and some of them, pointed out about 18 years ago, have not been satisfactorily solved yet. To add more difficulty, building systems by gluing together components that may work as solutions to individual problems is not guaranteed to work as a compound. Therefore, building a system architecture for CVEs requires special care and attention.
CVEs usually rely on distributed architectures to provide interactive virtual environments to geographically dispersed users. However, there is no agreement on what the right architecture for these systems is. Several types of general-purpose distributed architectures have been proposed in the literature, from the classic client-server and layer-based, to the modern service-oriented and cloud computing. Collaborative applications in different fields have used some of these types of architecture. Maher et al. describe a prototype of a system for multidisciplinary collaboration: basically a conceptual design tool using SecondLife and web-based extensions that allowed multiple representations of objects, ownership, etc. However, this kind of approach (based on generic virtual world systems) is not adequate for the purpose of the current paper, as immersion and advanced visualization techniques are required. Moerland et al. describe a distributed platform for collaborative aircraft design. The functionality and tools are easily distributed, using a service-oriented approach, to the places where the experts in one discipline reside, and the results are sent to the following tool in the procedure workflow. This contrasts with the type of meetings described in this paper, as our work is mostly exploratory and, even though meetings in CROSS DRIVE have some structure, they are not that highly structured nor follow a clear and pre-established workflow. However, the way the tools are geographically distributed facilitates the management of the services. Another example uses a five-layer architecture for a distributed system for risk assessment using VR. The layered architecture reduces software complexity, simplifying dependencies by grouping logically-related components in layers, similarly to other layered architectures described in the literature.
We explored the use of these architectures, studying the system from different perspectives in search of a sound solution, which is explained in depth in Section 5.
3 Research method and approach
As shown in Fig. 1, Phase 1 is focused on problem identification and includes guideline 2 (problem relevance) and guideline 4 (research contribution). In this initial phase, the importance of the problem is made clear by describing inherent domain challenges and proposing a potential solution approach that makes a contribution to the problem domain. After the problem has been identified, Phase 2 (artifact design and development) provides a technical solution following an engineering design and implementation process. This phase includes guideline 1 (design as an artifact) and guideline 6 (design as a search process). After the artifact that provides a solution to the problem has been developed, Phase 3 is used to evaluate it in order to demonstrate the effectiveness and completeness of the solution. Baur et al. situate guideline 7 (communication of research) after the three phases to enable researchers to build a cumulative knowledge base for further extension and evaluation. Also in Fig. 1, guideline 5 (research rigor) emphasizes the need for rigorous methods in the construction and evaluation of the artifacts throughout the entire research process.
4 Problem identification
This section addresses the first phase of the design science methodology by describing the relevance of the problem and establishing the main objectives of the proposed solution.
4.1 Problem relevance
The introduction of this paper (Section 1) already articulated the limitations of the current team meetings involving space scientists and engineers in space mission planning. Due to the fragmented nature of the data and the simulation tools, multi-disciplinary discussions during space mission planning meetings are inefficient, introducing delays and increasing costs to current space mission programmes. Therefore, there is a need for a collaborative mission planning platform that can allow space scientists and engineers to come together to interactively plan future missions.
The solution that is being explored within this project is the creation of a collaborative virtual environment that allows distributed experts to meet within a virtual representation of Mars using immersive technologies. The virtual Mars model should be based on a semantically rich information model and should offer access to necessary intelligence, as well as simulators and physical rovers, to conduct various scientific and operational investigations and team discussions.
In order to elaborate the business requirements for the collaborative virtual environment, three use cases, based on key mission planning activities, were defined in conjunction with the scientific and engineering partners of the project, as they are the typical end users of the system. The three use cases defined in this research are 1) landing site characterization, 2) Mars atmospheric data analysis and 3) rover target selection. After analyzing a wide range of possible scenarios, these use cases were selected since they represent a good mix of data analysis requirements, probe operations and close collaboration tasks between scientists and engineers in mission planning operations. These use cases allowed the domain experts and the computer scientists to collectively capture the challenges faced during mission planning and operational meetings and define the nature of the future mission planning environment. Furthermore, these cases were instrumental in implementing a co-creation approach to incrementally and iteratively define, develop, validate and refine the overall space mission planning platform. To avoid unnecessarily extending the length of the paper, the following paragraphs only describe the rover target selection use case, which in fact includes and extends the functionality developed for the other use cases. The rover target selection use case was divided into two main events: scientific characterization of the rover landing area and rover path planning.
The scientific characterization of the rover landing area starts with engineers analyzing the orbit of the spacecraft covering the area. At this level, low-resolution but full-planet-coverage datasets are required for the terrain representation, and the composition of the atmosphere needs to be available for study in order to explore the landing trajectory of the spacecraft. After this, the focus moves to regional coverage, using more detailed terrain datasets with which the scientists explore a suitable landing area on the terrain. Finally, the focus is set to local coverage, based on high-resolution data, at the place where the rover is planned to land on the Mars surface. The site selected for the use cases is Gale Crater, since a rich set of information from previous missions is available to the scientists. Once landed, status information about the rover is requested and analyzed to get a preliminary evaluation of the capabilities of the rover with respect to its mobility and the visible areas. In order to ensure that commands for the rover could be issued and its operations could be tested, this use case used the Mars and Moon Terrain Demonstrator (MMTD) facility located in the mission control center at one of the partners' facilities (Altec). The MMTD offers a physical representation of a Mars terrain of 20 × 20 m where prototypes of the ExoMars rover are being tested.
Rover path planning uses the simulated terrain in front of the rover, identifying both places of interest and possible hazards (soft soil areas, rocks, etc.). At this point, a set of paths covering interesting features of the terrain is calculated. A selection of these paths is simulated by the team using the virtual rover, and the most appropriate path from the point of view of the operational scenario is then simulated in the physical MMTD facility. The images generated by the physical rover and its telemetry data are sent back to the collaboration platform for assessment.
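The path-calculation step described above can be sketched as a cost-based graph search over a terrain grid, in which hazard cells are impassable and steeper steps cost more. This is only an illustrative sketch: the grid representation, the cost function and the `slope_weight` parameter are assumptions, not details of the actual MMTD path-planning service.

```python
import heapq

def plan_rover_path(elevation, hazards, start, goal, slope_weight=5.0):
    """Dijkstra search over a terrain grid. Cells flagged in `hazards`
    (soft soil, rocks) are impassable; steeper elevation changes cost more.
    Returns the list of grid cells from start to goal, or None."""
    rows, cols = len(elevation), len(elevation[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or hazards[nr][nc]:
                continue
            # Unit move cost plus a penalty proportional to the slope.
            step = 1.0 + slope_weight * abs(elevation[nr][nc] - elevation[r][c])
            nd = d + step
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = cell
                heapq.heappush(queue, (nd, (nr, nc)))
    if goal not in dist:
        return None  # no safe path exists
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```

A real planner would also weigh scientific interest of waypoints and rover kinematics, but the same cost-accumulation idea applies.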
4.2 Requirements extracted from the use cases
- System should support different types of meetings with different objectives to cover the full range of activities identified in the use cases of the project.
- System should support different types of users, such as core users (Mission Director, Scientists, Engineers) as well as external experts who are invited as needed with limited access rights. It should support a minimum of 8 users connected simultaneously.
- Core members should be able to connect via their immersive display systems and external users via their low-cost computers.
- System should provide access to a range of available data, including Mars terrain and atmospheric data as well as rover and satellite data.
- System should offer a range of rendering techniques, such as 3D rendering, volume visualization and 2D graphs, to visualize terrain, atmosphere and simulation data.
- System should offer a range of tools for annotation, measurement, data clipping and slicing within the Mars 3D environment.
- System should offer simulation of the rover on the Mars surface for operative sessions and connect the rover simulator to the physical rover in the MMTD facility.
- System should offer user presence through virtual avatars and should allow users to navigate, interact and discuss scientific and operational matters through audio channels.
- System should provide the ability to connect with simulators running remotely on high-performance computing clusters and visualize their results in the immersive environment.
4.3 Research contributions
Integration of disconnected remote sensing datasets to create an integrated 3D model of the planet Mars;
The management of level-of-detail control of the massive planet model to offer real-time interaction within an immersive distributed VR environment;
Access to remote compute services;
Tele-immersion for enhanced user presence;
Management of parallel team meetings within a single platform;
Tele-operation with the rover on the MMTD facility, etc.
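The level-of-detail contribution above can be illustrated by switching terrain datasets according to the viewer's altitude, mirroring the full-planet, regional and local coverage levels of the use case. This is a minimal sketch: the altitude thresholds are illustrative assumptions, not values used by the actual system.

```python
# Dataset tiers mirroring the use case's coverage levels. The altitude
# thresholds (in km) are illustrative assumptions, not project values.
LOD_TIERS = [
    (300.0, "MOLA"),        # global coverage, low resolution
    (50.0, "HRSC"),         # regional coverage, mid resolution
    (0.0, "HiRISE/CTX"),    # local coverage, high resolution
]

def select_terrain_dataset(viewer_altitude_km):
    """Pick the coarsest dataset appropriate for the viewer's altitude,
    so the massive planet model stays interactive at every zoom level."""
    for min_altitude, dataset in LOD_TIERS:
        if viewer_altitude_km >= min_altitude:
            return dataset
    return LOD_TIERS[-1][1]
```

In the real system the switch is per terrain tile rather than global, but the altitude-driven selection principle is the same.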
5 System architecture design and development
This section describes the design and development of the collaborative virtual environment for space mission planning that fulfills the user requirements identified in the previous section. In search of a sound solution, several options for the design of the collaborative platform were considered, as it is a complex task that requires an effective system architecture to support collaboration. In general, a system architecture is the conceptual model that defines the structure, behavior and views of a software system. Different sets of views are typically used in order to break down the complexity of designing software systems [25, 30, 44]. The main idea behind the use of views is to restrict attention to certain aspects of the system, ignoring others that will be addressed separately, as it is not possible to describe a complex system from just one perspective. System architecture designers are advised to first identify the set of views relevant to the system being designed [9, 26]. For this research, we first focused on the conceptual design of the system, using a set of views that extends the views of the Collaboration Lifecycle Management proposed in the Collaboration Oriented Architecture (COA) framework. These views were used to cover the activities described in the project use cases as well as to further elaborate the user requirements and identify functional characteristics of the collaborative mission planning platform.
Another common approach to describing software systems is the use of architectural patterns, such as the layered architecture, which uses layers or tiers to partition the concerns of the application. In our approach, the conceptual system views were mapped onto a three-layer architecture (presentation, service and data), within which functional modules were defined and grouped in each layer.
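The three-layer mapping can be illustrated with a minimal sketch in which dependencies only point downwards: the presentation layer calls the service layer, and only the service layer touches the data layer. The class and dataset names here are illustrative, not the project's actual module names.

```python
class DataLayer:
    """Data access: stores and retrieves datasets (Information View)."""
    def __init__(self):
        self._store = {"MOLA": "global DTM", "HRSC": "regional DTM"}

    def fetch(self, name):
        return self._store.get(name)

class ServiceLayer:
    """Visualization, collaboration and computation services; the only
    layer allowed to access the data layer directly."""
    def __init__(self, data):
        self._data = data

    def load_terrain(self, name):
        return {"dataset": name, "payload": self._data.fetch(name)}

class PresentationLayer:
    """Core (immersive VR) or external (2D) user interface."""
    def __init__(self, services):
        self._services = services

    def show_terrain(self, name):
        return self._services.load_terrain(name)

# Wiring: each layer only knows about the layer directly below it.
ui = PresentationLayer(ServiceLayer(DataLayer()))
```

The benefit, as noted above, is that dependencies stay grouped and one-directional, so a layer can be replaced (e.g. a 2D external interface instead of the VR one) without touching the layers below.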
5.1 Conceptual system design based on system views
5.1.1 Team members view
A summary of the team members' profiles, including roles, project responsibilities and meeting objectives:
- Mission Director: lead the collaborative session and ensure expectations and objectives are met.
- Scientists and engineers: present information about Mars; collaborate with other scientists, providing information about several topics of interest for the session; share results and learn.
- External experts: contribute with their expertise to the session, discuss and give off-line inputs about science results.
The typical meetings in space planning and operation are based on a turn-taking strategy. The main actor in these meetings is the Mission Director (MD), who acts as the chair of the meeting, giving the floor to the users so they can share their results. The Mission Director is typically located in the mission control center. The second type of users in these meetings are the scientists and engineers, who join the collaboration platform from their remote locations to contribute to the meetings with their own expertise. These users (MD, scientists and engineers) are considered “core users” with high security clearance to access data and sessions, as they are part of the industry consortium that is responsible for delivering the overall space mission program. These core members frequently seek advice from external scientists to interpret certain data or to help them with simulation or operational planning. These external scientists enrich meetings by bringing specific knowledge to discuss a particular scientific subject. However, external experts are only exposed to a restricted amount of information and therefore require a special interface to engage with collaborative meetings using their own computers rather than a fully-fledged VR environment. As a result, the need for a 2D visual interface that makes a selected set of data available to the external users was identified as another key functional requirement.
5.1.2 Workspace view
5.1.3 Meeting process view
The science meetings are designed to compare the archived datasets with data coming from simulated models. Typically, simulations are time consuming and demand computing power, and are hence computed on remote dedicated servers. Therefore, such simulations are conducted by the experts in their private workspaces and brought to discussions during the presentation phase of the team meetings. Similarly, the objective of the operative sessions revolves around rover operations. This includes collecting and analyzing telemetry data coming from the rover and deciding on the list of tele-commands to be sent to the real rover for execution. Once the list of tele-commands is decided, they are submitted by the MD, as this is the only user with direct access to the real rover.
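The rule that only the MD can release the agreed tele-commands can be sketched as a role check at the submission point. This is a minimal illustration: the role names and the queue API are assumptions, not the system's actual interfaces.

```python
class TelecommandQueue:
    """Members propose tele-commands during the operative session, but
    only the Mission Director can submit the agreed list to the rover."""
    def __init__(self):
        self.proposed = []   # (user name, command) pairs under discussion
        self.submitted = []  # commands released to the real rover

    def propose(self, user, command):
        self.proposed.append((user["name"], command))

    def submit(self, user):
        # Enforce the meeting rule: the MD is the single point of access
        # to the physical rover.
        if user["role"] != "mission_director":
            raise PermissionError("only the Mission Director may submit")
        self.submitted.extend(cmd for _, cmd in self.proposed)
        self.proposed.clear()
        return self.submitted
```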
5.1.4 Communication view
5.1.5 User interface view
The user requirements demanded two types of user interfaces: a fully immersive VR interface for the core users (MD, scientists and engineers) and a 2D interface for the external scientific experts. The former should provide access to the complete functionality of the VR system, while the latter should provide reduced access to datasets and functionality. In order to support the fully immersive experience for the core users, the virtual environment should support display technologies such as Powerwalls, CAVEs and HMDs, with body tracking (especially head and hands) and 3D interaction devices for navigation and object interaction tasks. In our research, a ray-casting interaction technique is used in conjunction with a virtual joystick, similar to the hand-directed movement technique described in the literature, as a navigation technique.
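Ray-casting selection, as used above, amounts to shooting a ray from the tracked hand and picking the nearest object it hits in front of the user. The sketch below tests against bounding spheres; it is an illustrative simplification, not the system's actual implementation.

```python
import math

def ray_pick(origin, direction, objects):
    """Return the id of the nearest object whose bounding sphere the ray
    hits, or None. `objects` maps id -> (center, radius); points are
    3-tuples in the same coordinate frame as the tracked hand."""
    norm = math.sqrt(sum(d * d for d in direction))
    d = [c / norm for c in direction]  # unit ray direction
    best = (float("inf"), None)
    for obj_id, (center, radius) in objects.items():
        oc = [c - o for c, o in zip(center, origin)]
        t = sum(a * b for a, b in zip(oc, d))       # projection onto the ray
        if t < 0:
            continue                                 # object is behind the user
        closest2 = sum(x * x for x in oc) - t * t   # squared ray-to-center distance
        if closest2 <= radius * radius and t < best[0]:
            best = (t, obj_id)                       # keep the nearest hit
    return best[1]
```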
The external interface is designed for common desktop PCs, providing reduced interaction with the core system. The main idea behind this is to allow the external system to be executed on a wide range of PCs without the need of high-end computers. Therefore, the external interface is based on the windows metaphor and makes use of standard keyboard and mouse interaction. The 3D models of Mars are replaced by 2D maps that can be explored in a similar way to Google Maps.
5.1.6 Activities and tools view
Data Exploration Tools: The data exploration tools were divided into two categories: terrain and atmosphere. The terrain tools allow the user to show or hide the various datasets available, exaggerate the height information of the terrain for easier exploration, draw contour lines at configurable intervals, and colour-code the terrain according to the topography (elevation, slopes, etc.). The atmosphere tools allow the user to visualize various atmospheric data using volume rendering, iso-surface visualization, data slicing and clipping, hiding and showing of various data elements, visualization of 2D maps to illustrate simulated or measured data, and altitude exaggeration for easier exploration.
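Two of the terrain tools above, height exaggeration and contour lines at a configurable interval, reduce to simple arithmetic. The function names below are illustrative, not the project's API:

```python
import math

def exaggerate_heights(elevations, factor):
    """Scale terrain heights so subtle relief becomes visible."""
    return [h * factor for h in elevations]

def contour_levels(min_h, max_h, interval):
    """Elevations at which to draw contour lines, at a fixed interval
    aligned to multiples of that interval (as map contours usually are)."""
    levels = []
    h = math.ceil(min_h / interval) * interval  # first multiple >= min_h
    while h <= max_h:
        levels.append(h)
        h += interval
    return levels
```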
GIS Tools: The GIS tools allow drawing annotations on the terrain or the atmosphere using different shapes, arrows, text, ellipses and polygons during private or team exploration activities. Moreover, these tools can be used to measure distances (Euclidean or taking the topography into account).
Engineering Tools: The engineering tools provide the functionality to interact with the rover and satellite simulations, as well as to interact with the physical rover on the MMTD.
Due to the restricted access imposed on the external users, the system has to control the type of activities they could perform. In the current implementation, the tools that were made available to these users are presented in Fig. 7.
5.1.7 Information view
- Engineering data (rover and satellite):
Mars Science Laboratory and Mars Exploration Rovers (MSL/MER) NASA images (archived) taken by the NASA rovers on Mars.
MMTD images (archived and taken in “real-time”). They consist of camera images, thermal images and stereo images of the MMTD facility.
Orbits of satellites (timestamped positions), used to contextualize the rover position and the terrain and atmospheric data.
- Scientific data:
- Mars geology and geodesy:
MOLA: Mars Orbiter Laser Altimeter. Consists of a digital terrain model (DTM) with low resolution but almost full planet coverage.
HRSC: High Resolution Stereo Camera, mounted on Mars Express. Consists of DTMs and orthoimages of mid-level resolution and limited coverage.
HiRISE, CTX and CRISM: High Resolution Imaging Science Experiment, Context Camera, and Compact Reconnaissance Imaging Spectrometer for Mars. These three instruments are usually operated in parallel, obtaining data that is nested. They consist of DTMs and orthoimages with higher resolution but low coverage.
SHARAD: Shallow Radar. Consists of subsurface radargram images.
- Mars atmosphere:
BGM4: GEM-Mars global climate model output, 1-year reference run. Provides 3D fields (temperature, pressure, wind, air density, dust extinction, etc.), 2D fields (surface temperature and pressure, water ice opacity, etc.) and animated vectors (winds) based on simulated data.
PFS (levels 1 and 2): Observations based on the Planetary Fourier Spectrometer on board Mars Express. These data can be used to generate different kinds of 3D plots (temperature profiles) and 2D plots (surface temperatures and aerosol opacities) based on real data observations.
Tohoku ground-based measurements: telescope observations. Consist of 2D plots (H2O, CO2, etc.) based on observations from Earth.
5.2 System architecture
5.2.1 Presentation layer
Core User Interface Module: The core users are the participants that use the Virtual Reality facilities. This module offers an immersive user experience via stereoscopic visualization and body tracking capabilities. Once immersed, the users have access to a 3D interaction device (a flystick in the current implementation) with a set of buttons to execute various tasks, such as selecting a dataset, drawing a rover path or creating a landmark through a floating 3D window. This floating 3D window metaphor allows the selection and combination of the different datasets in an easy way, since mapping all the actions to the flystick buttons would not be possible (see the left side of the screenshot shown for the core user interface in Fig. 9).
External User Interface Module: This module offers a 2D representation of the area of interest to the remote external user and allows them to explore the area, using the limited set of tools described in the previous section, through a 2D interface based on screen, mouse and keyboard. This module is intended to run on low-end desktops or laptops and, therefore, the amount of data shared with it needs to be controlled to allow real-time interaction. However, the external users share the same area of interest with the core users to carry out collaborative discussions and data exploration.
5.2.2 Service layer
Visualization services: These services provide the functionality to visualize the Mars data and allow the users to interact with the virtual environment and perform their exploration tasks. For the data visualization, this research deployed the terrain visualization framework and VERITAS. Furthermore, the data exploration tools and GIS tools described under the Activities and Tools View in Section 5.1.6 were integrated into these visualization systems.
Remote computational services: This group of services refers to the required computation tools and to the rover real-time system that are necessary during the private or group sessions described under the Workspace View and the Activities and Tools View. An example of this is the MMTD rover path planning service, which calculates the optimal path for the rover to travel to a point of interest by taking the topology of the terrain into consideration. Other simulation services considered in this project include the integration of the ASIMUT tool for atmospheric simulation. These services are geographically located in the facilities of the partners responsible for the tools, in order to facilitate their management (similar to a service-oriented approach).
Collaboration services: These services represent the functionalities presented under the Meeting Process View, Workspace View, Team Members View and Communication View. This group of services is responsible for managing the collaborative sessions, the workspaces, the network distribution, and the communication between users. This also contains the low-level technology-centric aspects about the network architecture and the distribution approach used. This approach is discussed in Section 5.3.
5.2.3 Data layer
The data layer provides the data access service for the service layer to store and retrieve different types of information corresponding to the Information View.
Regarding the scientific data, the terrain datasets are optimized for visualization using the HEALPix tessellation. The atmospheric datasets are converted and stored in the VTK (Visualization Toolkit) format, using the MOLA coordinate system as a reference system. Bringing all these datasets into the same reference system opens the door to direct comparisons. For example, at some point in Use Cases 2 and 3, the Tohoku ground-based observations, PFS (satellite observations) and BGM4 (model) are compared, while geographical information is still provided by MOLA and HRSC.
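Conceptually, such comparisons become possible once every sample's areocentric coordinates map into one Mars-fixed Cartesian frame. A hedged sketch under a spherical-datum assumption follows (MOLA products actually reference an areoid, so this is a simplification; the function name is illustrative):

```python
import math

MARS_MEAN_RADIUS_KM = 3389.5  # IAU mean radius, used here as a spherical datum

def areocentric_to_cartesian(lat_deg, lon_deg, altitude_km=0.0):
    """Map areocentric latitude/longitude (degrees) and altitude above the
    mean sphere to Cartesian coordinates (km) in a Mars-fixed frame, so
    samples from different instruments can be compared at the same point."""
    r = MARS_MEAN_RADIUS_KM + altitude_km
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))
```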
Regarding the engineering data, the MMTD images consist of a library of images taken by the real rover in the MMTD facility, in a similar way to the MSL/MER library of images taken by the NASA rovers on Mars. The orbit data consists of timestamped positions of the natural and artificial satellites of Mars. Therefore, it is possible to travel back in time to the particular date when an observation or picture was taken and check the positions of the satellites and the rovers on the surface on that date.
Security is an important aspect of the overall system, since some of the data is accessible only to the core users. Security mechanisms therefore need to be applied to all the architecture layers, especially the service layer, since that is where most of the services that access archived data reside and where the network connections are managed. The system architecture is depicted in Fig. 8 as “layers with sidecar”, as described in , meaning that each layer can use security features.
5.3 Architecture deployment
The overall system uses a hybrid network architecture: all user and session management messages are sent using a client-server architecture, while user and object positions are sent using a peer-to-peer architecture to provide faster response in interaction tasks. The messages exchanged are encrypted using an asymmetric public-key cryptosystem so that only authorized partners can read them. The server in the overall client-server architecture is the CDServer, located at the mission control center, which provides an additional level of security, as the CDServer checks every message to make sure it is allowed at that point in the meeting. The CDProxy allows external users, who typically have a random IP address, to connect to the core system, providing an additional level of security for external connections, as the CDServer can only be reached by the IP addresses of the core members of the consortium.
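The routing decision at the heart of this hybrid approach can be sketched as follows. The topic names and callbacks are hypothetical, and encryption, the CDProxy, and per-meeting permission checks are deliberately left out:

```python
# Hypothetical message topics: management traffic goes via the server,
# high-rate pose updates go directly between peers.
SERVER_TOPICS = {"join", "leave", "session_state", "telecommand"}
PEER_TOPICS = {"user_pose", "object_pose"}

class HybridRouter:
    """Dispatches each message over the channel the hybrid
    architecture assigns to its topic."""

    def __init__(self, send_to_server, send_to_peers):
        self.send_to_server = send_to_server  # client-server path (CDServer)
        self.send_to_peers = send_to_peers    # low-latency peer-to-peer path

    def route(self, topic, payload):
        if topic in SERVER_TOPICS:
            self.send_to_server(topic, payload)
        elif topic in PEER_TOPICS:
            self.send_to_peers(topic, payload)
        else:
            raise ValueError(f"unknown topic: {topic}")
```

The design choice this mirrors is latency-driven: session management can tolerate a server round trip (and benefits from central checking), whereas pose updates arrive many times per second and need the shortest path between clients.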
In order to support telepresence, every core facility should have 3D user capture hardware to support 3D user reconstruction. A separate peer-to-peer arrangement is supported between the telepresence clients in order to offer faster response. However, in the current implementation, it is only available in one of the nodes (the OCTAVE at the University of Salford) .
Finally, remote computation servers can be accessed through the CDServer for compute-intensive simulation requests.
With regard to the design evaluation methods described in , the evaluation performed during the development of the artifact is observational. It was carried out mainly by studying the artifact while it was being used by the end users during each of the three use cases created for its validation. These use cases were designed following an incremental approach. Since the purpose of this project was to develop a system that can be used in current and future European missions, the use cases were based on relevant and common scenarios in space science and engineering, designed with the help of the end users of the consortium.
The use cases were used for a functional validation of the development of the system. In these validations, the end users (as experts) tested the system to assess if all the functionality and actions described in the use cases could be performed.
The evaluations tried to gather as many end users from the project partners as possible in order to get feedback that could help improve the system. Four expert users took part in use case 1, joining from two science home bases, one located at DLR (Germany) and the other at the University of Salford (UK), one engineering home base located at TASI (Italy), and the mission control center located at Altec (Italy). The use cases involved a range of VR displays (from PowerWalls to the OCTAVE) and interaction technologies (mainly optical systems using passive markers for head and hand tracking, and joysticks). The remote facilities were linked using CROSS DRIVE’s distributed architecture and had an audio connection so that the participants could discuss the mission and tasks. For use cases 2 and 3, further core and external users joined as atmospheric experts from BIRA (Belgium), INAF (Italy) and Tohoku University (Japan), making a total of 8 users connected simultaneously (which coincides with the minimum number of users stated by the system requirements in Section 4.2).
For use case 2, the focus was on the visualization, analysis and discussion of state-of-the-art research on the Mars atmosphere. The objective was to explore the landing site location using global views of Mars to analyze concepts related to atmospheric temperature fields, suspended dust and ice, global circulation, and dynamics. Data coming from models is compared to real observations from Earth and satellites, which helps to see the structure of the atmosphere on any particular day, so that an entry, descent, and landing can later be studied. The middle row of screenshots (b) shows different atmospheric datasets being displayed: from left to right, 3D temperature fields from PFS satellite observations, ice opacity from ground observations, and volume rendering of ozone from the BGM4 model.
Finally, use case 3 focused on the visualization and analysis of the engineering data related to the operational phase of a robotic mission. The story behind this use case was to plan rover operations at the previously selected landing site. It therefore included the tasks of use cases 1 and 2 and added simulated rover operation and the transmission of telecommands to the real rover in the MMTD facility (simulating the rover on Mars). The bottom row (c) of Fig. 11 shows the Mars Express spacecraft orbit over the terrain under study. After this, the activities described for use cases 1 and 2 were carried out before starting the rover path planning in the simulated terrain (middle picture). Finally, the third picture shows the view of a camera located in the MMTD while the real rover followed the path defined in the simulated environment.
6.1 Results of the observational evaluation
We used different techniques to get feedback from the end users. Namely, we encouraged them to think aloud during the validations, observed how they coped with the system and interviewed them afterwards. The execution of the use cases demonstrated that the system performed properly, supporting the distributed interaction among users.
During the validation of the first use case, we noticed that it was difficult for some users to navigate to the region under study (some of them had little or no experience with VR devices and displays). To solve this problem, we added the possibility to travel to a set of predefined locations, as well as to the location of any GIS element created on the surface. This was particularly helpful, as one experienced user could create a landmark on the terrain, name it, and ask the rest of the team to click on its name to be teleported to that location. Moreover, it was hard for users to see these GIS elements (i.e. landmarks on the terrain) from a planetary view, as their world size was fixed. This was solved by making them scale with distance, so they had a constant apparent size regardless of the distance from the viewer.
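The distance-dependent scaling fix can be sketched in a few lines: scaling the landmark's world size linearly with viewer distance keeps its projected (apparent) size constant. The `base_size` and `ref_distance` parameters are illustrative, not values from the system:

```python
import math

def apparent_scale(landmark_pos, viewer_pos, base_size=1.0, ref_distance=10.0):
    """World-space scale factor for a landmark so that its on-screen size
    stays constant: at ref_distance it is drawn at base_size, and the
    scale grows linearly with distance after that."""
    d = math.dist(landmark_pos, viewer_pos)
    return base_size * max(d, 1e-6) / ref_distance  # avoid zero scale
```

Applied every frame, a landmark viewed from orbit is scaled up enough to remain a visible, clickable marker, while up close it shrinks back to its nominal size.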
During the second use case, the scientists reported that it was not easy to get used to the combination of buttons designed to perform most of the actions, as these had increased significantly from use case 1. This led to a redesign of the interaction, which ended up including a floating menu in front of the user (as can be seen on the left-hand side of Fig. 9).
Finally, the third use case provided feedback on functionality that would be interesting to include in future work. For example, some users suggested that it would be interesting if the pictures taken by the real rover in the MMTD were included in the virtual environment, to enrich the system with data coming from the real world (in a real mission, these data would come from the rover on Mars). This could also include the 3D reconstruction and placement of the terrain in front of the user using the stereoscopic camera mounted on the rover.
Apart from this observational validation with end users based on case studies, a formal experimental evaluation studying the usability of the system is foreseen as future work.
6.2 Comparison with other virtual meeting systems
Due to the particular characteristics of the CROSS DRIVE system, it is not easy to compare it to other available virtual meeting solutions. One of its main characteristics, the visualization of geographic and atmospheric data, is not available in any other virtual meeting environment.
Nonetheless, Table 2 provides a comparison of CROSS DRIVE with 8 other virtual meeting systems that are currently available. As the table shows, no other solution provides support for the visualization of large scale data, 3D avatars reconstructed from video, full awareness of non-verbal behavior (NVB) or the connection to physical systems. However, other platforms provide functionality that is not available within CROSS DRIVE, such as support for mobile devices, video chat, the ability to load custom 3D models, the inclusion of a shared whiteboard or the possibility to draw in 3D space. These were not considered to be essential characteristics during the analysis and design stages, but would certainly help the communication of the users in some circumstances.
Skype is a well-known and broadly used tool for holding online meetings. In fact, as mentioned in the introduction, it is currently used in space mission planning. However, even though it is able to convey a wide range of NVB, its drawbacks are apparent. The main reason is that the user is not immersed within the data, so it is hard to contextualize the NVB (e.g., eye gaze).
Table 2 Comparison of CROSS DRIVE with other virtual meeting systems
The CROSS DRIVE project aimed at supporting the landing site selection for the ExoMars rover mission. As there have been few missions, a standard procedure for landing site classification is yet to emerge. Characterizing landing sites is thus a very individual process, always highly adapted to the goals of the specific space mission. Fortunately, very precise descriptions of NASA’s approaches for various missions, such as the 2020 Mars rover , InSight , Mars Science Laboratory , and the Mars Exploration Program , have been published. Little is published about ESA’s approaches (e.g. for Beagle 2 or Schiaparelli), but members of the CROSS DRIVE team participated in the landing site characterization for the ExoMars rover mission. They reported small local teams working in isolation in their own institutes on very specific scientific questions. Tele-conferences were organized to discuss progress and results of characterization issues and potentially good landing site candidates by sharing PowerPoint presentations. We talked to the planetary researchers involved about the potential of distributed interactive environments, like that offered by CROSS DRIVE, to improve collaborative landing site discussion sessions. A high demand was identified for interactive presentations of basic information (like elevation models) and derived surface characterizations, to build a common understanding of findings and open issues. On the other hand, CROSS DRIVE was considered much too complex to be supported by simple tele-conferences. Unfortunately, the space scientists were already working on the site selection when CROSS DRIVE came into play, so they had completed their decision making before really making use of it. However, this closeness in timing allowed the space scientists to imagine how CROSS DRIVE might have helped. They felt that meeting on virtual planets to plan future missions was very attractive.
An important prerequisite for uptake would be the reduction of the hardware resource requirements. Immersive virtual environments, like multi-wall installations, may be advantageous but are much too expensive for sporadic use. With the availability of cheap head-mounted displays, virtual reality based collaborative sessions become much more affordable. Augmented reality (AR) devices (like Microsoft’s HoloLens) might also be integrated. In follow-up projects of CROSS DRIVE, teams are already working on the integration of AR devices and on tackling the real-time issues that accompany such wireless visualization systems. Eventually, this always leads to level-of-detail (LOD) techniques, which adapt the complexity of the scene to the eye distance but also to the performance of the hardware in use. This was already considered in the development of CROSS DRIVE’s 3D visualization methods in order to maintain a usable interactive session for the scientists. The rendering is decoupled from the data processing. Depending on the hardware performance, the scene complexity is increased iteratively up to the point where the frame rate drops below a threshold. This guaranteed 60 fps stereo projection in interactive, immersive environments, whereas good visual results were achieved with a minimum of 30 fps in mono on less powerful laptops. A user-adjustable parameter controlling the level-of-detail factor allows managing the trade-off between frame rate and visual quality.
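The frame-rate-driven complexity loop described above can be sketched as follows, with `measure_fps` standing in for rendering a trial frame at a candidate LOD level; the level scale and default values are assumptions, not CROSS DRIVE's actual parameters:

```python
def adapt_lod(measure_fps, min_fps=60.0, max_level=32):
    """Raise the scene complexity level step by step while the measured
    frame rate still meets the threshold; return the last level that did.

    measure_fps(level) -> fps achieved when rendering at that LOD level."""
    level = 1
    while level < max_level and measure_fps(level + 1) >= min_fps:
        level += 1
    return level
```

With a renderer whose frame rate falls off with complexity, the loop settles on the highest level that still meets the target, which is exactly the "increase until the frame rate drops below a threshold" behaviour the text describes.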
Although many desktop applications offer more precise map-based GIS tools, immersive environments can provide additional advantages over desktop systems when 3D perception and direct interaction are beneficial. Thus, we integrated sub-surface radar data from SHARAD (SHAllow RADar, an instrument on the Mars Reconnaissance Orbiter) for evaluating correlations between sub-surface profiles and the surrounding terrain. While in desktop applications the radar image is depicted side by side with the terrain map, we placed the radar profile at its exact position, orthogonal to the terrain surface. Additionally, the half of the terrain between the user and the sub-surface is drawn semi-transparent, which allows a direct view of the radar profile and the terrain surface behind it. This approach directly depicts the correlation between detected radar features and their continuation on the terrain. However, a correct perception is only possible with stereo projection.
Another tool we have implemented for virtual reality based environments is the dip-and-strike tool. It helps to mark points on sedimentary rocks to specify connected stratigraphic levels. A plane is then automatically constructed through all marked points. Only in stereoscopic environments can its orientation and inclination be directly perceived and assessed. Additionally, the comparison with the result from a GIS tool (ArcMap by ESRI Inc.) demonstrated the robustness of the implementation.
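The paper does not state how the plane is fitted, so the following is a minimal least-squares sketch under assumed conventions (x east, y north, z up; a non-vertical plane), returning dip, dip direction, and strike in degrees:

```python
import math

def _det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def dip_and_strike(points):
    """Least-squares fit of the plane z = a*x + b*y + c through the marked
    points; returns (dip, dip_direction, strike) in degrees.
    Degenerates for near-vertical planes (a limitation of this sketch)."""
    n = len(points)
    sx = sum(x for x, y, z in points); sy = sum(y for x, y, z in points)
    sz = sum(z for x, y, z in points)
    sxx = sum(x * x for x, y, z in points)
    syy = sum(y * y for x, y, z in points)
    sxy = sum(x * y for x, y, z in points)
    sxz = sum(x * z for x, y, z in points)
    syz = sum(y * z for x, y, z in points)
    # Normal equations for the unknowns [a, b, c], solved by Cramer's rule.
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, float(n)]]
    rhs = [sxz, syz, sz]
    d = _det3(m)

    def replaced(col):
        return [[rhs[r] if c == col else m[r][c] for c in range(3)]
                for r in range(3)]

    a = _det3(replaced(0)) / d
    b = _det3(replaced(1)) / d
    # Steepest descent of z = a*x + b*y + c points along (-a, -b).
    dip = math.degrees(math.atan(math.hypot(a, b)))
    dip_direction = math.degrees(math.atan2(-a, -b)) % 360.0
    strike = (dip_direction - 90.0) % 360.0  # strike is 90 deg off the dip
    return dip, dip_direction, strike
```

For points on a bed dipping 45° toward the east, the sketch recovers a 45° dip with a dip direction of 90° (east) and a strike of 0° (north), which is the kind of result the tool's comparison with ArcMap would check.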
The planetary scientists confirmed significant advantages over the desktop tools they had used so far. Besides the approaches depicted above, they also found the CROSS DRIVE tools for placing landing ellipses and landmarks, drawing rover paths, and constructing topographic cross sections (for slope analysis) highly helpful for geological landing site characterization. They also confirmed the quality of the measurements by comparing the results with those obtained from the independent measurement software tools they normally use.
There are several ways in which our system could be improved. One is related to communication in collaborative systems. A large amount of information, mostly spoken, is exchanged between users during CROSS DRIVE collaborative sessions, which makes it difficult to document or log what happens in them. If these conversations were automatically converted into text, AI tools, including natural language processing, would allow the creation of reports for each session, extracting information about the progress, the decisions taken, the strategy followed, and so on. This would be useful, for example, to document the session for future reference or dissemination purposes, or even to identify recurring problems that may require improvements in the system. The current user input interface is based on the selection of 3D menu items through a pointer. Alternative natural language interfaces could be developed that some might find more intuitive. Such interfaces might also make it easier for people to interpret what teammates are doing when controlling the system, although at the same time they could interfere with conversation.
7 Conclusions and future work
The main contribution of this paper is the detailed design of a software architecture that can support multi-functional team collaboration for the space industry (science and engineering). Fragmentation of datasets and expertise leave little scope for collaborative activities in current space exploration and mission planning tasks. This paper details the investigation, design and development of a collaborative environment for multi-functional dispersed teams, to address this problem. This is done within the context of design science in information systems research methodology. The research question concerns the nature of a system architecture that supports team collaboration for space science.
This paper outlines the architectural design of a platform to support computer-mediated meetings. In these meetings, the scientists and engineers can be immersed into the data, interact in a natural way with the environment, and use simulation focused verbal and non-verbal communication between team members. The conceptual architecture is defined using a generic 3-layered architectural pattern enriched with the description of six system views. These views formed the basis for defining the system requirements and designing and implementing the final system architecture. The system requirements were elicited from the usage scenarios described in conjunction with the end-users.
The system was validated through three different use cases representing a wide range of common usage scenarios for European space science (mainly ExoMars). Unfortunately, the need for expert users prevented a sample size sufficient for a meaningful quantitative evaluation.
It is expected that the successful outcome of CROSS DRIVE will have a significant impact on how future missions, such as ExoMars, will be designed and validated; the way space scientists will conduct space science research in the future; the mobilization of the best expertise in various fields of science for the analysis and interpretation of space data; and in how distributed scientists and researchers will work together to engage in data analysis and interpretation.
Future work could include the use of AI, including natural language processing, both to gain information about how decisions were made and to make the interface more intuitive to some users. The integration of head-mounted displays would provide a more affordable solution, although hiding the face poses a challenge for both local and video-based telepresence collaboration. Augmented reality technologies could also be integrated, but current approaches have a low field of view that is not well suited to the visualization of big terrain datasets and complex atmospheric data. A quantitative evaluation of the system could recruit from a larger non-expert user group to answer generic usability questions.
The work presented in this publication has received funding from the European Union Seventh Framework Programme (FP7/2007- 2013) under grant agreement no. 607177. We thank all partners in the CROSS DRIVE team for their contribution, recommendations, and evaluations of the depicted software framework.
- 1.(2018) VTK: Visualization Toolkit (online). https://www.vtk.org/, accessed: 2018-05-20
- 2.Arvidson R, Adams D, Bonfiglio G, Christensen P, Cull S, Golombek M, Guinn J, Guinness E, et al. (2008) Mars exploration program 2007 phoenix landing site selection and characteristics. J Geophys Res Planets, vol 113(E3). https://doi.org/10.1029/2007JE003021
- 3.Bassanino M, Wu KC, Yao J, Khosrowshahi F, Fernando T, Skjærbæk J (2010) The impact of immersive virtual reality on visualisation for a design review in construction. In: 2010 14th international conference information visualisation (IV). IEEE, pp 585–589
- 4.Basso V, Pasquinelli M, Rocci L, Bar C, Marello M (2010) Collaborative system engineering usage at Thales Alenia Space Italia. In: Proceedings of the conference on system and concurrent engineering for space applications SECESA, ESA
- 6.Benford S, Fahlén L (1993) A spatial model of interaction in large virtual environments. In: Proceedings of the third conference on European conference on computer-supported cooperative work. https://doi.org/10.1007/978-94-011-2094-4. Kluwer Academic Publishers, Milan, pp 109–124
- 7.Bierbaum A, Just C, Hartling P, Meinert K, Baker A, Cruz-Neira C (2001) VR Juggler: a virtual platform for virtual reality application development. In: 2001 Proceedings of the IEEE virtual reality. IEEE, pp 89–96
- 8.Bowers J, Pycock J, O’Brien J (1996) Talk and embodiment in collaborative virtual environments. In: Proceedings of the SIGCHI conference on human factors in computing systems - CHI ’96, ACM Press, March 1995. https://doi.org/10.1145/238386.238404, pp 58–65
- 9.Clements P, Garlan D, Little R, Nord R, Stafford J (2003) Documenting software architectures: views and beyond. In: https://doi.org/10.1109/ICSE.2003.1201264, pp 3–4
- 10.De Lucia A, Francese R, Passero I, Tortora G (2008) SLMeeting: supporting collaborative work in Second Life. In: Proceedings of the working conference on advanced visual interfaces. ACM, pp 301–304
- 12.Fernando T, Wu KC, Bassanino M (2013) Designing a novel virtual collaborative environment to support collaboration in design review meetings
- 13.Fraser M, Glover T, Vaghi I, Benford S, Greenhalgh C, Hindmarsh J, Heath C (2000) Revealing the realities of collaborative virtual reality. In: Proceedings of the third international conference on collaborative virtual environments. ACM, pp 29–37
- 14.Fuchs H, Bishop G, Arthur K, McMillan L, Bajcsy R, Lee SW, Farid H, Kanade T (1994) Virtual space teleconferencing using a sea of cameras. In: Proceedings of the first international conference on medical robotics and computer assisted surgery, vol 26, pp 161–167
- 15.García AS, Roberts DJ, Bar C, Wolff R, Dodiya J, Gerndt A (2015) A collaborative workspace architecture for strengthening collaboration among space scientists. In: Aerospace, Institute of Electrical and Electronics Engineers, vol 2015-June. https://doi.org/10.1109/AERO.2015.7118994
- 16.Geyer W, Richter H, Fuchs L, Frauenhofer T, Daijavad S, Poltrock S (2001) A team collaboration space supporting capture and access of virtual meetings. In: Proceedings of the 2001 international ACM SIGGROUP conference on supporting group work. ACM Press, pp 188–196
- 18.Golombek M, Kipp D, Warner N, Daubar IJ, Fergason R, Kirk RL, Beyer R, Huertas A, Piqueux S, Putzig NE, Campbell BA, Morgan GA, Charalambous C, Pike WT, Gwinner K et al (2017) Selection of the InSight landing site. Space Sci Rev 211(1):5–95. https://doi.org/10.1007/s11214-016-0321-9
- 22.The Open Group (2008) Collaboration oriented architectures. https://collaboration.opengroup.org/jericho/COA_v1.0.pdf
- 23.Gwinner K, Scholten F, Preusker F, Elgner S, Roatsch T, Spiegel M, Schmidt R, Oberst J, Jaumann R, Heipke C (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth Planet Sci Lett 294(3-4):506–519
- 25.Hofmeister C, Nord R, Soni D (2000) Applied software architecture. http://www.pearsonhighered.com/educator/product/Applied-Software-Architecture/9780201325713.page
- 26.International Organization Of Standardization (2011) ISO/IEC/IEEE 42010:2011 - systems and software engineering – architecture description. ISOIECIEEE 420102011E Revision of ISOIEC 420102007 and IEEE Std 14712000 2011(March), pp 1–46. https://doi.org/10.1109/IEEESTD.2011.6129467
- 27.Jaakkola H, Thalheim B (2011) Architecture-driven modelling methodologies. In: Frontiers in artificial intelligence and applications, vol 225, pp 97–116, DOI https://doi.org/10.3233/978-1-60750-690-4-97
- 28.Jun Z, Ya H, Cao Y (2012) A distributed virtual geographic environment system for risk assessment of dam-break. In: 2012 9th international conference on fuzzy systems and knowledge discovery (FSKD). IEEE, pp 2541–2545
- 29.Kobayashi N, Tokunaga AT, Terada H, Goto M, Weber M, Potter R, Onaka PM, Ching GK, Young TT, Fletcher K et al (2000) IRCS: infrared camera and spectrograph for the Subaru telescope. In: Optical and IR telescope instrumentation and detectors, international society for optics and photonics, vol 4008, pp 1056–1067
- 31.Maher ML, Rosenman M, Merrick K, Macindoe O, Marchant D (2006) DesignWorld: an augmented 3D virtual world for multidisciplinary, collaborative design. In: Proceedings of CAADRIA
- 32.Martinez D, Molina JP, García AS, Martinez J, Gonzalez P (2010) AFReeCA: extending the spatial model of interaction. In: Proceedings - 2010 international conference on cyberworlds, CW. https://doi.org/10.1109/CW.2010.63, vol 2010, pp 17–24
- 33.McEwen AS, Eliason EM, Bergstrom JW, Bridges NT, Hansen CJ, Delamere WA, Grant JA, Gulick VC, Herkenhoff KE, Keszthelyi L et al (2007) Mars reconnaissance Orbiter’s High Resolution Imaging Science Experiment (HiRISE). J Geophys Res Planets, vol 112(E05S02). https://doi.org/10.1029/2005JE002605
- 34.Méndez R, Flores J, Castelló E, Viqueira JRR (2018) New distributed virtual TV set architecture for a synergistic operation of sensors and improved interaction between real and virtual worlds. Multimed Tools Appl, pp 18999–19025
- 35.Mine M (1995) Virtual environment interaction techniques. Tech. rep. https://doi.org/10.1.1.38.1750
- 36.Moerland E, Deinert S, Daoud F, Dornwald J, Nagel B (2016) Collaborative aircraft design using an integrated and distributed multidisciplinary product development process. In: 30th Congress of the international council for aeronautical sciences
- 37.Neary L, Daerden F (2011) Modelling the martian atmosphere using the GEM-Mars GCM. In: Mars atmosphere: modelling and observation, pp 68–69
- 40.Roberts D, Wolff R (2004) Controlling consistency within collaborative virtual environments. In: Proceedings - eighth IEEE international symposium on distributed simulation and real-time applications, DS-RT. https://doi.org/10.1109/DS-RT.2004.13, vol 2004, pp 46–51
- 42.Russell DM, Poltrock S, Greif I, Olson JS, Olson GM (2016) What did we get right and wrong about CSCW during the past 30 years?. In: Proceedings of the 19th ACM conference on computer supported cooperative work and social computing companion. ACM, pp 201–203
- 43.Seu R, Phillips RJ, Biccari D, Orosei R, Masdea A, Picardi G, Safaeinili A, Campbell BA, Plaut JJ, Marinangeli L et al (2007) SHARAD sounding radar on the mars Reconnaissance Orbiter. J Geophys Res Planets, vol 112(E05S05). https://doi.org/10.1029/2006JE002745
- 44.Shames P, Skipper J (2006) Toward a framework for modeling space systems architectures. In: AIAA 9th international conference on space operations (SpaceOps), Rome, Italy
- 46.Sommerville I (2004) Software engineering (7th edition), p 784
- 47.Stindt D, Nuss C, Bensch S, Dirr M, Tuma A (2014) An environmental management information system for closing knowledge gaps in corporate sustainable decision-making. In: Proceedings of the international conference on information systems
- 48.Tanenbaum AS, Van Steen M (2017) Distributed systems, 3rd ed., distributedsystems.net
- 49.Vandaele A, Kruglanski M, De Mazière M (2006) Modeling and retrieval of atmospheric spectra using ASIMUT. In: Atmospheric Science Conference, vol 628
- 51.Westerteiger R, Gerndt A, Hamann B (2012) Spherical terrain rendering using the hierarchical HEALPix grid. In: OASIcs-OpenAccess series in informatics, Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, vol 27
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.