
1 Introduction

This paper presents a novel Information Visualization (IVIS) technique, the MapCube, which supports visualizing geographic information spaces within the confines of the limited screen real estate found on mobile devices. Using this technique, any kind of event associated with on- or off-screen locations can be visualized seamlessly, with minimal disruption to a main task that currently requires a geographic map on screen.

This paper addresses this IVIS problem in the light of a Location Based Service (LBS) use case supporting real-time ridesharing or dynamic ridesharing, defined by [1] as a “Paratransit-like service [that] allows travelers to be joined in real-time to provide taxi-like responsiveness”, which will subsequently simply be called ridesharing.

To capture this use case more precisely, the term Focus Task (FT) is helpful. The term encapsulates the two meanings of focus as related to a user task. Firstly, a FT is a task that the user is currently and continuously performing, on which the user is focused. Secondly, to perform the task, the focus view, i.e., the display the user requires for performing the task, is focused on certain information that is pertinent to the task. The goal of this paper is to address situations where the focus view implies that a limited geographic area is visible and displayed on screen, whereas surrounding areas remain off-screen. For the ridesharing use case, the FT is following navigation instructions on a car navigation display.

Ridesharing entails re-routing the driver to pick up a passenger en route. The intended use case aims at involving drivers in the decision making process in order to increase the number of matched rides [2], by allowing the driver to perceive possible benefits while identifying passengers that are possible ridesharing matches, going beyond established ride-matching algorithms without user feedback, such as [3]. This includes support for considering previously neglected Ride Match Criteria (RMC) during such decision making. Deciding whether or not to pick up a passenger is a second task that becomes relevant in addition to the FT. Additional RMC now become relevant in the context of this decision, e.g., the location of a Point Of Interest (POI) relative to the detour route towards the pick-up point of the potential passenger; these are all off-screen but at the same time highly decision relevant.

The RMC relevant to the use case are similar to the objects identified by [4] in relation to elementary actions in the context of an LBS, i.e., inherently spatial Geographic Information Objects (GIOs) with geometries including points, polylines, and polygons, as well as topological relationships among such objects. These kinds of GIOs and the relationships among them will be called Complex Geographic Context (CGC), to distinguish them from simple points, as detailed in Sect. 3. Displaying this CGC is a requirement for the IVIS technique to be employed. In this way it becomes possible, e.g., to present the driver with certain POIs such as a gas station, parks, and similar, relative to the current position, the passenger's position, and the connecting route of the detour. The information pertinent to this second task is to be displayed on-screen without interrupting the FT, which defines a further requirement for the IVIS technique to be employed.

The following section will review existing IVIS techniques that are candidates for a solution.

2 Related Work

This section introduces IVIS techniques that directly address the challenges of the ridesharing use case.

For overview and detail visualization techniques, e.g., as introduced by [5], the two views for focus and context are spatially separated. This technique is useful in many situations; for mobile devices, though, the restricted space disallows Context Areas (CAs) of larger extent. An issue of a more general nature that stems from the spatial separation of focus and context is that it is more difficult for the user to develop an integrated mental model [6]. As pointed out by [7], humans remember layout in space with respect to their vantage point, using egocentric coordinates. The fact that focus and context are actually two different views is problematic in this respect, as the frame of reference established while performing a task within the Focus Area (FA) is based on egocentric coordinates. Also, visually linking an object inside the FA to objects outside of it that are only visible in the overview is not directly possible, but is required to show the route towards the passenger.

Using zooming and panning to control the map content is an example of temporal separation of focus and context within the information space. Focus and context are not integrated for such techniques, and the temporal separation requires the user to memorize information when changing views. Ultimately, the inherent need to leave the focus view is not acceptable while the user is performing a FT such as the navigation of the ridesharing use case.

The techniques that are part of the focus and context category integrate both views more seamlessly by presenting the focus embedded into the context. Building on earlier work in [8], Furnas introduces generalized fisheye views with [9], a general framework for dealing with the space limitations that computer displays impose by treating the context differently from the focus, e.g., using distortions to reduce the context size.

[10] presented an application where a planar graph with cities as nodes is adapted based on the selected focus node. Non-linear magnification fields [11] have been introduced as a more abstract representation form for these distortions; they were further developed by [12] and applied to an existing geographical map rendering.

[13] introduces glue in addition to focus and context, extending the fisheye concept to achieve geographical map renderings suitable for mobile devices.

The Perspective Wall [14] is an early example of a focus and context view that can easily be applied to geographic map data, applying a visual transfer function [15] to the context, similar to a perspective projection. Later work, the Document Lens [16], extended this principle and used a truncated square pyramid as the projection surface in 3D to provide focus and context within a text document. What remains problematic for such views in general, when regarded with respect to the small screen real estate available on mobile devices, is that, as in the case of the pyramid sides, significant screen real estate is occupied at all times, hence also affecting the FT.

The final category of cue-based techniques relates to visual cues such as color hue and saturation used to highlight or de-emphasize some of the visualized objects, as well as cues that act as a proxy for the actual object, e.g., arrows within the FA that provide context by pointing at off-screen locations. A similar technique, introduced by [17], is able to describe more attributes of off-screen objects. The idea is developed further by a solution called Halo [18], where POIs at off-screen locations are surrounded by circles that are just large enough to reach into the visible FA. In this way the user can judge the distance and location of the POIs based on the arc position along the screen edge, as well as the curvature and size of the visible arc segments. It could be difficult, though, to look at the arc segments and determine the relationship of a POI to an off-screen street or highway, or to indicate the street itself.

In summary, overview and detail as well as zooming and panning techniques are not suitable due to the separation of focus and context, cue-based techniques lack expressiveness, and existing focus and context techniques deprive the FT of space. The following section describes the MapCube visualization technique that addresses these challenges.

Fig. 1. MapCube focus and context model [19].

3 MapCube

The MapCube Focus and Context Model (MFM), as depicted in Fig. 1, summarizes the kinds of GIO geometry the MapCube needs to be able to display and relates them to visualization as well as Location Context (LC) aspects of the ridesharing use case and LBS in general, giving examples of CGC. The term LC combines the notion of spatial distributions in IVIS with a specialization of context as defined by [20], emphasizing spatial aspects.

Region 1 at the center corresponds to the area immediately surrounding the current position of the user. It is the FA that is displayed on-screen, surrounded by the light gray area representing the CA that is off-screen. The Level Of Detail (LOD) for this region is appropriate for giving navigation instructions to the user. Instructions for both near and far destinations have to be supported by the MapCube conceptual model, as indicated by the arrows to the right of region 1. This means, in particular with respect to CAs, that very far destinations need to be supported, and consequently a CA with large extent within the geographic information space needs to be considered.

All of the following regions, region 2 to region 5, pertain to GIOs that are farther away from the current location, i.e., context objects inside the LC.

Region 2 groups GIOs that are independent of time. Besides the highway network as an example of polyline data, region 2 contains all the geometry types that must be visualized and gives some examples of the kind of context these can represent; it covers only static context, i.e., locations, paths, areas, or thematic attributes that do not change over time.

Region 3 introduces time, in the sense of an event, meaning that the state of a certain location changes, e.g., a possible ride-match candidate that is communicated to the driver.

Region 4 introduces the case that the location of a point changes, i.e., a moving object [21], which, e.g., also allows for cluster formation.

Finally, region 5 extends this notion of geometry change over time to polylines and areas. In light of the ridesharing use case, relevant examples are traffic jams.
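For illustration, the distinction between regions 2 to 5 can be expressed as a small data model. The following Java sketch is purely illustrative and not part of the MapCube prototype; all class, field, and method names (GeometryType, Gio, mfmRegion, etc.) are hypothetical.

```java
// Illustrative sketch of the GIO categories summarized by the MFM regions 2-5.
// All names are hypothetical; they merely mirror the conceptual model.
public final class GioModel {

    /** Geometry kinds the MapCube must be able to display (region 2). */
    public enum GeometryType { POINT, POLYLINE, POLYGON }

    /** A Geographic Information Object inside the Location Context. */
    public static final class Gio {
        final GeometryType type;
        final boolean stateChangesOverTime;     // region 3: an event at a fixed location
        final boolean geometryChangesOverTime;  // regions 4 and 5: moving points, changing lines/areas

        Gio(GeometryType type, boolean stateChanges, boolean geometryChanges) {
            this.type = type;
            this.stateChangesOverTime = stateChanges;
            this.geometryChangesOverTime = geometryChanges;
        }

        /** Maps a GIO to the MFM region (2-5) it belongs to. */
        int mfmRegion() {
            if (geometryChangesOverTime) {
                return type == GeometryType.POINT ? 4 : 5;
            }
            return stateChangesOverTime ? 3 : 2;
        }
    }

    public static void main(String[] args) {
        Gio highway    = new Gio(GeometryType.POLYLINE, false, false); // static network, region 2
        Gio rideMatch  = new Gio(GeometryType.POINT, true, false);     // event at a location, region 3
        Gio passenger  = new Gio(GeometryType.POINT, false, true);     // moving object, region 4
        Gio trafficJam = new Gio(GeometryType.POLYLINE, false, true);  // changing extent, region 5
        System.out.printf("regions: %d %d %d %d%n",
                highway.mfmRegion(), rideMatch.mfmRegion(),
                passenger.mfmRegion(), trafficJam.mfmRegion());
    }
}
```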

3.1 Visualization Constraints

The following paragraphs bring the requirements stemming from the problem statement of the ridesharing use case into the domain of constructing a visual form, as described in [22] as part of the reference model of IVIS, or IVIS pipeline. Derived from these requirements, the following lists the visualization constraints that the MapCube model has to comply with:

  • Not changing the on-screen size or position of the focus view associated with the FT map and User Interface (UI)

  • Not changing size or position of the already displayed objects

  • Not removing already displayed objects

With respect to the first point, this means in particular that the FA required for the FT must still occupy the complete screen. The information necessary for the FT remains visible and unchanged inside the FA. The mental model established within the FA, in terms of egocentric coordinates, is also maintained as a frame of reference for new information from the LC. The maintained FA includes, for example, the road network and the highlighted road to follow. With these constraints fulfilled, a seamless transition from the focus view to a view including the LC is possible without disturbing the FT.
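As a minimal sketch, the three constraints can be read as an invariant over the set of objects already displayed in the focus view: after switching to a view that includes the LC, every previously displayed object must still be present with identical on-screen position and size. The names used below (ConstraintCheck, Box, constraintsHold) are hypothetical and do not reflect the prototype's API.

```java
import java.util.Map;

// Hypothetical invariant check for the three visualization constraints: when
// switching from the focus-only view to a view including the LC, no already
// displayed object may be resized, moved, or removed. Names are illustrative only.
public final class ConstraintCheck {

    /** Screen-space bounding box of a displayed object (x, y, width, height). */
    public record Box(double x, double y, double w, double h) {}

    /**
     * Returns true if every object displayed before the mode switch is still
     * displayed afterwards with identical position and size.
     */
    public static boolean constraintsHold(Map<String, Box> before, Map<String, Box> after) {
        for (Map.Entry<String, Box> e : before.entrySet()) {
            Box b = after.get(e.getKey());
            if (b == null || !b.equals(e.getValue())) {
                return false; // object removed, moved, or resized
            }
        }
        return true; // newly added context objects in 'after' are allowed
    }
}
```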

Fig. 2. MapCube concept illustration.

3.2 Real World Analogy

This section is related to the visual form of the IVIS pipeline. Building on the visualization constraints described in the previous section, it provides an overview of how a visual form is created that satisfies these constraints, introducing a real world analogy that is the basis for the subsequent visual mappings, visual structures, and views [22]. The MapCube conceptual model starts from the premise of a simple solution idea: to start from the physical or real world analogy of a geographic paper map and a cardboard box. The first step is simply gluing a cross-shaped section of the map onto the box, as shown in Fig. 2a and b. The map portion on the display of the phone is the lid of the box, the area marked f for focus, at the center of the marked CAs to the west, north, east, and south. Taking the lid off and looking into the box allows for looking at the inner walls. Of course, with the physical analogy, only the cardboard is visible, as the map was glued to the outside. Imagining the map paper and cardboard to be translucent plastic instead makes the map visible on the inside of the box as well; the map is visible from behind. This is the geographic map context that is to be shown in addition to the already visible map portion on the lid. Looking into the box in such a way that all four walls are equally visible, as depicted in Fig. 2c, presents a view similar to what is intended for the MapCube conceptual model. In the case of the MapCube the lid is not taken off; instead, the visible map portion is rendered partially transparent to keep the FA visible and display the CA at the same time, visible behind the FA, as depicted in the illustration in Fig. 2c. This visualization strategy represents an extension of substrate folding [22]: the particular use of the 3D cube metaphor, perspective projection, and transparency is termed bending, as previously off-screen content is bent back into view.
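The geometric core of this analogy can be sketched as a mapping from 2D map coordinates to positions on the lid or on the inner cube walls. The following Java sketch is a minimal illustration under assumed conventions (a square FA centered on the user, z pointing out of the screen, a linear scale along the walls); it is not the prototype's actual projection code.

```java
// Minimal sketch of the "bending" idea behind the MapCube analogy: map
// coordinates inside the focus area stay on the (transparent) lid of the cube,
// while off-screen coordinates are folded down the corresponding inner wall.
// The names, coordinate convention, and scale factor are assumptions, not
// taken from the actual prototype.
public final class BendingMapping {

    public record Vec3(double x, double y, double z) {}

    /**
     * Maps a map coordinate (mx, my), given relative to the focus centre with
     * east/north positive, to a 3D position on the lid or a cube wall.
     *
     * @param half  half-extent of the focus area in map units
     * @param depth depth of the cube in 3D units
     * @param scale map units per 3D unit along the walls
     */
    public static Vec3 bend(double mx, double my, double half, double depth, double scale) {
        if (Math.abs(mx) <= half && Math.abs(my) <= half) {
            return new Vec3(mx, my, 0);      // inside the focus area: stays on the lid
        }
        double overX = Math.abs(mx) - half;  // overshoot beyond the east/west edge
        double overY = Math.abs(my) - half;  // overshoot beyond the north/south edge
        if (overX >= overY) {                // fold onto the east or west wall
            double z = -Math.min(overX / scale, depth);
            double y = Math.max(-half, Math.min(half, my)); // clamp corner regions not covered by the cross
            return new Vec3(Math.signum(mx) * half, y, z);
        } else {                             // fold onto the north or south wall
            double z = -Math.min(overY / scale, depth);
            double x = Math.max(-half, Math.min(half, mx));
            return new Vec3(x, Math.signum(my) * half, z);
        }
    }
}
```

Under these assumptions, a far-off pick-up point east of the FA would map to a position partway down the east wall, so it stays visible behind the transparent lid.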

A number of Perceptibility Problems (PPs) have been identified, of which the most pressing are that a coherent perception of 3D space is lost, and composite fusing [23], i.e., ambiguity in associating a shape with foreground or background. In a prototypical implementation, the Evaluation Prototype (EP), these have been addressed, most importantly by modelling the GIOs inside the CA as actual 3D objects that are part of the 3D background. These objects are placed inside the cube, attached to the cube faces. Figure 3 shows the prototypical implementation of this conceptual model of the MapCube. Figure 3a shows the Focus Only Mode (FOM), with a red route in the upper right corner leaving the screen towards the destination that remains invisible off-screen. Figure 3b shows the Focus and Context Mode (FCM), where the FA is transparent to reveal how the route continues as a 3D tube towards the destination inside the CA. Here, the GIOs are modelled as actual 3D objects to be able to take advantage of 3D depth cues such as shape-from-shading [23], invoking preattentive visual processing [24, 25] that aids distinguishing between focus objects in the foreground and context objects that are part of the three-dimensional background.
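The transition between FOM and FCM can be thought of as an animated change of the focus plane's opacity, so the 3D context behind it is revealed without moving or removing any focus content. The sketch below is a hedged illustration; the alpha values and names are assumptions, not taken from the EP.

```java
// Hedged sketch of the transition between Focus Only Mode (FOM) and Focus and
// Context Mode (FCM): the focus plane's opacity is interpolated over time so
// the context inside the cube becomes visible while the focus map stays in place.
// Class, method names, and constants are illustrative; they do not mirror the EP's API.
public final class ModeTransition {

    private static final double FOM_ALPHA = 1.0;  // focus map fully opaque
    private static final double FCM_ALPHA = 0.4;  // assumed partial transparency in FCM

    /** Returns the focus-plane alpha at normalized time t in [0, 1] of the transition. */
    public static double focusAlpha(double t, boolean toContextMode) {
        double clamped = Math.max(0.0, Math.min(1.0, t));
        double from = toContextMode ? FOM_ALPHA : FCM_ALPHA;
        double to   = toContextMode ? FCM_ALPHA : FOM_ALPHA;
        return from + (to - from) * clamped;      // linear blend between the two modes
    }
}
```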

Fig. 3. MapCube display rendering.

4 Evaluation

Several aspects of the MapCube technique are relevant with respect to evaluation. Before using the technique in a real-life setting, for example, the most important aspect is safety, including the risk of distraction introduced by the MapCube technique. This aspect is left as future work because the feasibility of the prototypical implementation (and the satisfaction of basic usability requirements in light of the PPs) needs to be addressed first. Hence, the goal of the evaluation presented in this section is to investigate the feasibility and usability of the MapCube IVIS technique, which is intended to satisfy the IVIS requirements identified for the use case of real-time ridesharing.

A summative evaluation employing an empirical, quantitative method, by means of a user study, was performed to compare the MapCube EP with a best-of-class technique that satisfies some of the constraints of the use case: the Halo technique. This evaluation shows the feasibility of the solution as well as the effectiveness of the solutions implemented to address the PPs.

4.1 User Study

Following the approach of [26], usability metrics are defined that allow for comparing the two techniques. These metrics formalize the measurement of the quality of the techniques. The summative evaluation is concerned with two usability dimensions [27]. First, effectiveness (i.e., “is the user able to complete the task” [28]), which is measured by means of the usability metric error rate. Second, efficiency (i.e., “the amount of effort required to complete the task” [28]), which is measured by means of the usability metric time-on-task. These two dependent variables are determined for two groups, created by variation of the independent variable “visualization technique”.

To acquire these measurements, a user task is defined that allows for a comparison of the two techniques and at the same time reveals whether the perceptibility issues of the MapCube technique have been addressed successfully. Six hotels are positioned at off-screen locations, and the user task is to count the POIs located in the vicinity of each hotel. An incorrect number of POIs is counted as an error for the respective hotel, yielding the error rate as the average across all hotels and participants. The evaluation framework measures the time-on-task of each participant for all six hotels and computes the average across all participants. For Halo, both the hotels and the POIs are represented using arc segments. For the MapCube they are represented as 3D objects embedded into the geographical map rendering as described before (i.e., attached to the cube faces).
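A minimal sketch of how these two metrics could be computed from raw study data is given below. It assumes that the reported error rate is the mean number of hotels per participant with an incorrect POI count and that time-on-task is recorded once per participant; the data layout and names are hypothetical.

```java
import java.util.List;

// Minimal sketch of the two usability metrics described above, computed from
// hypothetical raw study data. The record layout and names are assumptions and
// do not reflect the evaluation framework's actual data structures.
public final class UsabilityMetrics {

    /** One participant's result: per-hotel POI counts given vs. correct, and total time. */
    public record ParticipantResult(int[] countedPois, int[] correctPois, double timeOnTask) {}

    /** Mean number of erroneous hotels per participant (effectiveness metric). */
    public static double errorRate(List<ParticipantResult> results) {
        double totalErrors = 0;
        for (ParticipantResult r : results) {
            for (int i = 0; i < r.correctPois().length; i++) {
                if (r.countedPois()[i] != r.correctPois()[i]) {
                    totalErrors++;                 // wrong count for this hotel counts as one error
                }
            }
        }
        return totalErrors / results.size();
    }

    /** Mean time-on-task across participants (efficiency metric). */
    public static double meanTimeOnTask(List<ParticipantResult> results) {
        return results.stream().mapToDouble(ParticipantResult::timeOnTask).average().orElse(0);
    }
}
```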

Every participant performs the user task with both visualization techniques. To avoid carryover effects, a different test data set is used for each technique. The measurements are averaged regardless of the data set. The test data comprises real locations: the POIs are tourist attractions in the vicinity of hotels selected in the Cologne area in Germany.

4.2 Results

The evaluation was performed with 10 participants. Each of them performed the user task with both visualization techniques, i.e., overall 20 experiments were performed, revealing measurements for each of the two IVIS techniques’ usability metrics in the dimensions of error rate and time-on-task.

The MapCube error rate is 2.1 (standard deviation 1.5), which is half the Halo error rate of 4.2 (standard deviation 1.4). This means the MapCube performs better in terms of effectiveness. The MapCube average time-on-task is 52.3 (standard deviation 7.78), roughly half that of the Halo average time-on-task of 93.8 (standard deviation 22.26), i.e., the MapCube is also better in terms of efficiency.

For both measures, the difference in population means is due to the varied independent variable “visualization technique”. This has been shown by a one-way Analysis of Variance (ANOVA). For the underlying measurements, a statistically significant effect of the visualization technique can be reported with 95 % confidence (critical F-quantile \(F_{1,18} = 4.41\); degrees of freedom for 2 groups and 2 measurements for each of the 10 participants): for the error rate the F-statistic is \(F(1, 18) = 7.85 > F_{1,18}\), and for the time-on-task it is \(F(1, 18) = 16.5 > F_{1,18}\).
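For reference, a one-way ANOVA with two groups reduces to the ratio of the between-group to the within-group mean squares. The Java sketch below shows this computation for the stated group sizes (two groups of 10 measurements each, hence df = (1, 18)); it is illustrative only and not the analysis code used for the study.

```java
// Illustrative one-way ANOVA F-statistic for two groups (k = 2), as used above:
// F = MS_between / MS_within. With n_A = n_B = 10, the degrees of freedom are
// (k - 1, N - k) = (1, 18), to be compared against F_{1,18} = 4.41 at alpha = 0.05.
public final class OneWayAnova {

    public static double fStatistic(double[] groupA, double[] groupB) {
        double meanA = mean(groupA), meanB = mean(groupB);
        int nA = groupA.length, nB = groupB.length;
        double grand = (sum(groupA) + sum(groupB)) / (nA + nB);

        // Between-group sum of squares, 1 degree of freedom for two groups.
        double ssBetween = nA * Math.pow(meanA - grand, 2) + nB * Math.pow(meanB - grand, 2);

        // Within-group sum of squares, nA + nB - 2 degrees of freedom.
        double ssWithin = 0;
        for (double x : groupA) ssWithin += Math.pow(x - meanA, 2);
        for (double x : groupB) ssWithin += Math.pow(x - meanB, 2);

        double msBetween = ssBetween / 1.0;
        double msWithin = ssWithin / (nA + nB - 2);
        return msBetween / msWithin;
    }

    private static double mean(double[] xs) { return sum(xs) / xs.length; }

    private static double sum(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s;
    }
}
```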

5 Summary and Outlook

This paper set out to investigate the feasibility of an IVIS technique that could satisfy the requirements identified for the use case of real-time ridesharing. The presented MapCube model and IVIS technique are the proposed solution, and the MapCube prototype was built to evaluate it.

The MapCube proved to be usable and outperformed a competing technique that is considered efficient. The prototype allows screen reuse as intended and can be used effectively to display geographic map portions that would otherwise remain invisible off-screen or be merely hinted at. In particular, the types of geometry that cannot easily be hinted at (such as, e.g., lines and shapes, as well as relationships between objects) are supported by the MapCube. The additional map portions are presented within an unchanged frame of reference in egocentric coordinates that is in line with the mental model the user developed while pursuing the FT of following navigation instructions. Because of this, visually linking to a GIO that would otherwise remain invisible off-screen becomes possible (e.g., by drawing a line along the route to that object within the geographic map rendering). Using transparency to transition from the FOM (which is used while pursuing the FT) to the FCM makes the technique unobtrusive and suitable to display event-related off-screen CGC that is only temporarily visible, without interrupting the FT. As described before, the particular fashion of substrate folding termed bending allows perceiving spatial relationships, shapes, map symbols, and text for CGC that would otherwise be off-screen. Bending a large off-screen information space back into view directly addresses the use case problem of being able to deal with far-scattered CGC.

Successfully ameliorating the PPs brought about by screen reuse, and doing so by employing 3D depth cues, makes a good case for choosing a 3D metaphor that enables the use of these cues. The approach is in principle applicable to any kind of focus with context of large 2D extent that needs to be displayed within minimal screen real estate such as found on mobile devices. With the MapCube, the additional constraint of having a FT can also be satisfied.

As pointed out with respect to the evaluation, the investigation of in-situ safety considerations of the car-related usage scenario is mandatory before it is possible to apply the MapCube in a real-life ridesharing system. With respect to future work, it is intended to encapsulate the MapCube prototype’s software by means of a software wrapper and application user interface that will allow for a seamless plug-and-play integration into visual user interface frameworks as well as graphical and web-based user-interface implementation technologies. This will require an operationalization of the IVIS reference model into a middleware software technology that can support the MapCube Focus and Context Model from a software engineering point of view and allow for seamless and cost-effective plug-and-play integration with current state-of-the-art 2D and 3D display technologies.

Furthermore, the goal of our future work is the quality assessment and improvement of usability aspects of the existing prototypical MapCube implementation (based on Java, JOGL) and of improved versions of the prototypical software implementation. Improvements are to be identified and evaluated with appropriate academic rigor, employing, e.g., further user studies. Especially, an a priori study is to be devised (and described) to reveal existing usability problems with respect to learnability, effectiveness, efficiency, satisfaction, as well as the overall utility of the technique in mobile application contexts. It will answer questions like: Is the technique indeed helpful? Can pertinent information be gathered, even at a glance?

To this end, several test data sets will be devised and considered for all usability evaluations. The identified problems are to be addressed by means of source code changes to the Java program of the prototypical implementation or a functionally equivalent implementation using WebGL or X3Dom. A related aspect that will be investigated independently is the impact of the technique on the FT (e.g., following car navigation instructions) and the usage scenario (e.g., driving a car). This will, for example, answer the question whether the technique can cause confusion during time-sensitive critical situations, e.g., while driving.