1 Introduction

User interfaces (UIs) have always been constrained by the properties and performance of the information and communication technologies available at the time, and by input and display technologies in particular. In this paper, we discuss the relationship between emerging displays and novel computer use, and especially the potential impact of mid-air displays on ambient media systems and their user interfaces.

The interactive, “immaterial” walk-through FogScreen is a patented mid-air display. Viewers can reach through the mid-air screen or even cross through it to the other side. This opens up many new possibilities for engaging display and interaction in advertising, gaming, and other applications.

FogScreens have been installed at many theme parks, museums, and night clubs, and have been used at special events such as festivals, conventions, and trade shows. Although they are still somewhat of a novelty item, we discuss their medium-to-long-term potential as a feasible and intriguing commonplace display alternative for many situations and applications.

New display form factors are changing the ways people work and live. For example, solid-state lighting solutions are increasingly used for illuminating performances, as in the 2008 Olympic opening ceremony, or for bringing more energy efficiency and lighting flexibility to homes and public spaces. Multi-monitor and projection environments are increasingly common, and it is safe to predict that custom lighting and flexible displays will become regular interior design elements. If every surface in our homes can potentially be a display, mid-air displays can go even further: they can create computer-generated imagery in open spaces, inviting people to gather around them, or display important contextual information on demand in doorways and hallways (cf. Fig. 1). They can also act as reach-through information and augmented reality displays, e.g., presenting an inventory and pointing out hidden items in a fully packed refrigerator, or serving as a spatially immersive yet translucent and unobtrusive secondary display embedded into a desk, as depicted in Fig. 2.

Fig. 1

A FogScreen providing contextual walk-through information at an airport

Fig. 2

The FogScreen could serve as a secondary display on a desktop environment, enabling a large virtual desktop, while being translucent and penetrable, thus being suitable for offices

Such a mid-air screen is unbreakable, as it recovers automatically and immediately when penetrated. It also stays clean and hygienic, and enables dual-sided displays where the content on the two sides does not interfere.

This paper discusses in particular the impact that mid-air displays may have on novel user interfaces. One of its major contributions is an overview of our mid-air UI experiments. We aim to give the big picture here; more details of the experiments can be found in the referenced papers. We first discuss emerging display and user interface technologies in general in Section 2. Section 3 describes the mid-air screen technology, and Section 4 presents and discusses a range of user interfaces we implemented for mid-air screens, each exhibiting a different interface concept applied to basic game-style tasks. From this series of user interface experiments we derive general design guidelines for mid-air display user interfaces, which we present and discuss in Section 5. Conclusions and an outlook on future work and potential for UI design are presented in Section 6.

2 Future user interface and display technologies

What does Fig. 3 depict – a computer? No, it shows only today’s most commonplace computer peripherals for input and output (I/O). The CPU itself can be small and hidden from view, embedded into the monitor, or placed wherever convenient. While this is for the time being the accepted iconic representation of a computer, new I/O peripherals from the emerging era of ubiquitous and ambient computing, such as multi-touch displays or screens with embedded webcams, are changing the public’s common conception of what constitutes a computer.

Fig. 3

Present computer I/O peripherals

As early mainframe computers were extremely expensive, the main concern was to optimize the utilization of computer time. Operators used punch-card input and line-printer output, and human specialists had to adapt to the computer’s batch-mode operation. Later, command-line interfaces and alphanumeric monitors emerged, along with time-sharing and networking, making the operation of computers more interactive. Graphical user interfaces (GUIs) with windows, icons, menus, and pointing (a.k.a. WIMP) became the third generation of user interfaces, revolutionizing office and home office desktop environments.

The WIMP GUI makes good use of the human visual channel for output and some use of the tactile channel for input. GUIs made computers sufficiently easy for everyone to use, and PCs became pervasive. However, the mouse is an invention from the 1960s, and not intended, nor very practical, for mobile computing. The keyboard was designed in the 19th century, with computers not even on the horizon. It is a clumsy input device for most tasks apart from text input, and even for that there are situations where alternative input methods, such as speech recognition, might be considered more appropriate and user-friendly, once they reach a certain level of robustness.

The fundamental properties of computers have changed radically over the years, but most commercial human-computer interaction (HCI) technologies have not fundamentally changed for over two decades. Mobile devices and the gradual emergence of ubiquitous computing make such changes necessary. Traditional UI techniques such as the desktop metaphor do not scale well to diverse form factors, locations, and uses of pervasive computing and ambient media. The user interfaces must evolve with the changing context of computing.

Weiser’s original vision of ubiquitous computing [19] talks about the disappearing computer. As processors become increasingly low-cost, we are being surrounded by numerous embedded processors, instead of using only a single processor.

Slightly adapting one of Weiser’s famous projections on the major trends in computing [20], we can observe and extrapolate a similar trend regarding the number of displays per user: We are witnessing a development towards “one-person-many-displays” environments, where the displays are not necessarily linked to specific computers anymore. Displays may start to break free from their association with a particular controlling CPU, and, as predicted by Weiser, “invisible” computers without displays have started to permeate physical environments.

One element in Weiser’s vision was the emergence of displays in various form factors, for example hand-held, tablet-sized, wall-sized, or a combination of them. Displays would be everywhere or nowhere, depending on the application, context or environment. An explicit user interface may not be needed at all, or it may adapt to and make use of the environment and available displays.

Emerging display technologies provide many new features, which in turn trigger and influence novel types of user interfaces. Today we employ many embedded processors, e.g. in cars and home appliances, often without knowing it, and most of them do not have any dedicated displays. There can also be many displays per user in special environments such as intelligent rooms, and the future might bring numerous displays of various types surrounding us at the office, at home, and in public spaces.

Today’s pocket-sized devices, which most people already carry, have only a relatively small dedicated display, but could additionally connect wirelessly to any number of available, surrounding displays of varying sizes, as needed. Thin solar-powered, wireless OLED displays, or possibly pico-projectors could be scattered around in any number and formation as the user wishes, like digital post-it notes or wallpaper. Such ubiquitous display environments may become available in public and private places if display technology becomes sufficiently low-cost.

Indeed, displays are developing rapidly, being produced at lower cost and/or providing advanced properties never possible before. They are being employed across a wider range of applications, and many contemporary guidelines for device use and user interface design will consequently change. The price of some displays may ultimately drop even to the point where they are as low-cost as printed paper. Many visual objects such as price tags, magazines, or large-scale outdoor advertisements are nowadays usually painted or printed, mostly because manufacturing is cheaper that way, accepting the inherent tradeoff of non-alterable imagery.

Ultra-affordable displays would change the rules of the game. Display tapestries could replace paint some day, and low-cost immersive virtual reality rooms in people’s homes could become reality. Displays could be embedded into desks, doors, clothing, streets and sidewalks, traffic signs, or paper and napkins, and thus enable pervasive, personalized information and messages. Ultimately, displays may become so cheap as to be disposable and form ad-hoc wireless communication structures with equally disposable RFID tags or other sensor and computing particles. Embedded customized architectural lighting on walls, floors, ceilings, or furniture may soon enter our homes and herald a new era of change-by-the-mood living.

We have seen a visual explosion over the last 30 years. Many commonplace things around us, such as magazines, TVs, indoor and outdoor advertisements, lighting, projectors, computers, game consoles, 3D graphics, and videos, feature rich and colorful visuals and have become more personalized and expressive. The advent and maturing of displays and digital technologies will bring our ambient visual scenery to unprecedented levels.

The world’s display market has been growing solidly even in today’s difficult economic climate, and is currently approaching a value of $100 billion. Emerging display technologies offer advanced features and can sometimes compete with lower-cost traditional displays if the added value or demand is sufficiently large. In the future, we will probably see many types, forms, sizes, and technologies of displays used for a variety of applications. There will hardly be a universal display type for all possible purposes. Instead, we may use the most suitable display at hand for the given task. In the rest of this paper, we look at a particular family of novel interactive displays: immaterial mid-air screens.

3 Mid-air display technology

We are interested in a class of “walk-through” displays that look and feel immaterial to the viewer, enabling the viewer to reach or walk through them. Various stereoscopic, autostereoscopic, volumetric, holographic, and special effect screens [1] can give an illusion of objects hovering in mid-air, but they are not truly walk-through displays. There are numerous walk-through projection screens using water, smoke, fog, or cryo-fog; the earliest example is the Ornamental fountain [10], dating back to the end of the 19th century. More recently, water screens in installations such as the Jeep Waterfall, the Aquatique Show, and Disney’s Fantasmic spray sheets of freely flowing or pressurized water from nozzles. The magnitude and wetness of these screens make them impractical for indoor, walk-through, or small-scale desktop applications, although many of them look spectacular when viewed from afar and on-axis in the dark. With the advent of dry, high-image-quality FogScreens™, walk-through displays are becoming suitable for wider exploitation.

The FogScreen™ [6, 15] is a patented technology that can form a high-quality projected mid-air image on a flat “immaterial” image plane consisting of flowing water particles so thinly dispersed as to form dry fog (see Figs. 4 and 5). A surrounding non-turbulent airflow protects the injected thin particle flow from turbulence. As the inner fog flow forms a thin fog plane, it enables high-quality projections and a dry walk-through experience.

Fig. 4

Three-dimensional hand bones hovering in thin air

Fig. 5

The FogScreen creates a mid-air, walk-through image

The mid-air FogScreen has some advantages (as well as disadvantages) compared to most other displays. Unlike other screens, it does not restrict the viewer from reaching or walking through the screen, which continuously and immediately recovers its flat-screen planar shape when penetrated. The screen feels dry to the touch, thus further enhancing the immaterial effect. The mid-air display is visually intriguing and the screen is unbreakable.

The FogScreen requires rear-projection, since the vaporized water particles primarily scatter the light through the screen, rather than reflecting it. The screen can be made opaque or nearly transparent, with only bright image areas becoming visible. A dark backdrop is recommended for the best effect.

The screen can also work in a dual-sided fashion so that different content can be projected onto either side of the screen, without any blending if the lighting is carefully controlled. Thus, one side of the screen could say “welcome” and the other side “goodbye”. The opposing viewers see their side of the screen but also each other through it, and can even walk through it.

The resolution of the resulting image is currently not as high as with traditional screens, but it works well for most applications in fields such as advertising and entertainment, and, as the screen quality keeps improving, it is progressing towards supporting detailed information display. The resolution degrades significantly if the viewing or projection angles are very oblique [14], as the FogScreen image plane has a thickness of about 1 cm and adjacent pixels blend with each other from such vantage points. The usable screen height is typically 1–2 meters. Brightness also decreases at oblique viewing or projection angles [14], which is a problem particularly for virtual reality setups, where the viewer can freely move her vantage point.

When the user stands close to the screen, the image quality is reduced, especially towards the sides of the screen as neighboring pixels blend together there. The effect is not so pronounced on smaller screen sizes. According to the manufacturer, there are improvements underway to make the flow more laminar, which would consequently improve image quality.

Some special attention should be given to visual content design and screen installation. A few simple visual design guidelines are listed in [17].

Interaction with mid-air 2D or 3D graphics objects can be implemented with suitable tracking and sensing technologies, so that the user can touch the objects directly by hand or with a hand-held pointer. An essential component of the interaction is tracking the viewer’s hand. We have used several tracking systems to enable 2D and 3D interaction with the FogScreen [5]. All 3D interfaces used an optical tracker [21] that could track the 3D position of a hand-held infrared LED around the screen and thus simulate direct hand-based interaction.
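The referenced papers do not spell out the coordinate handling, but the core mapping is straightforward. As a minimal sketch, assume the optical tracker reports LED positions in its own frame, and a calibrated rigid transform (R, t) maps them into a screen frame whose z = 0 plane coincides with the fog plane; all names and values below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical calibration result: rigid transform from tracker coordinates
# to a screen frame in which the fog plane is z = 0 (units in meters).
R = np.eye(3)                    # placeholder rotation from calibration
t = np.array([0.0, 0.0, -1.5])   # placeholder translation

def tracker_to_screen(p_tracker):
    """Map a raw tracked LED position into the screen coordinate frame."""
    return R @ np.asarray(p_tracker, dtype=float) + t

def signed_depth(p_screen):
    """Signed distance from the fog plane; the sign tells which side of
    the screen the hand is on, which a solid display could not allow."""
    return p_screen[2]

p = tracker_to_screen([0.2, 1.1, 1.4])   # one sample from the tracker
print(p, signed_depth(p))
```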

FogScreen technology is regularly used worldwide for visual effects in a variety of venues, events, and trade shows – either on stage or as a captivating experience for roaming audiences. The FogScreen devices are currently available as a fixed 2-meter-wide (100” diagonal) projection screen, or as a linkable 1-meter-wide (50” diagonal) projection screen, several of which can be combined into wider screens.

4 Mid-air user interfaces

Mid-air displays are a fundamentally new concept for the general public and also for most audiovisual professionals [16]. Using such displays to their true potential is non-trivial, especially if designers are trained in more conventional audiovisual technology. If an immaterial display is used only as an ordinary projection screen, e.g., to view movies, its essence is wasted. In this section, we present experimental evidence of successful and not-so-successful user interfaces we developed and tested with FogScreens, in order to help future interface designers exploit the advantages and true interaction potential of this novel display type.

Based on our initial experiences designing interfaces for large immaterial displays, we would like to emphasize two general interaction guidelines upfront, which repeatedly played to the strengths of this display type: direct screen-based manipulation, and multi-user collaboration.

Direct screen-based manipulation

Upon seeing an immaterial display, users’ first reaction is generally to reach out and touch the image and try to play with it. This is in line with similar observations regarding enclosed volumetric displays [7], but the effect is even more pronounced here since the immaterial nature of the display without any shielding obstruction directly invites probing touch. We take advantage of this instinctive response by allowing users to directly manipulate applications in an intuitive way with their hands. By removing indirection and abstraction in the interface, we can increase the sense of integration between the virtual objects and the physical environment. This ability to allow unencumbered users to directly place their hands inside the screen space to interact with objects is unique to immaterial displays.

Multi-user collaboration

Since immaterial displays are generally transparent, they are capable of maintaining unimpeded face-to-face communication among multiple people on both sides of the screen (Fig. 6). This is advantageous for large collaborations, making discussion, sharing of materials, and physical contact easier for people situated around the display, while a traditional display (even a transparent one) would fully separate users and impede collaboration. This principle puts a focus on support for multiple users in our interfaces.

Fig. 6

Two users interact with the application simultaneously from opposite sides of the screen without obstructing one another, maintaining personalized views of the shared workspace and face-to-face contact

Heeding these two principles, our interface explorations are directed towards multi-user 3D manipulation interfaces on immaterial displays beyond simply using tracked input devices as mouse-replacements for standard 2D GUI interaction. Our work is the first step in exploring direct manipulation interface possibilities for interactive immaterial display systems.

2D projection

Our simplest test interface was restricted to multi-user 2D interaction. We used a physically intuitive rigid-body simulation, where users controlled paddles to hit objects that would collide and bounce realistically in the space (cf. Fig. 8a). The simulation used 3D objects, but all force vectors were clamped to the z = 0 plane to remove the depth component from the interaction. We used our screen’s dual-sided rendering capability to show both sides of the virtual scene, and orthographic projection to keep cross-talk minimal. The dual-sidedness allowed more simultaneous users to interact with the scene without obstructing each other’s actions, letting users play from opposite sides of the screen in face-to-face competition (cf. Fig. 6).
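As an illustration of the clamping step (a hypothetical helper, not the authors’ code), the depth component of every force can simply be zeroed before it is fed to the physics engine:

```python
import numpy as np

def clamp_to_screen_plane(force):
    """Project a 3D force vector onto the z = 0 screen plane,
    removing the depth component from the interaction."""
    f = np.asarray(force, dtype=float).copy()
    f[2] = 0.0
    return f

def paddle_impulse(paddle_velocity, restitution=0.8):
    """Impulse a paddle imparts to an object, restricted to the plane.
    The restitution value is an arbitrary example parameter."""
    return restitution * clamp_to_screen_plane(paddle_velocity)

print(paddle_impulse([0.4, -0.2, 0.9]))   # the z component is discarded
```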

We confirmed our assumption that users would have no difficulty learning to use this system; they manipulated the objects with ease. The lack of depth input did not bother users at all. If anything, it put them more at ease, since they initially preferred to stand just outside arm’s reach of the screen (likely due more to social inhibitions in the crowded demo atmosphere).

However, there was some small confusion about how interaction from a distance worked – some users expected to be able to point from a static position to move their paddle around, assuming a six-degree-of-freedom (6DOF) input device. Once they realized that was not the case, they easily saw that actual 2D hand motion corresponded directly to paddle motion on the screen. While 6DOF pointing selection has been explored at length in the 3D user interface community, the reach-through interaction allowed by an immaterial display makes its extension to our interfaces awkward. Instead, we find that direct 3D positioning is easier to understand with immaterial interfaces. This was confirmed in our further experiments on gaming with immaterial screens [9, 18]. Touching objects directly was engaging for the players and resulted in physical exercise, similar to e.g., Nintendo Wii Sports games. The game depicted in Fig. 7, for example, required the gamer to react quickly and touch moving objects, which appeared on the screen in varying shapes, sizes, lifetimes, and speeds.

Fig. 7

A gamer playing with the physical exercise 3D game, and the layout of the reach-through physical game. The appearing 3D objects have various sizes, forms, lifetimes and speeds

2D touch screen

Our next interface used the depth coordinate to enable touchscreen-like interaction. Users held their hand away from the screen to move around the screen, and when they approached to within a depth threshold, a click-and-drag action was initiated until they moved further away again. This interface was used for a simple game (Consigalo) in which each user had to collect falling objects of a particular color and move them to their respective goal zones on the side of the screen (cf. Fig. 8b). The game, setup, and audience feedback are described in depth by Olwal et al. [12]. Dual-sided rendering allowed multiple players to interact easily in the shared space without obstructing one another, while traditional displays would force the users to constantly reach across each other to grab objects from all over the screen.
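A minimal sketch of this depth-triggered click-and-drag logic (threshold value and names are assumptions, not taken from [12]):

```python
GRAB_DEPTH = 0.10   # assumed threshold distance from the fog plane (m)

class TouchState:
    """Touchscreen-like interaction driven by distance from the screen:
    crossing the depth threshold starts a drag, retreating ends it."""
    def __init__(self):
        self.dragging = False

    def update(self, hand_pos):
        x, y, z = hand_pos                 # screen frame, fog plane at z = 0
        near = abs(z) < GRAB_DEPTH         # within reach of the plane?
        if near and not self.dragging:
            self.dragging = True           # initiate click-and-drag
        elif not near and self.dragging:
            self.dragging = False          # release on moving away
        return self.dragging, (x, y)       # drag state plus 2D cursor

state = TouchState()
for sample in [(0.1, 0.5, 0.30), (0.1, 0.5, 0.05), (0.2, 0.6, 0.25)]:
    print(state.update(sample))            # False, True, False
```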

Fig. 8

Some of our applications demonstrated at ACM SIGGRAPH 2005 (screenshots rendered in high quality to show more clearly what users see). From left to right: (a) rigid-body simulator with projected 2D interaction; (b) Consigalo touchscreen game with depth-based action triggering; (c) virtual forest navigator with 3D velocity control

Since the interaction required users to be close to the screen, they adapted quickly and, unlike with the rigid-body simulator, did not seem to mind standing near it. This had the added benefit of removing the tendency to point with the markers. The grabbing action of moving along the z-axis was very intuitive, as it reinforced the notion of objects floating in the screen’s plane, and grabbing happened when the user actually ‘physically’ touched the objects.

3D velocity control

To explore the display as a portal to a virtual environment, we used a forest and terrain rendering program (Fig. 8c) with game-style joystick input. The user held a button down and moved their hand to specify a travel velocity for navigating the environment. An interesting use of the display’s immaterial nature was that the user could walk through the display to the other side, to see the same scene from the opposing view. This could be extended to allow a user to physically walk through a virtual portal into a new environment. Alternatively, if the immaterial display were one wall of a CAVE, the user could navigate to a virtual portal that would implicitly select the visualization inside the CAVE.
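A sketch of the velocity-control mapping under our reading of the description (the gain value and all names are hypothetical):

```python
import numpy as np

GAIN = 2.0   # assumed mapping from hand displacement (m) to speed (m/s)

class VelocityNavigator:
    """Game-style travel: while the button is held, the displacement of the
    hand from the press point sets the travel velocity through the scene."""
    def __init__(self):
        self.anchor = None
        self.camera = np.zeros(3)

    def update(self, hand_pos, button_down, dt):
        hand_pos = np.asarray(hand_pos, dtype=float)
        if button_down:
            if self.anchor is None:
                self.anchor = hand_pos       # remember where the press began
            velocity = GAIN * (hand_pos - self.anchor)
            self.camera += velocity * dt     # integrate camera motion
        else:
            self.anchor = None               # releasing the button stops travel
        return self.camera

nav = VelocityNavigator()
nav.update([0.0, 0.0, 0.0], True, 1 / 60)          # press: sets the anchor
print(nav.update([0.0, 0.0, 0.3], True, 1 / 60))   # hand forward: camera drifts
```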

While the navigation concept in this demo was easy for users to understand (likely because of its prevalence in games), the immaterial display arguably did not enhance the user’s immersion in this setting compared to a conventional wall-sized display. Combining a physical walking technique with head tracking (cf. Fig. 12) and velocity navigation would likely improve this perception.

3D position input, without feedback

The first experiment with absolute 3D position input was the elastic head deformer (cf. Fig. 9), which rendered a larger-than-life (approximately 4:1 scale) head model that could be stretched and squashed in any direction. It started out as a 2D interface with orthogonal dragging, but we retrofitted it to allow 3D manipulation. When the user’s 3D position touched the surface of the face, pressing a button grabbed that part of the face, which could then be dragged around.

Fig. 9

The elastic face deformer’s fully 3D interface helps users to control how they distort a virtual head. A 3D cursor allows users to squash or stretch the elastic face, but selection was difficult due to insufficient depth cues

The initial interface was a bit complicated to use, as the spherical cursor controlled by the user’s hand provided only minimal depth cues, due to the lack of perspective in the orthographic projection required for dual-sided rendering.

Even when the cursor’s size was manually adjusted to account for depth, it did not provide a practical way to estimate relative depth from the model, since the cursor was an abstract object with no real size for reference. Additionally, users only received contact feedback from occlusion, which meant that for a spherical cursor, contact would not become apparent until the cursor had gone about halfway inside the surface. Since users received no feedback near the surface, they had to make a meticulous search of the space before manipulation could occur. This application provides good motivation for extending the pseudo-3D capabilities of the display system with head tracking and stereo projection.

3D position input, with feedback

We attempted to address the problems of the elastic head deformer in another system called the Learning Environment with Multi-Media Augmentations (LEMMA) [3], an interactive multimodal learning application designed for teaching various kinds of knowledge with 3D visualizations (Fig. 10). A combination of 2D and 3D interaction was implemented. The 3D position of the user’s marker is projected orthogonally onto the screen. Virtual objects are slightly highlighted when the marker is nearby, and fully lit while dwelling, indicating that they can be interacted with. The interface distinguishes between 2D GUI widgets such as buttons or sliders, 3D objects that can be moved arbitrarily in 3D, and vectors that can point in any direction in 3D.
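The highlight-and-dwell feedback can be expressed as a small state machine; a sketch with assumed radius and dwell values (not specified in [3]):

```python
import time

NEAR_RADIUS = 0.12   # assumed hover radius around an object (m)
DWELL_TIME = 0.5     # assumed dwell duration before full lighting (s)

class HoverHighlighter:
    """Feedback for one virtual object: slightly highlighted when the
    marker is nearby, fully lit once the marker dwells on it."""
    def __init__(self):
        self.hover_since = None

    def update(self, dist_to_object, now=None):
        now = time.monotonic() if now is None else now
        if dist_to_object > NEAR_RADIUS:
            self.hover_since = None
            return "idle"
        if self.hover_since is None:
            self.hover_since = now             # hovering just started
        if now - self.hover_since >= DWELL_TIME:
            return "fully_lit"                 # ready to be interacted with
        return "slightly_highlighted"

h = HoverHighlighter()
print(h.update(0.30), h.update(0.05), h.update(0.05, now=time.monotonic() + 1))
```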

Fig. 10

The LEMMA physics tutorial relied partially on the FogScreen, enabling direct interaction with the objects, e.g., to unobtrusively pull force vectors through the screen

2D widgets are used in a similar way as in traditional GUIs, by hovering over the widget, clicking, and dragging. For 3D objects, a combination of 2D selection and relative 3D movement proved to be the most intuitive and convenient. When selected and dragged, the movement of the object is relative to the user’s starting position. This resembles the HOMER technique [2], except that, because the display is immaterial, the user’s input can be mapped one-to-one to the object’s motion. Scaling is not necessary, as the user can move through the screen if needed. 3D vectors are also selected in 2D, after which the vector is set to point from the vector’s origin to the user’s absolute position. The vector’s direction is controlled by moving its end point (see Fig. 10). Because of the immaterial nature of the screen, the vector can easily be pushed through to point away from the user with a one-to-one absolute mapping, providing a clear way to estimate the vector’s magnitude along the third dimension (see Fig. 11).
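A sketch of the relative drag and vector manipulation as described above (hypothetical class and helper; the one-to-one mapping follows the text):

```python
import numpy as np

class RelativeDrag:
    """HOMER-like manipulation: after 2D selection, the object moves
    relative to where the user's hand was when the drag began, with a
    one-to-one mapping; no scaling is needed, since the user can reach
    or step through the immaterial screen."""
    def __init__(self, object_pos):
        self.object_pos = np.asarray(object_pos, dtype=float)
        self.hand_start = None

    def begin(self, hand_pos):
        self.hand_start = np.asarray(hand_pos, dtype=float)

    def drag(self, hand_pos):
        return self.object_pos + (np.asarray(hand_pos, dtype=float)
                                  - self.hand_start)

def vector_direction(origin, hand_pos):
    """A selected 3D vector points from its origin to the hand's absolute
    position, which may lie on either side of the screen."""
    return np.asarray(hand_pos, dtype=float) - np.asarray(origin, dtype=float)

d = RelativeDrag([0.0, 1.0, 0.0])
d.begin([0.3, 1.0, 0.1])
print(d.drag([0.3, 1.2, -0.2]))                         # follows the hand exactly
print(vector_direction([0, 1, 0], [0.4, 1.3, -0.5]))    # can point through the screen
```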

Fig. 11

The user can move freely and also reach through or walk to the other side of the screen

The improvements over the head deformer were the use of 2D selection and rich feedback, such as object highlighting when the marker is near, and perspective rendering during interaction to show the 3D result of the manipulation. We evaluated the usability of these techniques in a study with 13 undergraduate physics students with varying levels of experience in 3D interaction. All subjects found the interface very easy to use. With only a very brief explanation, none of the users had a problem understanding the concept, and all of them were able to solve the given tasks with ease. The only complaint some users mentioned was that they would have preferred to use pointing gestures for selection.

Pseudo 3D display using motion parallax

Head-tracking is a powerful method of providing 3D information to the user (cf. Fig. 12). Small head motions provide slight parallax, which shows the depth of the observed scene very clearly, making correct 3D positioning possible. During interaction, a foreground object might move completely in front of the screen, no longer intersecting with the plane of the screen at all, yet users are still able to find and manipulate it effectively.
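Head-tracked motion parallax is commonly implemented with an off-axis (asymmetric) view frustum recomputed each frame from the tracked eye position; a generic sketch of that standard technique, not our system’s actual code, under the coordinate assumptions noted in the comments:

```python
import numpy as np

def off_axis_frustum(eye, screen_w, screen_h, near, far):
    """OpenGL-style asymmetric frustum for a tracked eye position.
    Screen frame assumption: origin at the screen center, the fog plane
    at z = 0, the eye at z > 0. Pair this with a view matrix translating
    the world by -eye so the scene stays fixed in space."""
    ex, ey, ez = eye
    s = near / ez                              # screen edges onto near plane
    left, right = (-screen_w / 2 - ex) * s, (screen_w / 2 - ex) * s
    bottom, top = (-screen_h / 2 - ey) * s, (screen_h / 2 - ey) * s
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Recomputed every frame from the tracker: as the head moves, the frustum
# shifts, producing the motion parallax that conveys depth.
print(off_axis_frustum(eye=(0.1, 1.6, 1.2), screen_w=2.0, screen_h=1.5,
                       near=0.1, far=50.0))
```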

Fig. 12

The FogScreen can be used for freely accessible mid-air virtual reality spaces, here a 3D model of a cartoony shark, rendered from the perspective of a tracked user. The 3D shark appears fixed in space and the user can walk around it and even stick his head into the shark’s mouth

However, there is more of a learning curve associated with head-tracked rendering, as users are not accustomed to a display that presents a world-stabilized 3D scene on a planar screen. Proper calibration is critical for a believable experience: when the calibration was slightly off, it completely distorted users’ perception, making input more difficult than on a regular 2D display, as they struggled to figure out what the image meant instead of focusing on the interaction. With proper calibration and after a short learning curve, however, users had little difficulty playing with this interface in 3D.

5 Results

We have demonstrated several types of applications at numerous trade shows, special events and conferences. In our public demos, audiences have interacted with the walk-through screen over extended periods of time and reactions have been enthusiastic throughout. We received many favorable comments and witnessed users being thoroughly engaged in the interaction experience when the interface was simple and direct.

However, the whole concept of an interactive mid-air display is so new that some viewers had initial difficulties grasping the idea of walk-through screens and mid-air interactivity. Simple instructions turned out to be helpful, such as text in mid-air saying “touch me” [16]. Some older people were even afraid of the flow and preferred to walk around the screen. This underlines the importance of well-designed content with any media platform.

Jumisko-Pyykkö et al. [9] compared children’s game experiences between physical gaming on the FogScreen with hand-held pointer interaction and gaming on a conventional desktop computer. Their results underlined that the players were delighted by the novel mid-air gaming environment, its stimulation of physical activity, and its intuitiveness. Interaction with the display was, however, demanding, partly due to the interaction devices used. The immaterial screen is less obtrusive than traditional screens and is also well suited for fitness and physical exercise purposes such as virtual boxing, fencing, karate or other martial arts, and racket sports.

Stereoscopic views on the FogScreen can be generated with both active and passive (e.g., polarized) stereoscopy [4]. The stereoscopic effect seems more pronounced than on a traditional silvered projection screen, which may be due to the lack of a reference plane: objects on the FogScreen appear to float in mid-air instead of being “anchored” in front of the screen plane. The projection “cones” emanating from the projectors due to ambient fog or haze may also contribute to the exaggerated sense of depth. Autostereoscopic display (without glasses) for a single viewer is likewise possible in a limited experimental setup with depth-fused 3D rendering [11].

As detailed in the previous section, head-tracked perspective rendering presents pseudo-3D imagery even without stereoscopy. As the projection plane is 2D, the eye cannot in fact accommodate to the correct distance. Nevertheless, this creates a strong 3D effect of objects floating in thin air. Such an interactive VR mid-air display can be very large, and it does not restrict the user from “touching” and interacting with the objects, leading to a more immersive experience.

The FogScreen seems intriguing to all sorts of people and makes them excited about engaging with it. It is an entirely novel media platform for special effects, digital signage, gaming, and other applications. As the image quality of the devices improves, they can also be used as high-resolution information displays. The development of a true volumetric 3D walk-through display remains an open research issue.

6 Conclusions

Mid-air displays can enhance the viewing experience and bring new dimensions to it in many ways. The major advantages of the FogScreen technology are its mid-air, immaterial nature and the walk- or reach-through possibility (with a resulting magic and excitement factor), as well as its superior image quality and larger screen size compared with earlier particle screens. Additional advantages are the translucent, divergent (or correlated) dual-sided display capability, which enables e.g., face-to-face interaction for multiple viewers, the possibility of direct interaction, and its very intriguing appearance. It provides an engaging and immersive experience if carefully designed and executed.

The permeability of the FogScreen enables new imaging, visualization and user interface possibilities. The users occupy the same space as the image and can directly interact with the displayed mid-air objects as well as select and manipulate them in a natural and intuitive manner without physical limits imposed by screens or confined display volumes.

There are numerous forms, constructions, sizes, variations, and extensions of the screen that could produce very different kinds of devices, displays, content, and applications. Mid-air displays may soon become widely available for location-based advertisement, digital signage, and entertainment, and in the long term also for consumers at home. They may contribute to transforming the future user interface experience and ambient media. They even seem to be a feasible short-cut technology for creating Star Wars-like mid-air displays [13].

We anticipate stereoscopic and user-tracked VR mid-air displays to be very exciting in the future. Mid-air display features can be extended with other visual, auditory, or olfactory technologies, or be integrated with other kinds of visual displays. Even untethered tactile feedback in thin air might be possible [8], which could improve the sense of presence of virtual objects.

Mid-air displays have initially been a novelty, but in the long term they could become lower-cost and also have a big impact on application areas such as CAD, data visualization, digital signage, tele-presence, tele-immersion and tele-conferencing, simulation, and entertainment. These displays enable a user to investigate, try out scenarios, and search the information and visual space to develop a better understanding of the underlying information.

More work is needed on the interaction technology, user interface, and application possibilities, as well as on usability studies and on creating suitable content for the media platform. Our studies have employed only the FogScreen, but the results should also be applicable to other types of walk-through displays (e.g., water and smoke screens), allowing for their wetness and/or lower image quality.

Recent advances in miniaturization, the rapidly decreasing cost of projectors, and innovative new sensors and mid-air displays make it reasonable to envision a not-so-distant future in which intelligent, proactive, and responsive mid-air content and displays in our daily environment function as visualization platforms or control panels.