Guest Editorial: Augmented Reality Based Framework for Multimedia Training and Learning
We are delighted to present this special issue of Multimedia Tools and Applications on “AR based Framework for Multimedia Training and Learning”. Recent years have seen growing excitement about Augmented/Mixed Reality, driven by announcements of dedicated devices such as Google Glass, to mention only the best known, which promise to augment the field of view with context-dependent content, possibly co-registered to the real world. Despite remaining technical limitations concerning tracking accuracy and robustness or field-of-view width, these technologies clearly hold great potential for a broad range of application fields, and particularly for multimedia training and learning, which could finally move from the computer space to the real world. In this regard, the availability of increasingly powerful mobile platforms (multicore, multi-sensor smartphones and tablets) also represents a great opportunity for researchers and developers, as the ubiquity of these devices opens up exciting new scenarios.
The purpose of this special issue is to explore all aspects of the AR/MR universe, for instance, indoor/outdoor tracking approaches, interaction with augmented contents, visually believable integration of virtual objects into a real environment, multimedia augmentation, etc. The guest editors selected seven contributions on topics highlighting the potential of Augmented/Mixed Reality in light of the latest generation of digital devices, as well as innovative applications of this technology. They are briefly summarized below.
In “The Constrained SLAM framework for non-instrumented Augmented Reality: application to industrial training” ( 10.1007/s11042-015-2968-8), the authors M. Tamaazousti, S. Naudet-Collette, V. Gay-Bellile, S. Bourgeois, B. Besbes and M. Dhome propose a solution that unifies notable SLAM-based localization methods in a single framework called constrained SLAM. The framework provides real-time camera localization in partially known environments, i.e., environments for which a geometric 3D model of one static object in the scene is available. The authors conducted several validation tests to assess the strengths and limitations of the proposed approach in both controlled and uncontrolled scenarios, as well as in indoor and outdoor environments. Moreover, an interactive AR application prototype for industrial education and training, supported by a preliminary evaluation, is described.
The paper “Enabling Consistent Hand-Based Interaction in Mixed Reality by Occlusions Handling” ( 10.1007/s11042-016-3276-7) by F. Narducci, S. Ricciardi and R. Vertucci describes a complete system enabling reliable finger-based interaction and real-time hand-occlusion management in MR environments by means of computer-vision techniques. The system avoids specialized hardware by exploiting a stereo-matching belief-propagation algorithm to detect the user’s hand in the augmented environment and estimate its orientation. The goals of the proposed system are a visually accurate segmentation of the user’s hands, distance-based occlusion management, and user-friendly interaction. The subjective system evaluation highlights the visual consistency and real-time performance provided by the proposed approach. On the other hand, it also emphasizes the drawbacks of low field-of-view HMDs, as well as the refresh-rate limitations of the cameras commonly used for augmenting the scene.
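The distance-based occlusion management mentioned above can be illustrated with a minimal per-pixel depth test. The sketch below is hypothetical (not the authors’ implementation): it assumes a binary hand mask and per-pixel depth estimates, e.g. from stereo matching, are already available, and simply keeps hand pixels visible wherever the hand is closer to the camera than the virtual content.

```python
import numpy as np

def composite_with_occlusion(real_rgb, hand_depth, virtual_rgb, virtual_depth, hand_mask):
    """Per-pixel depth test for hand occlusion (illustrative sketch).

    real_rgb, virtual_rgb: HxWx3 images; hand_depth, virtual_depth: HxW
    depth maps; hand_mask: HxW boolean segmentation of the user's hand.
    """
    out = virtual_rgb.copy()
    # A hand pixel occludes the virtual object only where the hand is
    # closer to the camera than the virtual content at that pixel.
    in_front = hand_mask & (hand_depth < virtual_depth)
    out[in_front] = real_rgb[in_front]
    return out
```

In a real system the hand mask and depth would come from the segmentation and stereo-matching stages, and the composite would be rendered at the display's refresh rate.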
Visual quality of an AR-enhanced field of view is also central in “Exploring legibility of augmented reality X-ray” ( 10.1007/s11042-015-2954-1) by M. E. Chavez-Santos, I. de Souza-Almeida, G. Yamamoto, T. Taketomi, C. Sandor and H. Kato. In this work, the authors compare two augmented reality x-ray visualization methods. To position a virtual object inside a real object, AR x-ray requires partially occluding the virtual object with visually important regions of the real object; in effect, the virtual object becomes less legible than when it is fully visible. Legibility is an important consideration for various applications of AR x-ray. The authors explored legibility in two implementations of AR x-ray, namely edge-based and saliency-based. Their experiments show that users have varied preferences for the amount of occlusion cues in both methods, and the insights from their research can be directly applied to the development of AR x-ray applications.
In “Augmenting Human Senses to Improve the User Experience in Cars: Applying Augmented Reality and Haptics Approaches to Reduce Cognitive Distances” ( 10.1007/s11042-015-2712-4), S. Kim and A. K. Dey investigate exploiting AR in the automotive context by seeking novel ways to represent information that minimize perceptual and cognitive processing workloads during human-computer interaction. The paper explores how technology-driven sensory augmentation impacts human perception and cognition. The experimental analysis and results discuss in detail how sensory augmentation systems support human cognitive processing capability (i.e., visual attention and cognitive load), highlight these effects in two demographic populations with different cognitive abilities (young vs. old), and present the population-specific implications for improving the design of sensory augmentation systems.
The goal of “Training emergency responders through augmented reality mobile interfaces” ( 10.1007/s11042-015-2955-0) by M. Sebillo, G. Vitiello, L. Paolino and A. Ginige is to address the general concern of training emergency responders to become familiar with the mobile technology adopted to perform their tasks. Indeed, the use of IT has undoubtedly improved the whole emergency management process by providing decision makers and on-site operators with effective tools to support their activities. However, in crisis situations where people are under stress, the use of emergency information systems is often hindered by a lack of familiarity with them. Moreover, the importance of appropriate training is widely recognized by all the actors in the emergency domain, where collaboration among responders and increased situation awareness about crisis evolution are key factors in reducing human and property losses. The proposed AR-based training system combines the pervasiveness of mobile technology with the intuitiveness of augmented-reality interaction, with the aim of motivating trainees and improving their situation awareness through low-cost “ubiquitous learning”. Using two different interactive visualization modalities, the training system guides trainees through a scenario enriched by virtual content, where data can be aggregated and associated with visual metaphors. Their interaction produces both analytical and synthetic effects, thus contributing to building trainees’ personal mobile experience with new technologies.
In “Improving performance on object recognition for real-time on mobile devices” ( 10.1007/s11042-015-2999-1) by J. C. Piao, H. S. Jung, C. P. Hong and S. D. Kim, the authors describe a method to improve the performance of object recognition, even on mobile systems, based on three different approaches. The first approach aims at reducing the execution time of the critical processes that act as bottlenecks in the overall algorithm: the method analyzes the major components to measure their execution times, producing a log file, and then optimizes the functions called in the most time-intensive components. The second approach applies parallel processing through selected OpenCV functions already designed to support it. Finally, the third approach changes the granularity of the unit task required for the recognition process, trading quality for performance. Experimental results show that the combined methods achieve a significantly lower execution time than the original algorithm while preserving the accuracy requirement, making the approach particularly suited to the mobile environment.
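The first approach, timing the major components to locate the bottleneck, can be sketched in a few lines. The snippet below is an illustrative stand-in, not the authors’ code: the stage names and workloads are hypothetical placeholders for the detection, description, and matching steps of a recognition pipeline.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def timed(stage):
    """Accumulate wall-clock time per pipeline stage, mimicking the
    execution-time log used to identify bottleneck components."""
    t0 = time.perf_counter()
    try:
        yield
    finally:
        timings[stage] = timings.get(stage, 0.0) + time.perf_counter() - t0

# Hypothetical workloads standing in for the stages of a recognition loop.
with timed("detect"):
    sum(i * i for i in range(50_000))
with timed("describe"):
    sum(i for i in range(10_000))
with timed("match"):
    sum(i for i in range(5_000))

# The most time-intensive stage is the candidate for optimization.
bottleneck = max(timings, key=timings.get)
```

Once the bottleneck stage is known, it becomes the target for the second and third approaches, i.e., parallelization and coarser task granularity.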
Finally, the paper “DiedricAR: A mobile Augmented Reality system designed for the ubiquitous Descriptive Geometry learning” ( 10.1007/s11042-016-3384-4) by E. G. de Rave, F. J. Jimenez-Hornero, A. B. Ariza-Villaverde and J. Taguas-Ruiz presents a Mobile Augmented Reality System (MARS) called DiedricAR, aimed at AR-aided teaching of the Dihedral System and, more generally, at improving students’ spatial ability. The main objective of this work is to advance over existing AR systems for learning Descriptive Geometry, but the paper also explores other relevant related topics, such as AR software execution and stability on several devices and the relationship between application design and user experience.
It is the guest editors’ opinion that the aforementioned contributions provide valuable insight into the diverse range of issues currently being investigated in the field of augmented and mixed reality, particularly on mobile devices, covering both research and application aspects that will hopefully stimulate further development on the subject matter. We hope you enjoy this special issue and take some inspiration from it for your own future research.