
Gesture-Based Configuration of Location Information in Smart Environments with Visual Feedback

  • Carsten Stocklöw
  • Martin Majewski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9189)

Abstract

The location of objects and devices in a smart environment is a key piece of information for enabling advanced and sophisticated interaction use cases and for supporting the user in daily activities and emergency situations. To acquire this information, we propose a semi-automatic approach for configuring the location, size, and orientation of objects in the environment together with their semantic meaning. Such configuration is typically done with graphical user interfaces that show either a list of objects or their 2D or 3D virtual representations.
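To make the configured information concrete, the following Python sketch shows one plausible record for a configured object. The class name, fields, and units are illustrative assumptions, not the paper's actual representation.

    from dataclasses import dataclass

    @dataclass
    class ConfiguredObject:
        # Hypothetical record for one configured object; all field
        # names and units are assumptions for illustration only.
        label: str              # semantic meaning, e.g. "floor lamp"
        position: tuple         # (x, y, z) center in room coordinates, meters
        size: tuple             # (width, height, depth) bounding-box extents, meters
        orientation_deg: float  # rotation around the vertical axis, degrees

    # Example: a lamp whose location, size, and orientation have been configured.
    lamp = ConfiguredObject(
        label="floor lamp",
        position=(2.1, 0.0, 3.4),
        size=(0.4, 1.6, 0.4),
        orientation_deg=90.0,
    )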

However, there is a gap between the real physical world and the abstract virtual representation that users must bridge themselves. We therefore propose visual feedback directly in the physical world using a robotic laser pointing system.
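To illustrate the kind of computation such a pointing system performs, the sketch below converts a target position into pan and tilt angles for a two-axis laser unit. The ideal-gimbal geometry and coordinate convention (y up) are assumptions; a real device would additionally require calibration of mirror and mounting offsets.

    import math

    def pan_tilt_to_target(laser_pos, target):
        # Aim a steerable laser mounted at laser_pos toward target.
        # Both points are (x, y, z) in room coordinates with y up.
        # Assumes an ideal two-axis gimbal with a vertical pan axis.
        dx = target[0] - laser_pos[0]
        dy = target[1] - laser_pos[1]
        dz = target[2] - laser_pos[2]
        pan = math.atan2(dx, dz)                   # rotation around the vertical axis
        tilt = math.atan2(dy, math.hypot(dx, dz))  # elevation above the horizontal plane
        return pan, tilt

    # Point at an object at (2.1, 0.0, 3.4) from a ceiling unit at (0, 2.5, 0).
    pan, tilt = pan_tilt_to_target((0.0, 2.5, 0.0), (2.1, 0.0, 3.4))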

Keywords

Smart environments · Configuration · Personalization

Acknowledgements

This work is partially financed by the European Commission under the FP7-ICT-Project Miraculous Life (grant agreement no. 611421).

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Fraunhofer Institute for Computer Graphics Research IGD, Darmstadt, Germany
