Model-Based Multi-touch Gesture Interaction for Diagram Editors

  • Florian Niebling
  • Daniel Schropp
  • Romina Kühn
  • Thomas Schlegel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8511)

Abstract

Many of today's software development processes include model-driven engineering techniques. They employ domain models, i.e. formal representations of knowledge about an application domain, to enable the automatic generation of parts of a software system. Tools supporting model-driven engineering are, however, often desktop-based single-user systems, while in practice the design of components or larger systems is still frequently conducted on whiteboards or flip charts. Our work focuses on interaction techniques for developing gesture-based diagram editors that support teams in establishing domain models from a given meta-model during the development process. Users or groups of users can instantiate meta-models by free-hand or pen-based sketching of components on large multi-touch screens. In contrast to previous work, the description of multi-touch gestures is derived directly from the graphical model representing the data.
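
To make the last point more concrete: the paper itself contains no code, but one way to derive a gesture description directly from a graphical model is to take the outline of a meta-model element's figure and turn it into a fixed-length point template for a $1-family stroke recognizer (Wobbrock et al.'s $1 and its multistroke successors). The following Java sketch is purely illustrative; all class and method names are assumptions, not the authors' implementation, and it assumes the figure exposes its outline as a polyline.

```java
// Hypothetical sketch: deriving a $1-style gesture template from the outline
// of a diagram element, so that sketching the element's shape on a multi-touch
// surface can later be matched against it. Names are illustrative only.
import java.awt.geom.Point2D;
import java.util.ArrayList;
import java.util.List;

public class ShapeGestureTemplate {

    /** Resamples a polyline (e.g. the outline of a meta-model element's figure)
     *  into n equidistant points, as done in $1-family recognizers. */
    public static List<Point2D.Double> resample(List<Point2D.Double> points, int n) {
        double interval = pathLength(points) / (n - 1);
        double accumulated = 0.0;
        List<Point2D.Double> resampled = new ArrayList<>();
        resampled.add(points.get(0));
        List<Point2D.Double> work = new ArrayList<>(points);
        for (int i = 1; i < work.size(); i++) {
            Point2D.Double prev = work.get(i - 1);
            Point2D.Double curr = work.get(i);
            double d = prev.distance(curr);
            if (accumulated + d >= interval) {
                double t = (interval - accumulated) / d;
                Point2D.Double q = new Point2D.Double(
                        prev.x + t * (curr.x - prev.x),
                        prev.y + t * (curr.y - prev.y));
                resampled.add(q);
                work.add(i, q);            // continue resampling from the inserted point
                accumulated = 0.0;
                if (resampled.size() == n) break;
            } else {
                accumulated += d;
            }
        }
        while (resampled.size() < n) {     // guard against rounding at the path end
            resampled.add(points.get(points.size() - 1));
        }
        return resampled;
    }

    private static double pathLength(List<Point2D.Double> points) {
        double length = 0.0;
        for (int i = 1; i < points.size(); i++) {
            length += points.get(i - 1).distance(points.get(i));
        }
        return length;
    }

    public static void main(String[] args) {
        // Outline of a rectangular "class" node taken from a graphical model figure.
        List<Point2D.Double> outline = List.of(
                new Point2D.Double(0, 0), new Point2D.Double(100, 0),
                new Point2D.Double(100, 60), new Point2D.Double(0, 60),
                new Point2D.Double(0, 0));
        List<Point2D.Double> template = resample(new ArrayList<>(outline), 64);
        System.out.println("Template points: " + template.size());
    }
}
```

A recognizer seeded with templates generated this way could map a free-hand stroke back to the meta-model element it most closely resembles, which is the general idea of deriving gestures from the graphical model rather than defining them by hand.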

Keywords

Multi-touch gestures · Model-based development


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Florian Niebling (1)
  • Daniel Schropp (1)
  • Romina Kühn (1)
  • Thomas Schlegel (1)
  1. Institute of Software and Multimedia Technology, Technische Universität Dresden, Dresden, Germany
