Interactive Gestures for Liver Angiography Operation
A central challenge in bringing large interactive displays into operating rooms (ORs) is defining interaction methods that are efficient and easy for physicians to learn. Beyond traditional input devices such as the mouse and keyboard, we have developed a multimodal system with two vision-based human-computer interaction (HCI) subsystems that simplify how surgeons interact with the medical images shown on an LCD display. The goal of this work is a gesture recognition system that is fast, accurate, and easy to learn. The first subsystem is a laser pointer interaction framework that supports a 2D stroke gesture interface. Recorded laser gestures are recognized with two different algorithms: dynamic time warping (DTW) and the $1 recognizer. Our experimental results showed that the DTW algorithm performs better, with an overall accuracy of 90%. The second prototype presents an intuitive HCI for manipulating images with freehand gestures. To strengthen the gesture recognition process, the system incorporates contextual information to determine the user's intent to interact with the large display. Two cameras observe the surgeon's hand movements to continuously determine and monitor what the surgeon intends to perform. Experimental results showed a recognition accuracy of 95% when contextual information is integrated.
Keywords: Gesture recognition · Laser pointers · Hand gestures
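The template-matching step behind the laser gesture interface can be illustrated with a minimal sketch. The abstract names DTW as the better-performing recognizer; the snippet below is a generic DTW implementation for 2D stroke paths, not the authors' code, and the function and label names are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): dynamic time warping
# (DTW) distance between two 2D stroke paths, used to match a recorded
# laser gesture against a set of stored templates.
import math


def dtw_distance(path_a, path_b):
    """DTW distance between two sequences of (x, y) points."""
    n, m = len(path_a), len(path_b)
    inf = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning path_a[:i] with path_b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(path_a[i - 1], path_b[j - 1])  # Euclidean point distance
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in path_a
                                 cost[i][j - 1],      # skip a point in path_b
                                 cost[i - 1][j - 1])  # align the two points
    return cost[n][m]


def classify(stroke, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(stroke, templates[label]))
```

In practice the recorded stroke would first be resampled and normalized for scale and position, as both DTW- and $1-style recognizers typically require.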