Stroke-Based Semi-automatic Region of Interest Detection Algorithm for In-Situ Painting Recognition
Under changes in illumination and viewing direction, the ability to accurately detect Regions of Interest (ROIs) is important for robust recognition. In this paper, we propose a stroke-based semi-automatic ROI detection algorithm that uses adaptive thresholding and the Hough transform for in-situ painting recognition. The proposed algorithm handles paintings with both simple and complicated textures by adaptively selecting the threshold; the dominant edges extracted with the selected threshold enable the Hough transform to succeed. The algorithm is also easy to learn, requiring only minimal participation from the user: a single diagonal stroke from one corner of the ROI to the opposite corner. Although the stroke specifies only two vertex search regions, the algorithm detects the unspecified vertices by estimating probable vertex positions from appropriate lines passing through the pre-detected vertices. In this way, it detects the painting region accurately (1.16 pixels of error) even when the user views the painting from the side and provides inaccurate input points (4.53 pixels of error). Finally, the proposed algorithm achieves fast processing on mobile devices by adopting the Local Binary Pattern (LBP) method and normalizing the size of the detected ROI; the ROI image becomes smaller in a general code format for recognition while preserving high recognition accuracy (99.51%). As such, this work is expected to be useful for a mobile gallery viewing system.
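The edge-extraction stage described above (adaptive thresholding followed by Hough-transform line detection) can be sketched in pure Python under simplifying assumptions. This is an illustrative sketch, not the authors' implementation: the local-mean thresholding rule, the window size, and the dictionary-based Hough accumulator are all assumptions made here for clarity.

```python
import math

def adaptive_threshold(img, block=3, c=0):
    # Mark a pixel as an edge candidate if it exceeds the mean of its
    # local block-by-block neighborhood by more than c. This local-mean
    # rule is a hypothetical stand-in for the paper's adaptive threshold.
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    r = block // 2
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > mean + c else 0
    return out

def hough_lines(edges, n_theta=180):
    # Vote in (rho, theta) space: each edge pixel votes for every line
    # x*cos(theta) + y*sin(theta) = rho that passes through it.
    # Returns the (rho, theta_index) cell with the most votes,
    # i.e. the dominant line in the edge map.
    h, w = len(edges), len(edges[0])
    acc = {}
    for y in range(h):
        for x in range(w):
            if not edges[y][x]:
                continue
            for t in range(n_theta):
                theta = math.pi * t / n_theta
                rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
                acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return max(acc, key=acc.get)
```

In a full pipeline, the strongest few lines returned by the accumulator would be intersected to obtain candidate painting-boundary vertices; only the single strongest line is returned here to keep the sketch short.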
Keywords: Semi-automatic ROI Detection · Hough-transform · Planar Object Recognition · Local Binary Pattern
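The Local Binary Pattern descriptor used for the recognition stage can likewise be sketched in its basic 8-neighbor form. The comparison rule (`neighbor >= center`) and the 256-bin histogram are the standard textbook formulation and are used here only as an illustration; the paper's exact LBP variant and normalization may differ.

```python
def lbp_code(img, y, x):
    # Basic 8-neighbor LBP: compare each neighbor (clockwise from the
    # top-left) with the center pixel and pack the results into one byte.
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
            img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
            img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    # 256-bin histogram of LBP codes over interior pixels; a size-normalized
    # ROI yields a fixed-length descriptor suitable for fast matching.
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

Because the ROI is normalized to a fixed size before the histogram is computed, descriptors from different captures of the same painting stay comparable, which is what allows the fast on-device matching the abstract reports.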