
Applying Saliency-Based Region of Interest Detection in Developing a Collaborative Active Learning System with Augmented Reality

  • Trung-Nghia Le
  • Yen-Thanh Le
  • Minh-Triet Tran
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8526)

Abstract

Learning activities need not take place only in traditional physical classrooms; they can also be set up in virtual environments. The authors therefore propose a novel augmented reality system to organize a class that supports real-time collaboration and active interaction between educators and learners. A pre-processing phase is integrated into a visual search engine, the heart of the system, to recognize printed materials with low computational cost and high accuracy. The authors also propose a simple yet efficient visual saliency estimation technique based on regional contrast to quickly filter out regions of low information content in printed materials. This technique not only reduces the unnecessary computational cost of keypoint descriptors but also increases the robustness and accuracy of visual object recognition. Experimental results show that the whole visual object recognition process can be sped up by a factor of 19 and the accuracy can increase by up to 22%. Furthermore, this pre-processing stage is independent of the choice of features and matching model in a general recognition pipeline, so it can be used to boost the performance of existing systems to real-time levels.
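
To make the pre-processing idea concrete, below is a minimal sketch of saliency-based region-of-interest filtering applied before keypoint extraction. It is not the authors' implementation: the regional-contrast saliency is approximated here by a simple global color-contrast map, ORB is used as a stand-in detector, and the input file name, the keep_ratio parameter, and the helper functions are illustrative assumptions.

```python
# Sketch: saliency-based ROI filtering before keypoint extraction.
# The saliency map below is a simple global color-contrast approximation,
# NOT the paper's exact regional-contrast formulation.

import cv2
import numpy as np

def saliency_roi_mask(image_bgr, keep_ratio=0.4):
    """Return a binary mask that keeps only the most salient pixels."""
    # Smooth, convert to Lab, and score each pixel by its distance to the mean color.
    blurred = cv2.GaussianBlur(image_bgr, (5, 5), 0)
    lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean_color = lab.reshape(-1, 3).mean(axis=0)
    saliency = np.linalg.norm(lab - mean_color, axis=2)
    # Keep the top `keep_ratio` fraction of pixels as the region of interest.
    threshold = np.quantile(saliency, 1.0 - keep_ratio)
    return (saliency >= threshold).astype(np.uint8) * 255

def detect_in_roi(image_bgr):
    """Detect keypoints and descriptors only inside the salient region."""
    mask = saliency_roi_mask(image_bgr)
    orb = cv2.ORB_create(nfeatures=1000)  # any detector that accepts a mask works
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, mask)
    return keypoints, descriptors, mask

if __name__ == "__main__":
    img = cv2.imread("printed_page.jpg")  # hypothetical photo of a printed page
    kps, desc, roi = detect_in_roi(img)
    print(f"{len(kps)} keypoints extracted inside the salient ROI")
```

Because the filtering happens before descriptor computation, any downstream feature and matching model can consume the reduced keypoint set unchanged, which is the sense in which the pre-processing stage is independent of the rest of the pipeline.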

Keywords

Smart Education, Active Learning, Visual Search, Saliency Image, Human-Computer Interaction


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Trung-Nghia Le (1, 2)
  • Yen-Thanh Le (1)
  • Minh-Triet Tran (1)
  1. University of Science, VNU-HCM, Ho Chi Minh City, Vietnam
  2. John von Neumann Institute, VNU-HCM, Ho Chi Minh City, Vietnam