
Applied Intelligence

Volume 49, Issue 2, pp 319–334

Cloud robot: semantic map building for intelligent service task

  • Hao Wu
  • Xiaojian Wu
  • Qing Ma (corresponding author)
  • Guohui Tian
Article

Abstract

When a robot provides intelligent services, it must obtain a semantic map of its complex environment. Robot vision is commonly used to acquire the semantic concepts and relations of objects and rooms in indoor environments, which are then labeled as semantic information on the map. In a real indoor environment, the great variety of objects and their complex arrangement make it difficult to build a semantic map whose semantic database is both large in scale and efficient to query. By combining cloud technology with a purpose-built cloud semantic database, a cloud-based structure for acquiring environmental semantics is constructed: the cloud robot obtains not only a geometric description of the environment but also a semantic map containing object relationships, drawn from a rich semantic database of the complex environment. This approach addresses the low reliability of adding semantic information, errors in updating the map, and the lack of scalability in semantic-map construction. A support vector machine (SVM) is used to classify the semantic subdatabase, on the basis of which a feature model database is formed by extracting key feature points using network text classification. By combining the semantic subdatabase with the semantic classification list, objects can be identified quickly. Building on the resulting cloud semantic database, a cloud semantic map for intelligent service tasks can be constructed. Simulated experiments analyze the classification efficiency of the semantic database and verify the validity of the method.
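The SVM classification step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes textual object descriptions as input, scikit-learn's `TfidfVectorizer` as a stand-in for key-feature extraction via network text classification, and `LinearSVC` as the SVM; the category names and training sentences are invented for demonstration.

```python
# Sketch of SVM-based classification of a semantic subdatabase:
# textual object descriptions are mapped to semantic categories.
# All category names and example texts below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy semantic subdatabase: object descriptions labeled by room category.
texts = [
    "ceramic mug with handle on kitchen counter",
    "glass cup beside the sink",
    "soft pillow on the bed",
    "folded blanket on the mattress",
    "office chair at the desk",
    "wooden stool near the table",
]
labels = ["kitchen", "kitchen", "bedroom", "bedroom", "study", "study"]

# TF-IDF features (a proxy for extracted key feature points)
# fed into a linear SVM classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

# Classify a new object description against the subdatabase.
print(clf.predict(["coffee mug next to the kettle"])[0])
```

In a cloud setting, the trained classifier and the subdatabase would live on the server, so the robot only uploads a description and receives the matched semantic category.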

Keywords

Intelligent service task · Semantic map · SVM · Cloud database

Notes

Acknowledgements

This paper is supported by the Natural Science Foundation of Shandong Province (ZR2015FM007 and ZR2017MF014), the Shandong Major Research Plan (2015GGX103034 and 2015ZDXX0101F03), the National Natural Science Foundation of China (61573216), and the Taishan Scholars Program of Shandong Province.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. School of Control Science and Engineering, Shandong University, Jinan, China
