A Selective Attention Guided Initiative Semantic Cognition Algorithm for Service Robot
With the development of artificial intelligence and robotics, research on service robots has made significant progress in recent years. A service robot is required to perceive users and the environment in an unstructured domestic setting. Based on this perception, it should understand the situation and discover service tasks, so that it can assist humans with home service or health care more accurately and proactively. Humans can focus on the salient things within a mass of observed information, and they are capable of using semantic knowledge to make plans based on their understanding of the environment. Through an intelligent space platform, we aim to apply this process to service robots. This paper proposes a selective attention guided initiative semantic cognition algorithm in intelligent space, specifically designed to provide robots with the cognition needed to perform service tasks. First, an attention selection model is built based on saliency computing and key areas; the area most relevant to the service task is located and referred to as the focus of attention (FOA). Second, a recognition algorithm for the FOA is proposed based on a neural network, through which common objects and user behaviors are recognized. Finally, a unified semantic knowledge base and a corresponding reasoning engine are built on the recognition results. Experiments in a real-life scenario demonstrate that our approach can mimic the human recognition process and enable robots to understand the environment and discover service tasks based on their own cognition. In this way, service robots can act smarter and achieve better service efficiency in their daily work.
Keywords: Service robot · cognition computing · selective attention · semantic knowledge base · artificial neural network
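The three-stage pipeline described in the abstract can be sketched in code. This is a minimal illustration only: the toy contrast-based saliency measure, the table-lookup "recognizer" standing in for the paper's neural network, and the rule set standing in for the reasoning engine are all assumptions, not the authors' implementation.

```python
# Sketch of the pipeline: attention selection -> FOA recognition -> reasoning.
# The saliency measure, classifier stub, and rule base below are hypothetical.

def select_foa(grid):
    """Attention selection: return the coordinates of the most salient cell,
    using local intensity contrast as a stand-in for saliency computing."""
    h, w = len(grid), len(grid[0])

    def contrast(r, c):
        neigh = [grid[r + dr][c + dc]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr or dc) and 0 <= r + dr < h and 0 <= c + dc < w]
        return abs(grid[r][c] - sum(neigh) / len(neigh))

    return max(((r, c) for r in range(h) for c in range(w)),
               key=lambda rc: contrast(*rc))

def recognize(label_map, foa):
    """Recognition: look up what occupies the FOA (stands in for the
    neural-network classifier described in the paper)."""
    return label_map.get(foa, "unknown")

# Toy reasoning engine: (object, user behavior) -> inferred service task.
RULES = {
    ("cup", "sitting"): "deliver water",
    ("book", "sitting"): "adjust lighting",
}

def infer_task(obj, behavior):
    return RULES.get((obj, behavior), "no task")

# Usage: a bright object against a uniform background draws attention.
scene = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
foa = select_foa(scene)                 # the high-contrast cell (1, 1)
obj = recognize({(1, 1): "cup"}, foa)
print(foa, obj, infer_task(obj, "sitting"))
```

The design point the sketch mirrors is that each stage narrows the problem for the next: saliency restricts recognition to one region, and recognition restricts reasoning to a small set of symbolic facts.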
This work was supported by the National Natural Science Foundation of China (Nos. 61773239, 91748115 and 61603213), the Natural Science Foundation of Shandong Province (No. ZR2015FM007), and the Taishan Scholars Program of Shandong Province.