Abstract
Vision is a precious gift, yet an estimated 37 million people worldwide are visually impaired, about 15 million of them in India. They face numerous challenges in their daily lives and often depend on others when traveling to different places. Context awareness therefore plays a key role in the lives of the visually impaired. Many mobile applications aim to assist them, but their dependence on numerous additional resources makes them cumbersome to use. To address this challenge, the proposed mobile-cloud context-aware application acts as a voice chatbot that provides context-aware travel assistance to visually challenged people in specific public environments. It is an interactive application that offers a help desk where users can request the information they need through a speech interface. The application relies on location-based services, including location providers and geo-coordinates, to obtain the latitude and longitude of places. The user's current location is tracked through location services, the distance from that location to the destination is pre-computed, and the application guides the user along the route with audible directions. By answering the queries users pose, it supports the entire journey and helps them travel independently. The application first captures a voice instruction and converts it into text. A contextual LSTM (Long Short-Term Memory) network manages the conversational strategy, analyzes each query, and provides users with answers to whatever questions they ask. It also guides visually impaired users to their destination by detecting obstacles and objects along the way.
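The conversational flow described above (voice instruction converted to text, a contextual model producing an answer, and an audible reply) can be sketched as a minimal pipeline. All function bodies below are hypothetical stand-ins: a real deployment would call a cloud speech-recognition API, the trained contextual LSTM, and a text-to-speech engine rather than these toy placeholders.

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in: a real system would call a speech-recognition service.
    return audio.decode("utf-8")

def answer_query(query: str, context: list) -> str:
    # Stand-in for the contextual LSTM: a trivial keyword lookup
    # with an illustrative, hypothetical station announcement.
    if "platform" in query.lower():
        return "Platform 2 is 40 metres ahead on your left."
    return "Sorry, could you repeat the question?"

def text_to_speech(text: str) -> bytes:
    # Stand-in: a real system would synthesise audio for playback.
    return text.encode("utf-8")

def handle_turn(audio: bytes, context: list) -> bytes:
    """One question-answer turn of the voice assistant."""
    query = speech_to_text(audio)
    reply = answer_query(query, context)
    context.append(query)  # retain history so later answers stay contextual
    return text_to_speech(reply)
```

The running `context` list mirrors the role the contextual LSTM plays in the paper: each answer can depend on what the user asked earlier in the conversation.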
The application draws computational resources, such as location-specific data, from cloud servers, and in turn pushes all collected data back to the cloud for reference and future use.
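The abstract mentions pre-computing the distance from the user's current geo-coordinates to the destination. The paper does not state which formula is used; a common choice for two latitude/longitude points is the haversine great-circle distance, sketched below (the function name and the example coordinates are illustrative, not taken from the paper).

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

# Illustrative use: distance between two points in Chennai,
# e.g. a station at (13.0827, 80.2707) and an airport at (12.9941, 80.1709).
d = haversine_km(13.0827, 80.2707, 12.9941, 80.1709)
```

The result could then be compared against the pre-determined route distance, or used to decide which audible direction to issue next.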
© 2019 Springer Nature Singapore Pte Ltd.
Silviya Nancy, J., Udhayakumar, S., Pavithra, J., Preethi, R., Revathy, G. (2019). Context Aware Self Learning Voice Assistant for Smart Navigation with Contextual LSTM. In: Luhach, A., Jat, D., Hawari, K., Gao, XZ., Lingras, P. (eds) Advanced Informatics for Computing Research. ICAICR 2019. Communications in Computer and Information Science, vol 1075. Springer, Singapore. https://doi.org/10.1007/978-981-15-0108-1_41
Print ISBN: 978-981-15-0107-4
Online ISBN: 978-981-15-0108-1