Context Aware Self Learning Voice Assistant for Smart Navigation with Contextual LSTM

  • Conference paper
  • First Online:
Advanced Informatics for Computing Research (ICAICR 2019)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1075))

Abstract

Vision is a precious gift, yet around 37 million people worldwide are visually impaired, of whom about 15 million live in India. They face numerous challenges in their daily lives and often depend on others to travel between places. Context-awareness plays a key role in supporting the visually impaired. Many mobile applications aim to ease their travel, but dependence on additional hardware resources limits their usefulness. To address this challenge, the proposed mobile-cloud context-aware application acts as a voice chat-bot that provides context-aware travel assistance to visually challenged people in specific public environments. It is an interactive application that offers a help desk where users can request the information they need through a speech interface. The application relies on location-based services, including location providers and geo-coordinates, to obtain the latitude and longitude of places. The user's present location is tracked through location services, the distance from the user's current location to the destination is computed in advance, and the application guides the user along the route with audible directions. By answering the user's queries, it supports the entire journey and helps them travel independently. The application first captures a voice instruction and converts it into text. A contextual LSTM (Long Short-Term Memory) model manages the conversational strategy, analyses each query, and answers the questions users pose. It also guides the visually handicapped to the destination by identifying obstacles and detecting objects in the path.
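The paper does not spell out the LSTM equations it uses for the conversational model. For readers unfamiliar with the building block, the sketch below is the standard LSTM cell forward step in plain numpy; the names `lstm_step`, `W`, `U`, and `b` are illustrative, not taken from the paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM time step.

    x      : input vector, shape (D,)
    h_prev : previous hidden state, shape (H,)
    c_prev : previous cell state, shape (H,)
    W, U, b: stacked gate parameters, shapes (4H, D), (4H, H), (4H,)
    """
    z = W @ x + U @ h_prev + b            # stacked pre-activations, shape (4H,)
    H = h_prev.shape[0]
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i = sigmoid(z[:H])                    # input gate
    f = sigmoid(z[H:2 * H])               # forget gate
    o = sigmoid(z[2 * H:3 * H])           # output gate
    g = np.tanh(z[3 * H:])                # candidate cell update
    c = f * c_prev + i * g                # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c
```

The forget gate is what lets the cell carry conversational context across many turns; a "contextual" LSTM, as described in the abstract, conditions such a cell on the dialogue history rather than on a single utterance.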
The application draws computational resources, such as location-specific data, from cloud servers, and in turn pushes user data back to the cloud server for reference and future use.
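The abstract states that the distance from the user's current location to the destination is pre-determined from geo-coordinates. The paper does not name the formula; assuming the usual great-circle (haversine) distance between two latitude/longitude pairs, a minimal sketch looks like this (the function name `haversine_km` is illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points
    given in decimal degrees, using a mean Earth radius of 6371 km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))
```

On a phone, the two coordinate pairs would come from the platform's location provider and from geocoding the spoken destination; the resulting distance can then be read back to the user as part of the audible directions.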



Corresponding author

Correspondence to J. Silviya Nancy.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Silviya Nancy, J., Udhayakumar, S., Pavithra, J., Preethi, R., Revathy, G. (2019). Context Aware Self Learning Voice Assistant for Smart Navigation with Contextual LSTM. In: Luhach, A., Jat, D., Hawari, K., Gao, XZ., Lingras, P. (eds) Advanced Informatics for Computing Research. ICAICR 2019. Communications in Computer and Information Science, vol 1075. Springer, Singapore. https://doi.org/10.1007/978-981-15-0108-1_41

  • DOI: https://doi.org/10.1007/978-981-15-0108-1_41

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-0107-4

  • Online ISBN: 978-981-15-0108-1

  • eBook Packages: Computer Science, Computer Science (R0)
