EyeHope

(A Real Time Emotion Detection Application)
  • Zulfiqar A. Memon
  • Hammad Mubarak
  • Aamir Khimani
  • Mahzain Malik
  • Saman Karim
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 858)

Abstract

This paper describes an Android application named "EyeHope" whose sole purpose is to reduce the dependency of blind and visually impaired people on others. The application helps narrow the communication gap by allowing them to perceive and identify the facial expressions of the person they are communicating with, whether that person is speaking or simply listening. The paper also describes how the system communicates with blind and visually impaired users in real time: frames captured by the mobile camera are processed, where processing includes face detection and emotion detection using OpenCV. The output is the emotion of the person whose image has been processed. This output is converted from text to speech so that the blind user can listen to it through earphones connected to his or her phone. Its implementation and future enhancements will improve the lifestyle of blind and visually impaired people by allowing them to communicate effectively with the people around them.
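The pipeline outlined above (camera frame, face detection, emotion classification, spoken output) could be organised roughly as in the following Java sketch for Android with the OpenCV SDK. This is an illustration only: the class name EmotionPipeline and the classifyEmotion placeholder are hypothetical and are not taken from the paper, which does not publish its source code.

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.objdetect.CascadeClassifier;

    import android.speech.tts.TextToSpeech;

    // Called once per camera frame, e.g. from an OpenCV
    // CameraBridgeViewBase.CvCameraViewListener2 callback in the host Activity.
    public class EmotionPipeline {
        private final CascadeClassifier faceDetector; // loaded from a Haar cascade XML file
        private final TextToSpeech tts;               // initialised by the host Activity

        public EmotionPipeline(CascadeClassifier faceDetector, TextToSpeech tts) {
            this.faceDetector = faceDetector;
            this.tts = tts;
        }

        public void processFrame(Mat rgbaFrame) {
            // Face detection works on a grayscale copy of the frame.
            Mat gray = new Mat();
            Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

            MatOfRect faces = new MatOfRect();
            faceDetector.detectMultiScale(gray, faces);

            for (Rect face : faces.toArray()) {
                String emotion = classifyEmotion(gray.submat(face));
                // Speak the detected emotion so the user hears it through earphones.
                tts.speak(emotion, TextToSpeech.QUEUE_FLUSH, null, "emotion");
            }
        }

        private String classifyEmotion(Mat faceRoi) {
            // Placeholder: the paper does not specify the classifier used here; a
            // trained model (e.g. LBP features with an SVM, or a small CNN) would be
            // applied to the face region and return a label such as "happy" or "sad".
            return "neutral";
        }
    }

In the actual application the frames would come from the phone's camera preview, and the placeholder classifier would be replaced by the trained emotion model whose label is passed to the text-to-speech engine.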

Keywords

Emotion detection · Computer vision · Image processing · Assisting blind people

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Zulfiqar A. Memon (1)
  • Hammad Mubarak (1)
  • Aamir Khimani (1)
  • Mahzain Malik (1)
  • Saman Karim (1)
  1. Department of Computer Science, National University of Computer and Emerging Sciences (NUCES-FAST), Karachi, Pakistan
