Computationally Efficient ANN Model for Small-Scale Problems

  • Shikhar Sharma
  • Shiv Naresh Shivhare
  • Navjot Singh
  • Krishan Kumar
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 748)

Abstract

In the current age of digital photography, digital information is expanding exponentially, and its use in fields such as research and automation has risen sharply over the last decade. Machines have long been employed to automate tasks, and this has naturally extended to the task of understanding digital information, known as computer vision. Machine learning has always played an eminent role in computer vision challenges, but with the emergence of deep learning, machines now outperform humans on several such tasks. This has led to the widespread use of deep learning techniques such as the convolutional neural network (CNN) in almost every machine vision task. In this paper, a new technique is proposed that can be used in place of a CNN for solving elementary computer vision problems. The work uses the ability of the spatial transformer network (STN) to effectively extract spatial information from an input. Such information is invariant to spatial transformations and can be fed to a plainer model such as an artificial neural network (ANN) without compromising performance.
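To make the pipeline concrete, a minimal sketch of the idea follows, assuming PyTorch and a 28x28 grayscale input (e.g., MNIST). A localization network predicts an affine transform, the input is resampled accordingly, and the spatially normalized result is classified by a plain fully connected ANN. The class name STN_ANN, the layer sizes, and the input shape are illustrative assumptions, not the authors' reported architecture.

    # Minimal sketch of an STN front end feeding a plain ANN classifier.
    # Assumptions: PyTorch, 28x28 grayscale input; sizes are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class STN_ANN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # Localization network: predicts the 2x3 affine transform parameters.
            self.loc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 64), nn.ReLU(),
                nn.Linear(64, 6),
            )
            # Initialize to the identity transform so training starts stably.
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(
                torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
            # Plain fully connected ANN on the spatially normalized input.
            self.ann = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 256), nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(256, num_classes),
            )

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)            # affine parameters
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            x = F.grid_sample(x, grid, align_corners=False)  # resample input
            return self.ann(x)                            # classify with ANN

    model = STN_ANN()
    logits = model(torch.randn(4, 1, 28, 28))  # logits of shape (4, 10)

Because the STN canonicalizes the input before classification, the downstream network needs no convolutional feature hierarchy to cope with translation and rotation, which is what makes a fully connected ANN sufficient for small-scale problems.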

Keywords

Spatial transformer networks · Artificial neural networks · CNN · Deep learning

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Shikhar Sharma¹
  • Shiv Naresh Shivhare¹
  • Navjot Singh¹
  • Krishan Kumar¹

  1. Department of Computer Science and Engineering, National Institute of Technology Uttarakhand, Srinagar (Garhwal), India
