
Computationally Efficient ANN Model for Small-Scale Problems

  • Conference paper
  • First Online:
Machine Intelligence and Signal Analysis

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 748))

Abstract

In the current age of digital photography, digital information is growing exponentially, and its use in fields such as research and automation has risen sharply over the last decade. Machines have long been employed to automate tasks, and this has extended to the task of understanding digital visual information, known as computer vision. Machine learning has always played an eminent role in computer vision challenges, but with the emergence of deep learning, machines now outperform humans on several of them. This has led to the widespread use of deep learning techniques such as the convolutional neural network (CNN) in almost every machine vision task. In this paper, a new technique is proposed that can be used in place of a CNN for solving elementary computer vision problems. The work uses the ability of the spatial transformer network (STN) to effectively extract spatial information from an input. Such information is invariant to spatial distortions and can be used as input to a plainer neural network, such as an artificial neural network (ANN), without compromising performance.
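The abstract describes the proposed pipeline only at a high level. The following is a minimal sketch, not the authors' code, of how an STN front end can canonicalise an input image and then feed a plain fully connected ANN classifier. It assumes a PyTorch implementation and an MNIST-like 1x28x28 input; the class name STNANN and all layer sizes are illustrative assumptions.

    # Minimal sketch (illustrative only): STN front end + plain ANN classifier.
    # Assumes a 1x28x28 grayscale input (e.g. MNIST); all sizes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class STNANN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            # Localisation network: predicts the 2x3 affine transform parameters.
            self.localization = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(True),
                nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(True),
            )
            self.fc_loc = nn.Sequential(
                nn.Linear(10 * 3 * 3, 32), nn.ReLU(True),
                nn.Linear(32, 6),
            )
            # Initialise the transform to identity so training starts stable.
            self.fc_loc[2].weight.data.zero_()
            self.fc_loc[2].bias.data.copy_(
                torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
            # Plain ANN (multilayer perceptron) on the spatially normalised image.
            self.ann = nn.Sequential(
                nn.Flatten(),
                nn.Linear(28 * 28, 256), nn.ReLU(True), nn.Dropout(0.5),
                nn.Linear(256, num_classes),
            )

        def stn(self, x):
            # Predict an affine transform and resample the input with it.
            xs = self.localization(x)
            theta = self.fc_loc(xs.view(xs.size(0), -1)).view(-1, 2, 3)
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)

        def forward(self, x):
            x = self.stn(x)      # spatially normalised (invariant) representation
            return self.ann(x)   # classification by a plain fully connected ANN

The design choice this mirrors is that the learned affine transform removes translation, scale, and rotation nuisance from the image before classification, so the classifier itself can remain a small multilayer perceptron rather than a deep CNN.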


Notes

  1. http://yann.lecun.com/exdb/mnist/.

  2. https://www.cs.toronto.edu/~kriz/cifar.html.

  3. http://empslocal.ex.ac.uk/people/staff/np331/index.php?section=FingerSpellingDataset.

  4. https://www.cs.toronto.edu/~kriz/cifar.html.

  5. http://grail.cs.washington.edu/projects/deepexpr/ferg-db.html.

  6. http://www.anefian.com/research/.


Author information


Corresponding author

Correspondence to Shikhar Sharma.



Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Sharma, S., Shivhare, S.N., Singh, N., Kumar, K. (2019). Computationally Efficient ANN Model for Small-Scale Problems. In: Tanveer, M., Pachori, R. (eds) Machine Intelligence and Signal Analysis. Advances in Intelligent Systems and Computing, vol 748. Springer, Singapore. https://doi.org/10.1007/978-981-13-0923-6_37
