
Image Classification in the Frequency Domain with Neural Networks and Absolute Value DCT

  • Florian Franzen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10884)

Abstract

In this work, we explain how to classify images with neural networks purely in the frequency domain. This is achieved with the help of the discrete cosine transform (DCT), whose coefficients are converted to absolute values. After explaining the method and the network architecture, we evaluate it on a standard dataset for handwritten digit recognition and reach an accuracy of 0.9805 in the frequency domain. By superposing the DCTs, we reveal the patterns learned by the network. We then present experiments with real images in which classification in the frequency domain outperforms the results obtained with the same network configuration in the spatial domain.
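The paper does not include code, so the following is a minimal sketch of the pipeline described above: each image is transformed with a 2D DCT, the coefficients are replaced by their absolute values, and the resulting feature vectors are fed to a small fully connected network. The choice of scikit-learn's 8x8 digits dataset and an MLPClassifier is an assumption for illustration only, not the authors' dataset or architecture.

    # Sketch: frequency-domain classification with an absolute-value DCT.
    # Assumption: scikit-learn's 8x8 digits dataset and a plain MLP stand in
    # for the dataset and network used in the paper.
    import numpy as np
    from scipy.fft import dctn
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()                      # 8x8 grayscale digit images
    images, labels = digits.images, digits.target

    def abs_dct(img):
        # 2D DCT of the image, then absolute value of every coefficient
        return np.abs(dctn(img, norm="ortho")).ravel()

    features = np.array([abs_dct(img) for img in images])

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0)

    # Small fully connected network trained purely on frequency-domain features
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)
    print("frequency-domain accuracy:", clf.score(X_test, y_test))

Taking the absolute value removes the sign (phase-like) information of the DCT coefficients, which is the key simplification the title refers to; the network then learns from the magnitudes of the frequency components alone.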

Keywords

Computer vision · Fourier domain · Neural networks

Notes

Acknowledgments

The author wishes to thank Prof. Dr. Chunrong Yuan for her support.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. University of Applied Sciences, Cologne, Germany
