Image Compression Using Pixel Neural Networks

  • Wladyslaw Skarbek
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 42)


A subclass of recurrent neural networks, called Pixel Neural Networks (PNN), is defined. N = mn neural (pixel) elements are arranged in an m × n array. Each pixel has few input links, e.g. fewer than 10 ≪ N. The limit state (if it exists) defines a 2D pattern, i.e. an image. Several sufficient conditions for the deterministic and stochastic convergence of PNN networks are given. For LOPNN, a special subclass of PNN, a necessary and sufficient condition for convergence is found. It turns out that PNN networks can represent real-life images, and that fractal compression operators can be implemented by networks from the special subclass FBPNN. Therefore they can be used for still-image compression. Moreover, the FBPNN representation reduces the decoding time and the memory use by about 50%. Finally, PNN networks can be used as components of a high-performance associative memory.
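As a rough illustration of the decoding idea only (not the paper's actual construction), the sketch below iterates a contractive affine pixel network, in which each pixel is updated from a handful of linked pixels, starting from an arbitrary initial image. When the update map is a contraction, the Banach fixed-point theorem guarantees convergence to a unique limit state independent of the starting image. The link offsets, weights, and contraction factor `s` here are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def pnn_step(x, offsets, weights, bias, s=0.7):
    """One synchronous update: each pixel becomes an affine function of a few
    linked pixels (wrap-around indexing); s < 1 makes the map contractive
    when the weights form a convex combination."""
    m, n = x.shape
    y = np.empty_like(x)
    for i in range(m):
        for j in range(n):
            acc = sum(w * x[(i + di) % m, (j + dj) % n]
                      for (di, dj), w in zip(offsets, weights))
            y[i, j] = s * acc + bias[i, j]
    return y

def limit_state(x0, bias, iters=100):
    """Iterate the network from x0; the limit (fixed point) plays the role
    of the decoded image."""
    offsets = [(0, 0), (0, 1), (1, 0)]   # far fewer than N input links per pixel
    weights = [1/3, 1/3, 1/3]            # convex combination, scaled by s < 1
    x = x0
    for _ in range(iters):
        x = pnn_step(x, offsets, weights, bias)
    return x
```

Because the effective Lipschitz constant of one step is s = 0.7, the distance to the fixed point shrinks geometrically, so two different initial images yield (numerically) the same limit state after a modest number of iterations.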


Face Recognition · Discrete Cosine Transform · Image Compression · Associative Memory · Recurrent Neural Network



Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Wladyslaw Skarbek
  1. Multimedia Laboratory, Faculty of Electronics and Information Technology, Warsaw University of Technology, Warszawa, Poland
