Weight Quantization for Multi-layer Perceptrons Using Soft Weight Sharing

  • Conference paper
Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 2130))

Abstract

We propose a novel approach for quantizing the weights of a multi-layer perceptron (MLP) for efficient VLSI implementation. Our approach uses soft weight sharing, previously proposed for improved generalization, and considers the weights not as constant numbers but as random variables drawn from a Gaussian mixture distribution, which includes k-means clustering and uniform quantization as special cases. This approach couples the training of weights for reduced error with their quantization. Simulations on synthetic and real regression and classification data sets compare various quantization schemes and demonstrate the advantage of the coupled training of the distribution parameters.
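The Gaussian-mixture prior at the heart of this approach can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the parameter names (`pi`, `mu`, `sigma`) and the helper functions are hypothetical, and the mixture here is kept to three fixed components for clarity.

```python
import numpy as np

def sws_penalty(w, pi, mu, sigma):
    """Soft-weight-sharing penalty: negative log-likelihood of the weights
    under a Gaussian mixture, plus the per-component responsibilities.
    w: (n,) network weights; pi, mu, sigma: (K,) mixture parameters."""
    w = w[:, None]  # broadcast weights against the K components
    comp = pi * np.exp(-0.5 * ((w - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    p = comp.sum(axis=1)            # mixture density at each weight
    r = comp / p[:, None]           # responsibilities r_ik of component k for weight i
    return -np.log(p).sum(), r

def quantize(w, mu, r):
    """Hard quantization: snap each weight to the mean of its most
    responsible mixture component."""
    return mu[np.argmax(r, axis=1)]

# Toy example: six weights loosely clustered around three quantization levels.
weights = np.array([-0.9, -1.1, 0.05, -0.02, 1.02, 0.98])
pi = np.array([1 / 3, 1 / 3, 1 / 3])      # equal mixing priors
mu = np.array([-1.0, 0.0, 1.0])           # candidate quantization levels
sigma = np.array([0.2, 0.2, 0.2])         # common cluster width

nll, r = sws_penalty(weights, pi, mu, sigma)
print(quantize(weights, mu, r))  # each weight snaps to its nearest level
```

In the coupled scheme the abstract describes, this penalty would be added (with a trade-off coefficient) to the task error, and gradient descent would update the weights and the mixture parameters jointly, so the quantization levels adapt during training. Fixing equal priors and a shared shrinking variance recovers k-means-style clustering of the weights, and fixing equally spaced means recovers uniform quantization, which is presumably how those special cases arise.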





Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

Cite this paper

Köksal, F., Alpaydın, E., Dündar, G. (2001). Weight Quantization for Multi-layer Perceptrons Using Soft Weight Sharing. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_30

  • DOI: https://doi.org/10.1007/3-540-44668-0_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42486-4

  • Online ISBN: 978-3-540-44668-2
