Abstract
We propose a novel approach to quantizing the weights of a multi-layer perceptron (MLP) for efficient VLSI implementation. Our approach builds on soft weight sharing, previously proposed for improved generalization, and treats the weights not as constants but as random variables drawn from a Gaussian mixture distribution, which includes k-means clustering and uniform quantization as special cases. This approach couples the training of the weights for reduced error with their quantization. Simulations on synthetic and real regression and classification data sets compare various quantization schemes and demonstrate the advantage of the coupled training of the distribution parameters.
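As a rough illustration of the idea behind soft weight sharing as described in the abstract, the sketch below fits a one-dimensional Gaussian mixture to a set of trained weights with EM and then snaps each weight to the mean of its most responsible component. This is a hypothetical post-training variant for illustration only, not the paper's coupled training procedure; the function name and all parameters are assumptions.

```python
import numpy as np

def soft_weight_share_quantize(weights, n_components=4, n_iters=50):
    """Illustrative sketch: fit a 1-D Gaussian mixture to the weights
    with EM, then quantize each weight to its most responsible
    component mean. Not the paper's exact (coupled) procedure."""
    w = np.asarray(weights, dtype=float).ravel()
    # Initialise means across the weight range, equal priors, common variance.
    mu = np.linspace(w.min(), w.max(), n_components)
    var = np.full(n_components, w.var() / n_components + 1e-8)
    pi = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iters):
        # E-step: responsibility of each component for each weight.
        d = w[:, None] - mu[None, :]
        logp = np.log(pi) - 0.5 * np.log(2 * np.pi * var) - 0.5 * d**2 / var
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixture priors, means, and variances.
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(w)
        mu = (r * w[:, None]).sum(axis=0) / nk
        var = (r * (w[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
    # Hard quantization: snap each weight to its most responsible mean.
    codes = np.argmax(r, axis=1)
    return mu[codes], mu
```

With a single shared variance and fixed, equally spaced means this reduces to uniform quantization, and with hard (0/1) responsibilities it reduces to k-means clustering, which is the sense in which the Gaussian mixture view generalizes both.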
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Köksal, F., Alpaydin, E., Dündar, G. (2001). Weight Quantization for Multi-layer Perceptrons Using Soft Weight Sharing. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_30
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42486-4
Online ISBN: 978-3-540-44668-2
eBook Packages: Springer Book Archive