Abstract
In this chapter we describe MUME, a MUlti-Module Environment for neural computing development and simulation. MUME provides an efficient, flexible and modular environment in which multiple networks and multiple algorithms can be used and combined with non-neural information processing systems. MUME supports both dynamic (time-dependent) and static neural networks. It has an object-oriented structure in which new neural network classes can be easily added and in which the new classes can make use of MUME's existing library.
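The abstract does not show MUME's interface, so the following C sketch is only a hypothetical illustration of the kind of modular, class-based structure it describes: each network class exposes a common table of operations, and a new class plugs into the environment's registry and can reuse the shared library routines. All identifiers here (NetClass, register_class, LinearNet, and so on) are assumptions made for illustration and are not taken from MUME.

```c
/*
 * Hypothetical plug-in interface in the spirit of the modular,
 * object-oriented structure described in the abstract.
 * Names are illustrative assumptions, not MUME's actual API.
 */
#include <stdio.h>
#include <stdlib.h>

/* A "network class": the table of operations every net module provides. */
typedef struct NetClass {
    const char *name;
    void *(*create)(int n_in, int n_out);                      /* allocate a net instance */
    void  (*forward)(void *net, const double *in, double *out);/* one forward pass        */
    void  (*destroy)(void *net);                               /* free the instance       */
} NetClass;

/* A tiny registry standing in for the simulation environment. */
#define MAX_CLASSES 16
static const NetClass *registry[MAX_CLASSES];
static int n_classes = 0;

static void register_class(const NetClass *cls) {
    if (n_classes < MAX_CLASSES)
        registry[n_classes++] = cls;
}

/* --- Example class: a single-layer linear net ----------------------- */
typedef struct {
    int n_in, n_out;
    double *w;               /* n_out x n_in weight matrix, row major */
} LinearNet;

static void *linear_create(int n_in, int n_out) {
    LinearNet *net = malloc(sizeof *net);
    net->n_in = n_in;
    net->n_out = n_out;
    net->w = calloc((size_t)n_in * n_out, sizeof *net->w);
    return net;
}

static void linear_forward(void *p, const double *in, double *out) {
    LinearNet *net = p;
    for (int o = 0; o < net->n_out; o++) {
        out[o] = 0.0;
        for (int i = 0; i < net->n_in; i++)
            out[o] += net->w[o * net->n_in + i] * in[i];
    }
}

static void linear_destroy(void *p) {
    LinearNet *net = p;
    free(net->w);
    free(net);
}

static const NetClass linear_class = {
    "linear", linear_create, linear_forward, linear_destroy
};

int main(void) {
    register_class(&linear_class);

    /* Instantiate and run through the registry, as the environment would. */
    const NetClass *cls = registry[0];
    double in[2] = {1.0, -1.0}, out[1];
    void *net = cls->create(2, 1);
    cls->forward(net, in, out);
    printf("%s output: %f\n", cls->name, out[0]);
    cls->destroy(net);
    return 0;
}
```

The design choice sketched here, dispatch through a per-class table of function pointers, is one common way to let new network types be added without modifying the core simulator, which is the property the abstract attributes to MUME.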
© 1994 Springer Science+Business Media New York
Cite this chapter
Jabri, M.A., Tinker, E.A., Leerink, L. (1994). MUME — A Multi-Net Multi-Architecture Neural Simulation Environment. In: Skrzypek, J. (eds) Neural Network Simulation Environments. The Kluwer International Series in Engineering and Computer Science, vol 254. Springer, Boston, MA. https://doi.org/10.1007/978-1-4615-2736-7_12
Print ISBN: 978-1-4613-6180-0
Online ISBN: 978-1-4615-2736-7