An Approximate Algorithm for Reverse Engineering of Multi-layer Perceptrons

  • Wojtek Kowalczyk
Conference paper

Abstract

We present an approximate algorithm for reconstructing the internals of multi-layer perceptrons from membership queries. The key component of the algorithm is a procedure for reconstructing the weights of a single linear threshold unit. We prove that the approximation error, measured as the distance between the original and the reconstructed weights, drops exponentially fast with the number of queries. The procedure is combined with a labelling strategy that involves solving multiple Linear Programming problems. This combination yields an algorithm that extracts the internals of multi-layer perceptrons: the number of units in the first hidden layer, their weights, and the boolean function computed by the remaining nodes. In practice, networks that compute boolean combinations of 10–15 hyperplanes can be reconstructed in several minutes.
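To make the core idea concrete, the Python sketch below is illustrative only and is not the paper's procedure. It assumes a noise-free membership oracle for a single linear threshold unit, locates near-boundary points by binary search (each query halves the uncertainty, which is why the error drops exponentially with the number of queries), and then fits the hyperplane through those points. All names (`threshold_unit`, `locate_boundary`) are hypothetical.

```python
import numpy as np

def threshold_unit(w, b):
    """Membership oracle: returns True iff w.x + b >= 0."""
    return lambda x: bool(np.dot(w, x) + b >= 0.0)

def locate_boundary(query, x_neg, x_pos, n_queries=30):
    """Binary search between points of opposite label.

    Each query halves the interval, so after k queries the returned
    point lies within ||x_pos - x_neg|| / 2**k of the hyperplane.
    """
    lo, hi = x_neg, x_pos
    for _ in range(n_queries):
        mid = 0.5 * (lo + hi)
        if query(mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
w_true, b_true = rng.normal(size=3), 0.3
oracle = threshold_unit(w_true, b_true)

# Collect n + 1 = 4 near-boundary points from random segments whose
# endpoints receive opposite labels from the oracle.
points = []
while len(points) < 4:
    a, c = rng.normal(size=3), rng.normal(size=3)
    if oracle(a) != oracle(c):
        x_neg, x_pos = (a, c) if oracle(c) else (c, a)
        points.append(locate_boundary(oracle, x_neg, x_pos))

# Each boundary point satisfies w.x + b ~ 0, so (w, b) spans the null
# space of [X | 1]; take the right singular vector with the smallest
# singular value.
A = np.hstack([np.array(points), np.ones((len(points), 1))])
w_hat = np.linalg.svd(A)[2][-1]

# Compare directions: weights are only recoverable up to scale and sign.
u = w_hat[:3] / np.linalg.norm(w_hat[:3])
v = w_true / np.linalg.norm(w_true)
print(min(np.linalg.norm(u - v), np.linalg.norm(u + v)))  # close to 0
```

Recovering a full first hidden layer, as in the paper, additionally requires attributing boundary points to the correct hyperplane, which is where the LP-based labelling strategy comes in.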

Keywords

Hidden Layer · Approximate Algorithm · Labelling Function · Labelling Procedure · Boolean Combination

Copyright information

© Springer-Verlag London 2004

Authors and Affiliations

  • Wojtek Kowalczyk
  1. Department of Artificial Intelligence, Free University Amsterdam, Amsterdam, The Netherlands