
Learning Parameters of Linear Models in Compressed Parameter Space

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2012

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7553)


Abstract

We present a novel method for reducing training time by learning the parameters of a model in a compressed parameter space, in which the model's parameters are represented by fewer parameters, so that training can be faster. After training, the parameters of the model can be generated from the parameters in the compressed space. We show that, for supervised learning, learning the parameters of a model in compressed parameter space is equivalent to learning them in compressed input space. We apply our method to a supervised learning domain and show that a solution can be obtained much faster than by learning in the uncompressed parameter space. For reinforcement learning, we show empirically that directly searching the parameters of a policy in compressed parameter space accelerates learning.

This work was supported by the German Bundesministerium für Wirtschaft und Technologie (BMWi, grant FKZ 50 RA 1012 and grant FKZ 50 RA 1011).
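
The construction the abstract describes can be sketched in a few lines. The sketch below is ours, not the authors' code: it assumes the model weights w (d-dimensional) are generated as w = Phi @ alpha from k << d compressed parameters alpha, and it picks a discrete cosine basis for the fixed decompression matrix Phi purely for illustration; all variable names are likewise illustrative. Because X @ (Phi @ alpha) equals (X @ Phi) @ alpha, fitting alpha against the compressed inputs X @ Phi is the same least-squares problem as fitting the full weights restricted to the span of Phi, which mirrors the equivalence stated in the abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression problem: n samples, d-dimensional inputs, smooth true weights.
    n, d, k = 200, 100, 10                       # k << d compressed parameters
    X = rng.normal(size=(n, d))
    w_true = np.sin(np.linspace(0.0, np.pi, d))  # smooth, hence compressible
    y = X @ w_true + 0.01 * rng.normal(size=n)

    # Fixed decompression matrix Phi (d x k): the first k DCT-II basis vectors.
    # The basis choice is an assumption of this sketch, not taken from the paper.
    rows = np.arange(d)[:, None]
    freqs = np.arange(k)[None, :]
    Phi = np.cos(np.pi * (rows + 0.5) * freqs / d)
    Phi /= np.linalg.norm(Phi, axis=0)           # orthonormal columns

    # Learning in compressed parameter space: solve for the k parameters alpha
    # on the compressed inputs X @ Phi instead of the d weights on X.
    Z = X @ Phi                                  # compressed input space (n x k)
    alpha, *_ = np.linalg.lstsq(Z, y, rcond=None)

    # Decompress: generate the full d-dimensional weight vector from alpha.
    w_hat = Phi @ alpha
    print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))

With k = 10 free parameters instead of d = 100, the least-squares problem is an order of magnitude smaller. For the reinforcement-learning case, the same decompression map would let a direct policy search (for instance, an evolution strategy) explore the k compressed parameters of a policy rather than its full weight vector.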




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kassahun, Y., Wöhrle, H., Fabisch, A., Tabie, M. (2012). Learning Parameters of Linear Models in Compressed Parameter Space. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds) Artificial Neural Networks and Machine Learning – ICANN 2012. ICANN 2012. Lecture Notes in Computer Science, vol 7553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33266-1_14


  • DOI: https://doi.org/10.1007/978-3-642-33266-1_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33265-4

  • Online ISBN: 978-3-642-33266-1

  • eBook Packages: Computer Science
