Learning Parameters of Linear Models in Compressed Parameter Space

  • Yohannes Kassahun
  • Hendrik Wöhrle
  • Alexander Fabisch
  • Marc Tabie
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7553)


We present a novel method for reducing training time by learning the parameters of a model in a compressed parameter space. In the compressed parameter space the model is represented by fewer parameters, so training can be faster. After training, the full parameters of the model can be recovered from the compressed parameters. We show that, for supervised learning, learning the parameters of a model in compressed parameter space is equivalent to learning them in compressed input space. We apply our method to a supervised learning problem and show that a solution can be obtained much faster than by learning in the uncompressed parameter space. For reinforcement learning, we show empirically that searching directly for the parameters of a policy in compressed parameter space accelerates learning.
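The stated equivalence between learning in compressed parameter space and learning in compressed input space is easy to illustrate for a linear model. The following is a minimal sketch in Python, not taken from the paper: it assumes a fixed random Gaussian projection matrix Phi as the compression (the paper's choice of basis may differ), fits the low-dimensional coefficients alpha by ordinary least squares on the compressed inputs, and reconstructs the full weight vector as w = Phi alpha after training.

    # Minimal sketch (not the authors' implementation): the weight vector w of a
    # linear model lives in a compressed parameter space spanned by a fixed
    # projection matrix Phi, i.e. w = Phi @ alpha with dim(alpha) << dim(w).
    # Since f(x) = w.T x = alpha.T (Phi.T x), fitting alpha amounts to ordinary
    # least squares on the compressed inputs Phi.T x.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_features, n_compressed = 200, 1000, 20

    # Synthetic linear regression data (hypothetical, for illustration only).
    w_true = rng.normal(size=n_features)
    X = rng.normal(size=(n_samples, n_features))
    y = X @ w_true + 0.01 * rng.normal(size=n_samples)

    # Fixed compression matrix: here a random Gaussian projection (assumption).
    Phi = rng.normal(size=(n_features, n_compressed)) / np.sqrt(n_compressed)

    # Learning in compressed parameter space == least squares in compressed input space.
    Z = X @ Phi                                    # compressed inputs, shape (n_samples, n_compressed)
    alpha, *_ = np.linalg.lstsq(Z, y, rcond=None)  # only n_compressed parameters are fitted
    w_hat = Phi @ alpha                            # reconstruct the full weight vector after training

    print("mean squared training error:", np.mean((X @ w_hat - y) ** 2))

Because only alpha is optimized, the cost of each training step scales with the compressed dimension rather than the original number of parameters.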


Keywords: Compressed Sensing · Supervised Learning · Reinforcement Learning





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Yohannes Kassahun (1)
  • Hendrik Wöhrle (2)
  • Alexander Fabisch (1)
  • Marc Tabie (1)
  1. Robotics Group, University of Bremen, Bremen, Germany
  2. Robotics Innovation Center, DFKI GmbH, Bremen, Germany
