Related Tasks Selection to Multitask Learning Schemes

  • Conference paper
Bioinspired Computation in Artificial Systems (IWINAC 2015)

Abstract

In Multitask Learning (MTL), a task is learned together with other related tasks, producing a transfer of information between them that can be advantageous for learning the main task. However, problems can rarely be solved under an MTL scheme, because data satisfying the conditions that such a scheme requires are seldom available. This paper presents a method to detect the tasks related to the main one, making it possible to implement a multitask learning scheme. The method exploits the advantages of the Extreme Learning Machine and selects the secondary tasks without trial-and-error methodologies that would increase the computational complexity.
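
For readers unfamiliar with the building block, the sketch below shows a standard single-hidden-layer Extreme Learning Machine (random, fixed hidden-layer weights; output weights solved analytically with the Moore-Penrose pseudoinverse) together with a purely illustrative task-relatedness score. The relatedness criterion shown here (absolute correlation between a candidate task's ELM predictions and the main task's targets) is an assumption made only for this example and is not the selection criterion proposed in the paper; the helper names train_elm, elm_predict, and task_relatedness are likewise hypothetical.

    import numpy as np

    def train_elm(X, y, n_hidden=50, seed=None):
        # Standard ELM training step: random, fixed hidden-layer parameters,
        # output weights obtained analytically via the pseudoinverse.
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
        b = rng.normal(size=n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                        # hidden-layer outputs
        beta = np.linalg.pinv(H) @ y                  # least-squares output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    def task_relatedness(X, y_main, y_candidate, n_hidden=50, seed=0):
        # Illustrative proxy only (not the paper's criterion): correlation
        # between the candidate task's ELM predictions and the main targets.
        W, b, beta = train_elm(X, y_candidate, n_hidden, seed)
        preds = elm_predict(X, W, b, beta)
        return abs(np.corrcoef(preds, y_main)[0, 1])

    # Toy usage: rank candidate secondary tasks before building an MTL model.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y_main = X @ rng.normal(size=5)
    candidates = {
        "related": y_main + 0.2 * rng.normal(size=200),
        "unrelated": rng.normal(size=200),
    }
    scores = {name: task_relatedness(X, y_main, y) for name, y in candidates.items()}
    print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

In an actual MTL scheme, a candidate scoring high under the authors' criterion would be added as an extra output of a shared network so that the main task benefits from the transferred information; the selection rule itself is the contribution of the paper and should be taken from the full text.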

Author information

Corresponding author

Correspondence to Andrés Bueno-Crespo.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Bueno-Crespo, A., Menchón-Lara, RM., Sancho-Gómez, JL. (2015). Related Tasks Selection to Multitask Learning Schemes. In: Ferrández Vicente, J., Álvarez-Sánchez, J., de la Paz López, F., Toledo-Moreo, F., Adeli, H. (eds) Bioinspired Computation in Artificial Systems. IWINAC 2015. Lecture Notes in Computer Science, vol 9108. Springer, Cham. https://doi.org/10.1007/978-3-319-18833-1_23

  • DOI: https://doi.org/10.1007/978-3-319-18833-1_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-18832-4

  • Online ISBN: 978-3-319-18833-1

  • eBook Packages: Computer Science (R0)
