Abstract
In Multitask Learning (MTL), a task is learned together with other related tasks, producing a transfer of information between them that can benefit the learning of the main task. In practice, however, a problem can rarely be solved under an MTL scheme, since data satisfying the conditions that MTL requires are seldom available. This paper presents a method to detect tasks related to the main one, making it possible to implement a multitask learning scheme. The method exploits the advantages of the Extreme Learning Machine and selects the secondary tasks without trial-and-error methodologies that increase the computational complexity.
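The abstract relies on the Extreme Learning Machine (ELM) without defining it. In the standard ELM, a single-hidden-layer network is trained by drawing the hidden-layer weights at random, leaving them fixed, and solving the output weights in closed form with the Moore-Penrose pseudoinverse. A minimal sketch of that standard algorithm follows (the function names, tanh activation, and toy data are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Basic ELM: random fixed hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-layer weights and biases are random and never updated.
    W = rng.normal(size=(n_features, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activation matrix
    # Output weights minimize ||H @ beta - T||^2 via the pseudoinverse.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = x1 + x2 from random samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
T = X.sum(axis=1, keepdims=True)
W, b, beta = elm_train(X, T, n_hidden=50, seed=1)
mse = float(np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Because only the output layer is solved (a single linear least-squares problem), training is very fast compared with backpropagation, which is the "advantage" the abstract alludes to when selecting secondary tasks.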
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Bueno-Crespo, A., Menchón-Lara, RM., Sancho-Gómez, JL. (2015). Related Tasks Selection to Multitask Learning Schemes. In: Ferrández Vicente, J., Álvarez-Sánchez, J., de la Paz López, F., Toledo-Moreo, F., Adeli, H. (eds) Bioinspired Computation in Artificial Systems. IWINAC 2015. Lecture Notes in Computer Science(), vol 9108. Springer, Cham. https://doi.org/10.1007/978-3-319-18833-1_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-18832-4
Online ISBN: 978-3-319-18833-1