A Deep Interpretation of Classifier Chains
In the “classifier chains” (CC) approach to multi-label classification, the predictions of binary classifiers are cascaded along a chain as additional features. This method has attained high predictive performance and is receiving increasing analysis and attention in the recent multi-label literature, although a deep understanding of its performance is still taking shape. In this paper, we show that CC gains predictive power from leveraging labels as additional stochastic features, in contrast with many other methods, such as stacking and error-correcting output codes, which use label dependence only as a kind of regularization. CC methods can learn a concept which these cannot, even supposing the same base classifier and hypothesis space. This leads us to connections with deep learning (indeed, we show that CC is competitive precisely because it is a deep learner), and we employ deep learning methods, showing that they can supplement or even replace a classifier chain. Results are convincing, and offer new insight into promising future directions.
Keywords: Binary Relevance · Extreme Learning Machine · Deep Learning · Hypothesis Space · Restricted Boltzmann Machine
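The chaining mechanism the abstract describes — each binary classifier receiving the previous labels in the chain as extra features at training time, and the previous *predictions* at test time — can be sketched in a few lines. This is a minimal pure-NumPy illustration, not the paper's implementation; the `Perceptron` base learner and class names are assumptions chosen to keep the example self-contained.

```python
import numpy as np

class Perceptron:
    """Tiny binary base classifier (labels in {0, 1}); stands in for
    any base learner — this choice is illustrative, not from the paper."""
    def fit(self, X, y):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
        self.w = np.zeros(Xb.shape[1])
        for _ in range(50):  # plain perceptron updates
            for xi, yi in zip(Xb, y):
                self.w += (yi - float(xi @ self.w > 0)) * xi
        return self

    def predict(self, X):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return (Xb @ self.w > 0).astype(float)

class ClassifierChain:
    """CC: classifier j sees the original features plus labels 0..j-1
    as additional inputs."""
    def __init__(self, base=Perceptron):
        self.base = base

    def fit(self, X, Y):
        self.models = []
        for j in range(Y.shape[1]):
            # train on the *true* earlier labels, appended as features
            Xj = np.hstack([X, Y[:, :j]])
            self.models.append(self.base().fit(Xj, Y[:, j]))
        return self

    def predict(self, X):
        P = np.zeros((X.shape[0], len(self.models)))
        for j, m in enumerate(self.models):
            # cascade earlier *predictions* down the chain as features
            P[:, j] = m.predict(np.hstack([X, P[:, :j]]))
        return P
```

Note how the label-as-feature cascade distinguishes CC from binary relevance, which would train each label's classifier on `X` alone; this is the extra stochastic-feature mechanism the paper analyzes.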