Abstract
In multi-instance learning, the training set comprises labeled bags composed of unlabeled instances, and the task is to predict the labels of unseen bags. By analyzing two well-known multi-instance learning algorithms, this paper shows that many supervised learning algorithms can be adapted to multi-instance learning, provided their focus is shifted from discrimination on the instances to discrimination on the bags. Moreover, since ensemble learning paradigms can effectively enhance supervised learners, this paper proposes building ensembles of multi-instance learners to solve multi-instance problems. Experiments on a real-world benchmark show that ensemble learning paradigms can significantly enhance multi-instance learners, and the result achieved by the EM-DD ensemble exceeds the best result on this benchmark reported in the literature.
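The shift the abstract describes, from instance-level to bag-level discrimination, can be illustrated with a minimal sketch. It assumes the standard multi-instance assumption (a bag is positive iff it contains at least one positive instance) and uses bag-level bagging with majority voting as the ensemble paradigm; the function names (`build_learner`, `ensemble_predict`) and the stub base learner are illustrative, not the paper's actual EM-DD-based method.

```python
import random

def bag_score(instance_scores):
    # Standard multi-instance assumption: a bag is positive if it
    # contains at least one positive instance, so the bag-level
    # score is the maximum instance-level score.
    return max(instance_scores)

def predict_bag(learner, bag):
    # Shift discrimination from instances to bags: score each
    # instance with the base learner, then aggregate.
    return 1 if bag_score([learner(x) for x in bag]) >= 0.5 else 0

def bagging_ensemble(train_bags, labels, build_learner, n_learners=5, seed=0):
    # Bagging at the bag level: each base multi-instance learner is
    # trained on a bootstrap sample of the labeled bags.
    rng = random.Random(seed)
    learners = []
    for _ in range(n_learners):
        idx = [rng.randrange(len(train_bags)) for _ in train_bags]
        sample = [(train_bags[i], labels[i]) for i in idx]
        learners.append(build_learner(sample))
    return learners

def ensemble_predict(learners, bag):
    # Combine the base multi-instance learners by majority vote.
    votes = sum(predict_bag(h, bag) for h in learners)
    return 1 if 2 * votes >= len(learners) else 0

# Toy usage with a stub base learner that thresholds the first feature;
# a real base learner would be fit to the bootstrap sample.
def build_stub(sample):
    return lambda x: 1.0 if x[0] > 0.5 else 0.0

bags = [[(0.1,), (0.9,)], [(0.2,), (0.3,)]]   # one positive, one negative bag
labels = [1, 0]
hs = bagging_ensemble(bags, labels, build_stub, n_learners=3)
print(ensemble_predict(hs, [(0.1,), (0.8,)]))  # bag with a positive instance
print(ensemble_predict(hs, [(0.2,), (0.3,)]))  # bag with no positive instance
```

Note that only the aggregation step differs from ordinary bagging: resampling and voting operate on whole bags, so any supervised learner wrapped this way never needs instance labels.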
Keywords
- Inductive Logic Programming
- Supervised Learning Algorithm
- Maximum Posterior Probability
- Diverse Density
- Predictive Error Rate
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhou, Z.-H., Zhang, M.-L. (2003). Ensembles of Multi-instance Learners. In: Lavrač, N., Gamberger, D., Blockeel, H., Todorovski, L. (eds.) Machine Learning: ECML 2003. Lecture Notes in Computer Science, vol. 2837. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39857-8_44
DOI: https://doi.org/10.1007/978-3-540-39857-8_44
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-20121-2
Online ISBN: 978-3-540-39857-8