Ensembles of Multi-instance Learners

  • Zhi-Hua Zhou
  • Min-Ling Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2837)

Abstract

In multi-instance learning, the training set comprises labeled bags composed of unlabeled instances, and the task is to predict the labels of unseen bags. By analyzing two well-known multi-instance learning algorithms, this paper shows that many supervised learning algorithms can be adapted to multi-instance learning, provided that their focus is shifted from discriminating among the instances to discriminating among the bags. Moreover, since ensemble learning paradigms can effectively enhance supervised learners, this paper proposes building ensembles of multi-instance learners to solve multi-instance problems. Experiments on a real-world benchmark test show that ensemble learning paradigms can significantly enhance multi-instance learners, and the result achieved by the EM-DD ensemble exceeds the best result on this benchmark reported in the literature.
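The adaptation the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's method: under the standard multi-instance assumption a bag is positive iff it contains at least one positive instance, so an instance-level learner becomes a bag-level learner by aggregating its predictions over the bag, and several such learners can then be combined by majority vote as in bagging. The threshold scorers below are hypothetical stand-ins for real base learners such as Diverse Density or EM-DD.

```python
def make_instance_scorer(threshold):
    """Toy instance-level classifier: positive if the feature exceeds the threshold."""
    return lambda x: 1 if x > threshold else 0

def predict_bag(scorer, bag):
    """Shift the discrimination to the bag level: a bag is positive
    iff any of its instances is predicted positive."""
    return max(scorer(x) for x in bag)

def ensemble_predict(scorers, bag):
    """Majority vote over bag-level learners, in the spirit of bagging."""
    votes = sum(predict_bag(s, bag) for s in scorers)
    return 1 if 2 * votes > len(scorers) else 0

# Three slightly different base learners, as bagging would produce
# from resampled training sets (thresholds chosen for illustration).
scorers = [make_instance_scorer(t) for t in (0.4, 0.5, 0.6)]

positive_bag = [0.1, 0.2, 0.9]  # one instance exceeds every threshold
negative_bag = [0.1, 0.2, 0.3]  # no instance exceeds any threshold
```

Here `ensemble_predict(scorers, positive_bag)` returns 1 and `ensemble_predict(scorers, negative_bag)` returns 0: the single high-valued instance is enough to flip its whole bag positive, which is exactly the bag-level decision rule the instance-level learner lacked.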

Keywords

Inductive Logic Programming · Supervised Learning Algorithm · Maximum Posterior Probability · Diverse Density · Predictive Error Rate

References

  1. Amar, R.A., Dooly, D.R., Goldman, S.A., Zhang, Q.: Multiple-instance learning of real-valued data. In: Proceedings of the 18th International Conference on Machine Learning, Williamstown, MA, pp. 3–10 (2001)
  2. Auer, P.: On learning from multi-instance examples: empirical evaluation of a theoretical approach. In: Proceedings of the 14th International Conference on Machine Learning, Nashville, TN, pp. 21–29 (1997)
  3. Auer, P., Long, P.M., Srinivasan, A.: Approximating hyper-rectangles: learning and pseudo-random sets. Journal of Computer and System Sciences 57, 376–388 (1998)
  4. Blake, C., Keogh, E., Merz, C.J.: UCI repository of machine learning databases. Department of Information and Computer Science, University of California, Irvine, CA (1998), http://www.ics.uci.edu/~mlearn/MLRepository.html
  5. Blum, A., Kalai, A.: A note on learning from multiple-instance examples. Machine Learning 30, 23–29 (1998)
  6. Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996)
  7. Breiman, L.: Bias, variance, and arcing classifiers. Technical Report 460, Statistics Department, University of California, Berkeley, CA (1996)
  8. Chevaleyre, Y., Zucker, J.-D.: Solving multiple-instance and multiple-part learning problems with decision trees and rule sets. Application to the mutagenesis problem. In: Stroulia, E., Matwin, S. (eds.) Canadian AI 2001. LNCS (LNAI), vol. 2056, pp. 204–214. Springer, Heidelberg (2001)
  9. De Raedt, L.: Attribute-value learning versus inductive logic programming: the missing links. In: Page, D.L. (ed.) ILP 1998. LNCS (LNAI), vol. 1446, pp. 1–8. Springer, Heidelberg (1998)
  10. Dietterich, T.G.: Machine learning research: four current directions. AI Magazine 18, 97–136 (1997)
  11. Dietterich, T.G., Lathrop, R.H., Lozano-Pérez, T.: Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence 89, 31–71 (1997)
  12. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. In: Proceedings of the 2nd European Conference on Computational Learning Theory, Barcelona, Spain, pp. 23–37 (1995)
  13. Long, P.M., Tan, L.: PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. Machine Learning 30, 7–21 (1998)
  14. Maron, O., Lozano-Pérez, T.: A framework for multiple-instance learning. In: Jordan, M.I., Kearns, M.J., Solla, S.A. (eds.) Advances in Neural Information Processing Systems, vol. 10, pp. 570–576. MIT Press, Cambridge (1998)
  15. Maron, O., Ratan, A.L.: Multiple-instance learning for natural scene classification. In: Proceedings of the 15th International Conference on Machine Learning, Madison, WI, pp. 341–349 (1998)
  16. Ray, S., Page, D.: Multiple instance regression. In: Proceedings of the 18th International Conference on Machine Learning, Williamstown, MA, pp. 425–432 (2001)
  17. Ruffo, G.: Learning single and multiple instance decision trees for computer security applications. PhD dissertation, Department of Computer Science, University of Turin, Torino, Italy (2000)
  18. Wang, J., Zucker, J.-D.: Solving the multiple-instance problem: a lazy learning approach. In: Proceedings of the 17th International Conference on Machine Learning, San Francisco, CA, pp. 1119–1125 (2000)
  19. Webb, G.I.: MultiBoosting: a technique for combining Boosting and Wagging. Machine Learning 40, 159–196 (2000)
  20. Yang, C., Lozano-Pérez, T.: Image database retrieval with multiple-instance learning techniques. In: Proceedings of the 16th International Conference on Data Engineering, San Diego, CA, pp. 233–243 (2000)
  21. Zhang, Q., Goldman, S.A.: EM-DD: an improved multi-instance learning technique. In: Dietterich, T.G., Becker, S., Ghahramani, Z. (eds.) Advances in Neural Information Processing Systems, vol. 14, pp. 1073–1080. MIT Press, Cambridge (2002)
  22. Zhou, Z.-H., Wu, J., Tang, W.: Ensembling neural networks: many could be better than all. Artificial Intelligence 137, 239–263 (2002)
  23. Zhou, Z.-H., Zhang, M.-L.: Neural networks for multi-instance learning. In: Proceedings of the International Conference on Intelligent Information Technology, Beijing, China, pp. 455–459 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Zhi-Hua Zhou¹
  • Min-Ling Zhang¹
  1. National Laboratory for Novel Software Technology, Nanjing University, Nanjing, China