
Multiple Instance Learning Based on Twin Support Vector Machine

  • Divya Tomar
  • Sonali Agarwal
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 553)

Abstract

In multiple instance learning (MIL), each input object is represented by a set of instances, referred to as a ‘bag’; class labels are therefore associated with bags rather than with individual instances. This study proposes a MIL classifier based on the Twin Support Vector Machine, termed MIL-TWSVM. The proposed approach is trained at the bag level, with each bag represented by a vector of its dissimilarities to the other bags in the training set. MIL-TWSVM is compared with instance-level and noisy-or (NOR) learning approaches based on TWSVM, as well as with several existing multiple instance learning methods. Experiments on eight multiple instance benchmark datasets demonstrate the superiority of the proposed approach, and the significance of the experimental results is verified statistically using Friedman’s statistic and the Nemenyi post hoc test.
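
The bag-level representation described in the abstract can be illustrated with a short Python sketch. This is not the paper’s implementation: the dissimilarity between two bags is computed here as the mean distance from each instance of one bag to its nearest instance in the other (one common choice in the bag-dissimilarity literature, assumed for illustration), and a standard linear SVM stands in for TWSVM, for which no stock scikit-learn implementation exists.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import LinearSVC


def bag_dissimilarity(bag_a, bag_b):
    # Dissimilarity between two bags (arrays of shape (n_a, d) and (n_b, d)):
    # average, over instances of bag_a, of the distance to the closest
    # instance in bag_b. This measure is an illustrative assumption, not
    # necessarily the one used in the paper.
    d = cdist(bag_a, bag_b)          # pairwise instance-to-instance distances
    return d.min(axis=1).mean()


def embed_bags(bags, reference_bags):
    # Represent each bag as the vector of its dissimilarities to the
    # reference (training) bags, as the abstract describes.
    return np.array([[bag_dissimilarity(b, r) for r in reference_bags]
                     for b in bags])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 20 training bags and 5 test bags, each holding 3-8 instances
    # with 5 features; labels are assigned per bag, not per instance.
    train_bags = [rng.normal(size=(rng.integers(3, 9), 5)) for _ in range(20)]
    train_labels = rng.integers(0, 2, size=20)
    test_bags = [rng.normal(size=(rng.integers(3, 9), 5)) for _ in range(5)]

    X_train = embed_bags(train_bags, train_bags)   # bag-level feature vectors
    X_test = embed_bags(test_bags, train_bags)

    clf = LinearSVC().fit(X_train, train_labels)   # stand-in for TWSVM
    print(clf.predict(X_test))

In the proposed method, a Twin SVM classifier would be trained on the same bag-level vectors X_train in place of the linear SVM used above.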

Keywords

Single instance learning · Multiple instance learning · Bag dissimilarity · Twin support vector machine


Copyright information

© Springer Nature Singapore Pte Ltd. 2017

Authors and Affiliations

  1. Indian Institute of Information Technology, Allahabad, India
