
Robust Semi-supervised Multi-label Learning by Triple Low-Rank Regularization

  • Lijuan Sun
  • Songhe Feng (corresponding author)
  • Gengyu Lyu
  • Congyan Lang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11440)

Abstract

Multi-Label Learning (MLL) deals with the problem where each instance is associated with multiple labels simultaneously. Previous methods have shown promising performance by effectively exploiting the semantic correlations among different labels. However, most existing methods may not be robust when the training instances are annotated with noisy or incomplete labels, which is common in practice. In this paper, we propose the Robust Semi-supervised Multi-label Learning by Triple Low-Rank Regularization approach to address this problem. Specifically, a linear self-representative model is first introduced to recover the possibly noisy label matrix by exploiting the label correlations. Our method then learns a low-rank pairwise similarity matrix that captures the global relationships among labeled and unlabeled samples by taking advantage of Low-Rank Representation (LRR). In addition, using this pairwise similarity matrix, we construct a graph Laplacian regularizer to acquire geometric structural information from both labeled and unlabeled samples. Moreover, the proposed method concatenates the prediction models for different labels into a single matrix and introduces the matrix trace norm to capture label correlations and control model complexity. Experimental studies across a wide range of benchmark datasets show that our method achieves highly competitive performance against other state-of-the-art approaches.
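To make the three regularizers more concrete, the NumPy sketch below illustrates how terms of this general kind are typically evaluated: the trace (nuclear) norm of the stacked per-label prediction models, a graph Laplacian penalty built from a pairwise similarity matrix, and the nuclear norm of the similarity matrix itself. This is only an illustrative sketch, not the authors' implementation or formulation; all variable names, matrix shapes, and the random placeholder similarity matrix are assumptions made for the example.

```python
import numpy as np

# Hypothetical shapes: n samples (labeled + unlabeled), d features, c labels.
rng = np.random.default_rng(0)
n, d, c = 100, 20, 5
X = rng.standard_normal((n, d))          # feature matrix
W = rng.standard_normal((d, c))          # stacked per-label prediction models
S = np.abs(rng.standard_normal((n, n)))  # placeholder pairwise similarity (in the paper, learned via LRR)
S = (S + S.T) / 2                        # symmetrize

# (1) Trace (nuclear) norm of W: sum of singular values,
#     encouraging low-rank structure across the per-label predictors.
trace_norm_W = np.linalg.svd(W, compute_uv=False).sum()

# (2) Graph Laplacian regularizer built from the pairwise similarity S:
#     tr(F^T L F) penalizes predictions that differ across similar samples.
L = np.diag(S.sum(axis=1)) - S           # unnormalized graph Laplacian
F = X @ W                                # soft predictions for all samples
laplacian_reg = np.trace(F.T @ L @ F)

# (3) Nuclear norm of S itself, i.e. the low-rank similarity term.
trace_norm_S = np.linalg.svd(S, compute_uv=False).sum()

print(trace_norm_W, laplacian_reg, trace_norm_S)
```

In a full method these terms would be weighted and minimized jointly with a label-fitting loss; the sketch only shows how each regularizer can be computed once the relevant matrices are available.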

Keywords

Multi-label learning · Triple low-rank regularization · Semi-supervised learning · Graph Laplacian regularization

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61872032) and in part by the Fundamental Research Funds for the Central Universities (Nos. 2018YJS038 and 2017JBZ108).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Lijuan Sun¹
  • Songhe Feng¹ (corresponding author)
  • Gengyu Lyu¹
  • Congyan Lang¹

  1. School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
