Robust Semi-supervised Multi-label Learning by Triple Low-Rank Regularization
Multi-Label Learning (MLL) addresses problems in which a single instance is associated with multiple labels simultaneously. Previous methods have shown promising performance by effectively exploiting the semantic correlations among different labels. However, most existing methods are not robust when training instances carry noisy or incomplete labels, which are common in practice. In this paper, we propose a Robust Semi-Supervised Multi-Label Learning approach based on triple low-rank regularization to address this problem. Specifically, a linear self-representative model is first introduced to recover the possibly noisy label matrix by exploiting label correlations. Our method then builds a low-rank pairwise similarity matrix that captures the global relationships among labeled and unlabeled samples via Low-Rank Representation (LRR). Using this similarity matrix, we construct a graph Laplacian regularizer that extracts geometric structural information from both labeled and unlabeled samples. Finally, the proposed method concatenates the prediction models for different labels into a single matrix and introduces the matrix trace norm to capture label correlations and control model complexity. Experimental studies on a wide range of benchmark datasets show that our method achieves highly competitive performance against state-of-the-art approaches.
Keywords: Multi-label learning · Triple low-rank regularization · Semi-supervised learning · Graph Laplacian regularization
This work was supported in part by the National Natural Science Foundation of China (No. 61872032) and in part by the Fundamental Research Funds for the Central Universities (2018YJS038, 2017JBZ108).
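Two of the regularizers named in the abstract, the graph Laplacian term tr(FᵀLF) built from a pairwise similarity matrix and the matrix trace (nuclear) norm of the stacked prediction models, are standard constructions. The following is a minimal numpy sketch of both terms for intuition only; the variable names (`S`, `F`, `W`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def laplacian_regularizer(S, F):
    """Graph Laplacian smoothness term tr(F^T L F), where
    L = D - S and D is the degree matrix of the (symmetric)
    pairwise similarity matrix S. Small values mean the label
    scores F vary smoothly over similar samples."""
    L = np.diag(S.sum(axis=1)) - S
    return float(np.trace(F.T @ L @ F))

def trace_norm(W):
    """Trace (nuclear) norm: the sum of singular values of W.
    Penalizing it encourages a low-rank model matrix, which
    couples the per-label predictors."""
    return float(np.linalg.svd(W, compute_uv=False).sum())

# Toy usage: 5 samples, 3 labels, 4 features.
rng = np.random.default_rng(0)
S = rng.random((5, 5))
S = (S + S.T) / 2          # symmetrize the similarity matrix
F = rng.random((5, 3))     # predicted label scores per sample
W = rng.random((4, 3))     # stacked per-label prediction models
print(laplacian_regularizer(S, F), trace_norm(W))
```

For a symmetric nonnegative `S`, the Laplacian term is always nonnegative, which is what makes it usable as a penalty over both labeled and unlabeled samples.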