Robust Discriminative Principal Component Analysis
Least squares regression (LSR) and principal component analysis (PCA) are two representative dimensionality reduction algorithms in the field of machine learning. In this paper, we propose a novel method that jointly learns projections from the subspaces derived from a modified LSR and PCA. To implement simultaneous feature learning, we design a novel joint regression learning model by imposing two orthogonality constraints. The learned projections therefore preserve both the minimum reconstruction error and the discriminative information in the low-dimensional subspaces. Moreover, since traditional LSR and PCA are sensitive to outliers, we adopt the robust L2,1-norm as the metric of the loss function to improve the model's robustness. A simple iterative algorithm is proposed to solve the resulting framework. Experiments on face databases show the promising performance of our method.
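The robustness claim rests on the L2,1-norm, which sums the L2 norms of a matrix's rows, so an outlier sample penalizes the loss linearly rather than quadratically as under the Frobenius norm. The following minimal sketch (NumPy, with a toy residual matrix of our own choosing, not data from the paper) illustrates the computation:

```python
import numpy as np

def l21_norm(E):
    """L2,1-norm: sum of the L2 norms of the rows of E.

    Each row (one sample's residual) contributes its L2 norm,
    so a single outlier row adds to the loss linearly rather
    than quadratically, which dampens its influence compared
    with the squared Frobenius norm.
    """
    return np.sum(np.sqrt(np.sum(E ** 2, axis=1)))

# Toy residual matrix: the third row is a large outlier.
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [10.0, 0.0]])
print(l21_norm(E))            # row norms 1 + 1 + 10 = 12.0
print(np.sum(E ** 2))         # squared Frobenius: 102.0, dominated by the outlier
```

Under the squared Frobenius norm the outlier row accounts for 100 of the 102 total, while under the L2,1-norm it accounts for only 10 of 12, which is the intuition behind using it as the loss metric.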
Keywords: Regression framework · Subspace learning · Robustness
This work was supported in part by the Natural Science Foundation of China (Grant 61573248, Grant 61773328, and Grant 61703283), the Research Grant of The Hong Kong Polytechnic University (Project Code: G-UA2B), the China Postdoctoral Science Foundation (Project 2016M590812 and Project 2017T100645), the Guangdong Natural Science Foundation (Project 2017A030313367 and Project 2017A030310067), the Guangdong Medical Scientific and Technological Research Funding under Grant A2017251, and the Shenzhen Municipal Science and Technology Innovation Council (No. JCYJ20170302153434048 and No. JCYJ20160429182058044).