Sparsity Preserving Canonical Correlation Analysis

  • Chen Zu
  • Daoqiang Zhang
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 321)


Canonical correlation analysis (CCA) is a well-known tool for analyzing the underlying dependency between observed samples in multiple views of data. Recently, a locality-preserving variant of CCA, called LPCCA, was developed to incorporate neighborhood information into CCA. However, both CCA and LPCCA are unsupervised methods that do not take class label information into account. In this paper, we propose an alternative formulation that integrates both neighborhood information and discriminative information into CCA, yielding a new method called Sparsity Preserving Canonical Correlation Analysis (SPCCA). Besides the correlation between the two views of the same sample, SPCCA also exploits the cross correlations between views of different within-class samples, which are determined automatically by sparse representation, to achieve good performance. Experimental results on a series of data sets validate the effectiveness of the proposed method.
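To make the baseline concrete, the classical CCA that SPCCA extends can be sketched as below. This is a minimal NumPy implementation of standard CCA via a whitened cross-covariance SVD, not the authors' SPCCA (which additionally weights within-class cross correlations by sparse-representation coefficients); the function name `cca` and the small ridge term `reg` are illustrative assumptions, not from the paper.

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Classical CCA via SVD of the whitened cross-covariance matrix.

    X: (n, p) and Y: (n, q) hold n paired samples (one per row).
    Returns projections Wx (p, d), Wy (q, d) and the d = min(p, q)
    canonical correlations, sorted in decreasing order.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Within-view covariances (ridge-regularized) and cross-covariance
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    # Singular values of the whitened cross-covariance are the
    # canonical correlations; singular vectors give the projections.
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    d = min(X.shape[1], Y.shape[1])
    Wx = Kx @ U[:, :d]
    Wy = Ky @ Vt.T[:, :d]
    return Wx, Wy, s[:d]

# Two synthetic views sharing one latent signal z
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
X = np.hstack([z, rng.normal(size=(200, 2))])
Y = np.hstack([-z, rng.normal(size=(200, 3))])
Wx, Wy, corrs = cca(X, Y)
print(corrs)  # first canonical correlation is close to 1
```

SPCCA replaces the single per-sample correlation objective above with a sum that also includes cross terms between different samples of the same class, weighted by coefficients obtained from an l1-regularized sparse reconstruction of each sample from its classmates.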


Keywords: Canonical correlation analysis (CCA) · sparse representation · locality preserving · feature extraction · multi-view dimensionality reduction





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Chen Zu (1)
  • Daoqiang Zhang (1)

  1. Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
