Robust and Efficient Subspace Segmentation via Least Squares Regression

  • Can-Yi Lu
  • Hai Min
  • Zhong-Qiu Zhao
  • Lin Zhu
  • De-Shuang Huang
  • Shuicheng Yan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7578)

Abstract

This paper studies the subspace segmentation problem, which aims to segment data drawn from a union of multiple linear subspaces. Recent methods based on sparse representation, low-rank representation, and their extensions have attracted much attention. If the subspaces from which the data are drawn are independent or orthogonal, these methods obtain a block-diagonal affinity matrix, which usually leads to a correct segmentation. The main difference among them lies in their objective functions. We theoretically show that if the objective function satisfies certain conditions, and the data are sufficiently sampled from independent subspaces, the obtained affinity matrix is always block diagonal. Furthermore, the data sampling can be insufficient if the subspaces are orthogonal. Several existing methods are special cases of this result. We then present the Least Squares Regression (LSR) method for subspace segmentation. It takes advantage of data correlation, which is common in real data. LSR encourages a grouping effect that tends to group highly correlated data together. Experimental results on the Hopkins 155 database and the Extended Yale Database B show that our method significantly outperforms state-of-the-art methods. Beyond segmentation accuracy, all experiments demonstrate that LSR is much more efficient.
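One reason LSR is efficient is that the underlying ridge-style regression problem, min_Z ||X − XZ||_F² + λ||Z||_F², has a closed-form solution. The following is a minimal sketch (not the authors' released code) of computing the LSR representation and a symmetric affinity matrix from it; the regularization value `lam` is an illustrative choice, not a value taken from the paper:

```python
import numpy as np

def lsr_affinity(X, lam=0.01):
    """Least Squares Regression (LSR) representation, sketched.

    Solves min_Z ||X - X Z||_F^2 + lam * ||Z||_F^2 in closed form:
        Z = (X^T X + lam * I)^{-1} X^T X
    X is a d x n data matrix whose columns are samples; lam > 0 is a
    regularization weight (0.01 here is only an illustrative default).
    """
    n = X.shape[1]
    G = X.T @ X                                  # Gram matrix of the samples
    Z = np.linalg.solve(G + lam * np.eye(n), G)  # closed-form representation
    W = np.abs(Z) + np.abs(Z.T)                  # symmetric affinity matrix
    return Z, W
```

The affinity matrix `W` would then be passed to a spectral clustering algorithm such as Normalized Cuts to produce the final segmentation.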

Keywords

Sparse Representation, Spectral Clustering, Least Squares Regression, Segmentation Accuracy, Motion Segmentation

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Can-Yi Lu 1,2
  • Hai Min 1
  • Zhong-Qiu Zhao 3
  • Lin Zhu 1
  • De-Shuang Huang 4
  • Shuicheng Yan 2
  1. Department of Automation, University of Science and Technology of China, Hefei, China
  2. Department of Electrical and Computer Engineering, National University of Singapore, Singapore
  3. School of Computer and Information, Hefei University of Technology, Hefei, China
  4. School of Electronics and Information Engineering, Tongji University, Shanghai, China