
Scalable Deep k-Subspace Clustering

  • Conference paper

Part of the book series: Lecture Notes in Computer Science, vol. 11365

Abstract

Subspace clustering algorithms are notorious for their scalability issues because building and processing large affinity matrices are demanding. In this paper, we introduce a method that simultaneously learns an embedding space and subspaces within it so as to minimize a notion of reconstruction error, thus addressing the problem of subspace clustering in an end-to-end learning paradigm. To achieve this, we propose a scheme to update the subspaces within a deep neural network, which in turn frees us from the need to build an affinity matrix to perform clustering. Unlike previous attempts, our method easily scales up to large datasets, making it unique in the context of unsupervised learning with deep architectures. Our experiments show that our method significantly improves clustering accuracy while enjoying a smaller memory footprint.
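The abstract describes lifting k-subspace clustering into a learned embedding, avoiding any affinity matrix. As background, here is a minimal NumPy sketch of the classical (shallow) k-subspaces iteration the paper builds on: alternate between assigning each point to the subspace with the smallest projection residual and refitting each subspace by PCA on its assigned points. The function name, defaults, and initialization are illustrative, not the authors' code.

```python
import numpy as np

def k_subspaces(X, k, dim, n_iter=50, seed=0):
    """Cluster rows of X into k linear subspaces of dimension `dim` by
    alternating between (i) assigning each point to the subspace with the
    smallest projection residual and (ii) refitting each subspace by PCA
    on its assigned points."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Start from k random orthonormal bases, each of shape (d, dim).
    bases = [np.linalg.qr(rng.standard_normal((d, dim)))[0] for _ in range(k)]
    labels = np.full(n, -1)
    for _ in range(n_iter):
        # Reconstruction residual of every point w.r.t. every subspace.
        residuals = np.stack(
            [np.linalg.norm(X - (X @ U) @ U.T, axis=1) for U in bases], axis=1
        )  # shape (n, k)
        new_labels = residuals.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break  # assignments stable: converged
        labels = new_labels
        # Refit: top `dim` right singular vectors of each cluster's points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts) >= dim:
                _, _, Vt = np.linalg.svd(pts, full_matrices=False)
                bases[j] = Vt[:dim].T
    return labels, bases
```

In the paper's setting, X would instead be the output of an encoder and the subspace update would run inside the training loop, so no n-by-n affinity matrix is ever formed.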


Notes

  1. Among all the datasets that have been tested, COIL100, with 7,200 images, seems to be the largest one.

  2. We assume \(p=p_i,\forall i\) in the remainder.

  3. In our experiments, the number of parameters in SAE is about 2,600 times that of CAE.


Acknowledgement

This research was supported by the Australian Research Council (ARC) Discovery Projects funding scheme (project DP150104645), by the ARC through Laureate Fellowship FL130100102 to IDR, and by the ARC Centre of Excellence for Robotic Vision (project number CE140100016).

Author information


Corresponding author

Correspondence to Tong Zhang.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, T., Ji, P., Harandi, M., Hartley, R., Reid, I. (2019). Scalable Deep k-Subspace Clustering. In: Jawahar, C., Li, H., Mori, G., Schindler, K. (eds.) Computer Vision – ACCV 2018. Lecture Notes in Computer Science, vol. 11365. Springer, Cham. https://doi.org/10.1007/978-3-030-20873-8_30


  • DOI: https://doi.org/10.1007/978-3-030-20873-8_30


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-20872-1

  • Online ISBN: 978-3-030-20873-8

  • eBook Packages: Computer Science (R0)
