
Final Words

  • René Vidal
  • Yi Ma
  • S. Shankar Sastry
Chapter
Part of the Interdisciplinary Applied Mathematics book series (IAM, volume 40)

Abstract

As we have stated from the very beginning of this book, the ultimate goal of our quest is to be able to effectively and efficiently extract low-dimensional structures in high-dimensional data. Our intention is for this book to serve as an introductory textbook for readers who are interested in modern data science and engineering, including both its mathematical and computational foundations as well as its applications. By using what is arguably the most basic and useful class of structures, i.e., linear subspaces, this book introduces some of the most fundamental geometrical, statistical, and optimization principles for data analysis. While these mathematical models and principles are classical and timeless, the problems and results presented in this book are rather modern and timely. Compared with classical methods for learning low-dimensional subspaces, such as PCA (Jolliffe 1986), the methods discussed in this book significantly enrich our data analysis arsenal with modern methods that are robust to imperfect data (due to uncontrolled data acquisition processes) and can handle mixed heterogeneous structures in the data.
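To ground the classical baseline mentioned above, the following minimal sketch shows PCA recovering a single linear subspace via the SVD, in Python with NumPy. This is our own illustration, not code from the book; the synthetic data, dimensions, and variable names are assumptions chosen for the example. The methods developed in this book generalize exactly this step: robust variants tolerate grossly corrupted entries, and subspace clustering handles unions of several such subspaces.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data (illustrative assumption): 500 points drawn from a
    # 3-dimensional linear subspace of R^50, stored as a 50 x 500 matrix.
    U_true = np.linalg.qr(rng.standard_normal((50, 3)))[0]  # orthonormal basis
    X = U_true @ rng.standard_normal((3, 500))

    # Classical PCA: center the data, then take the top singular vectors
    # of the centered data matrix as the estimated subspace basis.
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    basis = U[:, :3]

    # For clean data the estimate spans the true subspace: projecting the
    # true basis onto the estimate leaves a near-zero residual.
    residual = np.linalg.norm(U_true - basis @ (basis.T @ U_true))
    print(f"subspace residual: {residual:.2e}")

On clean data this residual is at the level of machine precision; with outliers or several mixed subspaces, the plain SVD step fails, which is precisely what motivates the robust and clustering extensions surveyed in this book.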

Keywords

Sparse Representation · Subspace Cluster · Cloud Computing Platform · Robust Principal Component Analysis · Nonlinear Manifold

References

  1. Arora, S., Bhaskara, A., Ge, R., & Ma, T. (2014). Provable bounds for learning some deep representations. In International Conference on Machine Learning.
  2. Bach, F. (2013). Convex relaxations of structured matrix factorizations. arXiv:1309.3117v1.
  3. Bach, F., Mairal, J., & Ponce, J. (2008). Convex sparse matrix factorizations. http://arxiv.org/abs/0812.1869
  4. Baraniuk, R. (2007). Compressive sensing. IEEE Signal Processing Magazine, 24(4), 118–121.
  5. Candès, E. (2006). Compressive sampling. In Proceedings of the International Congress of Mathematicians.
  6. Candès, E., & Recht, B. (2011). Simple bounds for low-complexity model reconstruction. Mathematical Programming, Series A, 141(1–2), 577–589.
  7. Cetingül, H. E., Wright, M., Thompson, P., & Vidal, R. (2014). Segmentation of high angular resolution diffusion MRI using sparse Riemannian manifold clustering. IEEE Transactions on Medical Imaging, 33(2), 301–317.
  8. Deng, W., Lai, M.-J., Peng, Z., & Yin, W. (2013). Parallel multi-block ADMM with O(1/k) convergence. UCLA CAM Report.
  9. Elhamifar, E., Sapiro, G., & Vidal, R. (2012a). Finding exemplars from pairwise dissimilarities via simultaneous sparse recovery. In Neural Information Processing Systems.
  10. Elhamifar, E., Sapiro, G., & Vidal, R. (2012b). See all by looking at a few: Sparse modeling for finding representative objects. In IEEE Conference on Computer Vision and Pattern Recognition.
  11. Elhamifar, E., & Vidal, R. (2011). Sparse manifold clustering and embedding. In Neural Information Processing Systems.
  12. Feng, J., Xu, H., Mannor, S., & Yang, S. (2013). Online PCA for contaminated data. In Neural Information Processing Systems.
  13. Goh, A., & Vidal, R. (2007). Segmenting motions of different types by unsupervised manifold clustering. In IEEE Conference on Computer Vision and Pattern Recognition.
  14. Goh, A., & Vidal, R. (2008). Unsupervised Riemannian clustering of probability density functions. In European Conference on Machine Learning.
  15. Haeffele, B., & Vidal, R. (2015). Global optimality in tensor factorization, deep learning, and beyond. Preprint, http://arxiv.org/abs/1506.07540
  16. Haeffele, B., Young, E., & Vidal, R. (2014). Structured low-rank matrix factorization: Optimality, algorithm, and applications to image processing. In International Conference on Machine Learning.
  17. Haro, G., Randall, G., & Sapiro, G. (2006). Stratification learning: Detecting mixed density and dimensionality in high dimensional point clouds. In Neural Information Processing Systems.
  18. Haro, G., Randall, G., & Sapiro, G. (2008). Translated Poisson mixture model for stratification learning. International Journal of Computer Vision, 80(3), 358–374.
  19. He, H., & Garcia, E. (2009). Learning from imbalanced data. IEEE Transactions on Knowledge and Data Engineering, 21(9), 1263–1284.
  20. He, H., & Ma, Y. (2013). Imbalanced Learning: Foundations, Algorithms, and Applications. New York: Wiley.
  21. Hinton, G., Osindero, S., & Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527–1554.
  22. Jarrett, K., Kavukcuoglu, K., Ranzato, M., & LeCun, Y. (2009). What is the best multi-stage architecture for object recognition? In International Conference on Computer Vision.
  23. Jhuo, I.-H., Liu, D., Lee, D., & Chang, S.-F. (2012). Robust visual domain adaptation with low-rank reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2168–2175).
  24. Jolliffe, I. (1986). Principal Component Analysis. New York: Springer.
  25. Negahban, S., Ravikumar, P., Wainwright, M., & Yu, B. (2010). A unified framework for analyzing M-estimators with decomposable regularizers. Available at http://arxiv.org/abs/1010.2731v1
  26. Patel, V. M., Gopalan, R., Li, R., & Chellappa, R. (2014). Visual domain adaptation: A survey of recent advances. IEEE Signal Processing Magazine, 32(3), 53–69.
  27. Peng, Z., Yan, M., & Yin, W. (2013). Parallel and distributed sparse optimization. In Asilomar Conference on Signals, Systems and Computers.
  28. Polito, M., & Perona, P. (2002). Grouping and dimensionality reduction by locally linear embedding. In Neural Information Processing Systems.
  29. Qiu, Q., Patel, V. M., Turaga, P., & Chellappa, R. (2012). Domain adaptive dictionary learning. In European Conference on Computer Vision (Vol. 7575, pp. 631–645).
  30. Shekhar, S., Patel, V. M., Nguyen, H. V., & Chellappa, R. (2013). Generalized domain-adaptive dictionaries. In IEEE Conference on Computer Vision and Pattern Recognition.
  31. Souvenir, R., & Pless, R. (2005). Manifold clustering. In International Conference on Computer Vision (Vol. I, pp. 648–653).
  32. Spielman, D., Wang, H., & Wright, J. (2012). Exact recovery of sparsely-used dictionaries. In Conference on Learning Theory (COLT).
  33. Sun, J., Qu, Q., & Wright, J. (2015). Complete dictionary recovery over the sphere. Preprint, http://arxiv.org/abs/1504.06785
  34. Udell, M., Horn, C., Zadeh, R., & Boyd, S. (2015). Generalized low rank models. Working manuscript.
  35. Vidal, R. (2008). Recursive identification of switched ARX systems. Automatica, 44(9), 2274–2287.
  36. Zhang, K., Zhang, L., & Yang, M. (2014). Fast compressive tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(10).

Copyright information

© Springer-Verlag New York 2016

Authors and Affiliations

  • René Vidal: Center for Imaging Science, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
  • Yi Ma: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
  • S. Shankar Sastry: Department of Electrical Engineering and Computer Science, University of California, Berkeley, Berkeley, USA
