
Machine Learning, Volume 108, Issue 12, pp 2035–2060

2D compressed learning: support matrix machine with bilinear random projections

  • Di Ma
  • Songcan Chen
Article

Abstract

Support matrix machine (SMM) is an efficient matrix classification method that can leverage the structural information within a matrix to improve classification performance. However, its computational and storage costs remain expensive for high-dimensional data. To address these problems, in this paper we consider a 2D compressed learning paradigm that learns the SMM classifier in a compressed data domain. Specifically, we use Kronecker compressed sensing (KCS) to obtain the compressive measurements and learn the SMM classifier on them. We show that the Kronecker product measurement matrices used by KCS satisfy the restricted isometry property (RIP), a property that ensures the learnability of the compressed data. We further give a lower bound on the number of measurements required for KCS. Although this bound shows that KCS requires more measurements than regular CS to satisfy the same RIP condition, KCS still enjoys lower computational and storage complexities. Then, using the RIP condition, we verify that the SMM classifier learned in the compressed domain performs almost as well as the best linear classifier in the original uncompressed domain. Finally, our experimental results demonstrate the feasibility of 2D compressed learning.
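
The bilinear random projection underlying KCS can be illustrated with a short sketch. The code below is only a minimal illustration, assuming i.i.d. Gaussian measurement matrices and toy dimensions (n1, n2, m1, m2 are placeholders, not the paper's experimental settings): each matrix sample X is compressed to Y = A X Bᵀ, which is equivalent to applying the Kronecker-structured measurement matrix (B ⊗ A) to vec(X).

    import numpy as np

    # Illustrative dimensions (assumed for this sketch, not taken from the paper):
    # an n1 x n2 matrix sample is compressed to an m1 x m2 measurement matrix.
    n1, n2 = 64, 64
    m1, m2 = 16, 16

    rng = np.random.default_rng(0)
    X = rng.standard_normal((n1, n2))            # original matrix sample

    # Two small i.i.d. Gaussian projection matrices.
    A = rng.standard_normal((m1, n1)) / np.sqrt(m1)
    B = rng.standard_normal((m2, n2)) / np.sqrt(m2)

    # Bilinear (2D) random projection: Y = A X B^T.
    Y = A @ X @ B.T

    # Equivalent 1D view: vec(Y) = (B kron A) vec(X); KCS never needs to
    # form the large Kronecker-product matrix explicitly.
    vec = lambda M: M.reshape(-1, order="F")     # column-stacking vec(.)
    assert np.allclose(vec(Y), np.kron(B, A) @ vec(X))

A classifier such as SMM would then be trained on the compressed measurements Y rather than on X, which is where the computational and storage savings come from.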

Keywords

2D compressed learning · Bilinear random projection · Dimension reduction · Support matrix machine · Kronecker compressed learning

Notes

Acknowledgements

We would like to express our appreciation to the editors and reviewers, who have greatly helped us improve the quality of the paper. This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61672281 and the Key Program of NSFC under Grant No. 61732006.

Copyright information

© The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
