Part of the book series: Springer Theses

Abstract

This chapter provides an overview of the whole dissertation. First, we briefly review the history of computational models and of visual information processing, and highlight the growing convergence of the two in the big data era. After introducing the low-quality properties of visual data, we explain why computational methods provide an effective way to cope with these defects in visual information processing. Then, four visual structure learning models, i.e., sparse learning, low-rank learning, graph learning, and information-theoretic learning, are reviewed from both theoretical and practical perspectives. Finally, concentrating on these four structural models for visual information computation, we present the outline and contributions of the dissertation.

Parts of this chapter are reproduced from [1] with permission number 3383111101772 © Springer.
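
As a concrete illustration of the first of these four models, the sketch below recovers a sparse signal with the lasso of Tibshirani [12], the workhorse formulation of compressed sensing [9], solved by plain iterative soft-thresholding (ISTA). This is a minimal illustration, not the dissertation's method: the solver, the problem sizes (60 measurements of a 200-dimensional, 8-sparse signal), the noise level, the weight lam, and the helper names soft_threshold and ista_lasso are all assumptions made for this sketch.

```python
# Minimal lasso sparse-recovery sketch via ISTA (illustrative assumptions only).
import numpy as np

def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_lasso(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||_2^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)            # gradient of the smooth least-squares term
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))          # underdetermined measurement matrix
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)  # 8-sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(60)                     # noisy observations

x_hat = ista_lasso(A, b, lam=0.1)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```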


References

  1. Deng Y, Dai Q, Zhang Z (2013) An overview of computational sparse models and their applications in artificial intelligence. In: Artificial intelligence, evolutionary computing and metaheuristics. Springer, Berlin, pp 345–369

  2. Turing A (1937) On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc s2-42:230–265

  3. Chen G, Tang J, Leng S (2008) Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med Phys 35:660

  4. Kong Y, Wang D, Shi L, Hui SCN, Chu WCW (2014) Adaptive distance metric learning for diffusion tensor image segmentation. PLoS ONE 9(3):e92069. Available at http://dx.doi.org/10.1371/journal.pone.0092069

  5. Belkin M, Niyogi P (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput 15(6):1373–1396

  6. He X, Niyogi P (2004) Locality preserving projections. In: Advances in neural information processing systems. Proceedings of the 2003 conference, vol 16. MIT Press, Cambridge, p 153

  7. Deng Y, Dai Q, Wang R, Zhang Z (2012) Commute time guided transformation for feature extraction. Comput Vis Image Underst 116(4):473–483. Available at http://www.sciencedirect.com/science/article/pii/S1077314211002578

  8. Deng Y, Liu Y, Dai Q, Zhang Z, Wang Y (2012) Noisy depth maps fusion for multiview stereo via matrix completion. IEEE J Sel Top Sign Process 6(5):566–582

  9. Donoho D (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306

  10. Candès E (2008) The restricted isometry property and its implications for compressed sensing. C R Math 346(9–10):589–592

  11. Meinshausen N, Bühlmann P (2006) High-dimensional graphs and variable selection with the lasso. Ann Stat 34(3):1436–1462

  12. Tibshirani R (1996) Regression shrinkage and selection via the lasso. J Roy Stat Soc B (Methodological) 58(1):267–288. Available at http://www.jstor.org/stable/2346178

  13. Tipping M (2001) Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 1:211–244

  14. Fazel M (2002) Matrix rank minimization with applications. PhD thesis, Stanford University

  15. Candès E, Plan Y (2010) Matrix completion with noise. Proc IEEE 98(6):925–936

  16. Deng Y, Dai Q, Liu R, Zhang Z, Hu S (2013) Low-rank structure learning via nonconvex heuristic recovery. IEEE Trans Neural Networks Learn Syst 24(3):383–396

  17. Candès EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 59(3):1–37

  18. Liu G, Lin Z, Yu Y (2010) Robust subspace segmentation by low-rank representation. In: International conference on machine learning, 2010, pp 663–670

  19. Deng Y, Dai Q, Zhang Z (2011) Graph Laplace for occluded face completion and recognition. IEEE Trans Image Process 20(8):2329–2338

  20. Shi J, Malik J (2000) Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 22(8):888–905

  21. Tenenbaum J, De Silva V, Langford J (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319–2323

  22. Yan S, Xu D, Zhang B, Zhang H, Yang Q, Lin S (2007) Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans Pattern Anal Mach Intell 29(1):40–51

  23. Deng Y, Li Y, Qian Y, Ji X, Dai Q (2014) Visual words assignment via information-theoretic manifold embedding. IEEE Trans Cybern. doi:10.1109/TCYB.2014.2300192

  24. Yang J-B, Ong C-J (2012) An effective feature selection method via mutual information estimation. IEEE Trans Syst Man Cybern B Cybern 42(6):1550–1559

  25. Deng Y, Zhao Y, Liu Y, Dai Q (2013) Differences help recognition: a probabilistic interpretation. PLoS ONE 8(6):e63385

  26. Peng H, Long F, Ding C (2005) Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 27(8):1226–1238

  27. Davis J, Kulis B, Jain P, Sra S, Dhillon I (2007) Information-theoretic metric learning. In: Proceedings of the 24th international conference on machine learning. ACM, pp 209–216

  28. Lazebnik S, Raginsky M (2009) Supervised learning of quantizer codebooks by information loss minimization. IEEE Trans Pattern Anal Mach Intell 31(7):1294–1309

  29. Si S, Tao D, Geng B (2010) Bregman divergence-based regularization for transfer subspace learning. IEEE Trans Knowl Data Eng 22:929–942

  30. Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182

  31. Xing EP, Jordan MI, Russell S, Ng A (2002) Distance metric learning with application to clustering with side-information. In: Advances in neural information processing systems, pp 505–512

  32. Erdogmus D, Hild KE II, Principe JC (2002) Blind source separation using Rényi's \(\alpha\)-marginal entropies. Neurocomputing 49(1):25–38

  33. Torkkola K (2003) Feature extraction by non-parametric mutual information maximization. J Mach Learn Res 3:1415–1438

Author information

Correspondence to Yue Deng.


Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Deng, Y. (2015). Introduction. In: High-Dimensional and Low-Quality Visual Information Processing. Springer Theses. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44526-6_1

  • DOI: https://doi.org/10.1007/978-3-662-44526-6_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-662-44525-9

  • Online ISBN: 978-3-662-44526-6

  • eBook Packages: Engineering (R0)
