Abstract
This chapter provides an overview of the dissertation. We first review the history of computational models and of visual information processing, and argue that their convergence is an inevitable trend in the big-data era. After introducing the low-quality characteristics of visual data, we explain why computational methods offer an effective way to cope with these defects in visual information processing. We then review four visual structure learning models, i.e., sparse learning, low-rank learning, graph learning, and information-theoretic learning, from both theoretical and practical perspectives. Centering on these four kinds of structural models for visual information computation, we conclude with the outline and contributions of the dissertation.
Parts of this chapter are reproduced from [1] with permission (number 3383111101772) © Springer.
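Of the four structural models named above, sparse learning is the most compact to illustrate. The sketch below is not from the chapter; it is a minimal, hedged example of the standard lasso-type primitive (soft-thresholding, the proximal operator of the \(\ell_1\) norm) and the iterative shrinkage-thresholding algorithm (ISTA) built on it. All function names and the toy problem are illustrative assumptions.

```python
def soft_threshold(x, lam):
    """Proximal operator of lam*|.|: shrinks x toward zero,
    zeroing entries whose magnitude is below lam (the sparsity prior)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def ista(A, b, lam, step, iters=500):
    """ISTA sketch for min_w 0.5*||A w - b||^2 + lam*||w||_1,
    with A and b given as plain Python lists."""
    m, n = len(A), len(A[0])
    w = [0.0] * n
    for _ in range(iters):
        # Residual r = A w - b
        r = [sum(A[i][j] * w[j] for j in range(n)) - b[i] for i in range(m)]
        # Gradient of the smooth term: g = A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by the sparsity-inducing shrinkage
        w = [soft_threshold(w[j] - step * g[j], step * lam) for j in range(n)]
    return w

# Toy problem: with an identity design matrix, the lasso solution is
# exactly the soft-thresholded observation, so the small entry is zeroed.
w = ista([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.1], lam=0.5, step=1.0)
print(w)  # -> [2.5, 0.0]
```

The key point, mirrored by the other three structural models reviewed later, is that a structural prior (here, sparsity) enters the optimization only through a simple proximal operation applied at each iteration.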
References
Deng Y, Dai Q, Zhang Z (2013) An overview of computational sparse models and their applications in artificial intelligence. In: Artificial intelligence, evolutionary computing and metaheuristics. Springer, Berlin, pp 345–369
Turing A (1937) On computable numbers, with an application to the Entscheidungsproblem. Proc Lond Math Soc 2:230
Chen G, Tang J, Leng S (2008) Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med Phys 35:660
Kong Y, Wang D, Shi L, Hui SCN, Chu WCW (2014) Adaptive distance metric learning for diffusion tensor image segmentation. PLoS ONE 9(3):e92069. Available at http://dx.doi.org/10.1371%2Fjournal.pone.0092069
Belkin M, Niyogi P (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput 15(6):1373–1396
He X, Niyogi P (2004) Locality preserving projections. In: Advances in neural information processing systems. Proceedings of the 2003 conference, vol 16. MIT Press, Cambridge, p 153
Deng Y, Dai Q, Wang R, Zhang Z (2012) Commute time guided transformation for feature extraction. Comput Vis Image Underst 116(4):473–483. Available at http://www.sciencedirect.com/science/article/pii/S1077314211002578
Deng Y, Liu Y, Dai Q, Zhang Z, Wang Y (2012) Noisy depth maps fusion for multiview stereo via matrix completion. IEEE J Sel Top Sign Process 6(5):566–582
Donoho D (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
Candès E (2008) The restricted isometry property and its implications for compressed sensing. C R Math 346(9–10):589–592
Meinshausen N, Bühlmann P (2006) High-dimensional graphs and variable selection with the lasso. Ann Stat 34(3):1436–1462
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J Roy Stat Soc B (Methodological) 58(1):267–288. Available at http://www.jstor.org/stable/2346178
Tipping M (2001) Sparse Bayesian learning and the relevance vector machine. J Mach Learn Res 1:211–244
Fazel M (2002) Matrix rank minimization with applications. PhD thesis, Stanford University
Candes E, Plan Y (2010) Matrix completion with noise. Proc IEEE 98(6):925–936
Deng Y, Dai Q, Liu R, Zhang Z, Hu S (2013) Low-rank structure learning via nonconvex heuristic recovery. IEEE Trans Neural Networks Learn Syst 24(3):383–396
Candes EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 59(3): 1–37
Liu G, Lin Z, Yu Y (2010) Robust subspace segmentation by low-rank representation. In: International conference on machine learning, 2010, pp 663–670
Deng Y, Dai Q, Zhang Z (2011) Graph Laplace for occluded face completion and recognition. IEEE Trans Image Process 99:1–1
Shi J, Malik J (2000) Normalized cuts and image segmentation. IEEE Trans Pattern Anal Mach Intell 22(8):888–905
Tenenbaum J, De Silva V, Langford J (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290(5500):2319–2323
Yan S, Xu D, Zhang B, Zhang H, Yang Q, Lin S (2007) Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans Pattern Anal Mach Intell 29(1):40–51
Deng Y, Li Y, Qian Y, Ji X, Dai Q (2014) Visual words assignment via information-theoretic manifold embedding. IEEE Trans Cybern. doi:10.1109/TCYB.2014.2300192
Yang J-B, Ong C-J (2012) An effective feature selection method via mutual information estimation. IEEE Trans Syst Man Cybern B Cybern 42(6):1550–1559
Deng Y, Zhao Y, Liu Y, Dai Q (2013) Differences help recognition: a probabilistic interpretation. PLoS ONE 8(6):e63385
Peng H, Long F, Ding C (2005) Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans Pattern Anal Mach Intell 27(8):1226–1238
Davis J, Kulis B, Jain P, Sra S, Dhillon I (2007) Information-theoretic metric learning. In: Proceedings of the 24th international conference on machine learning. ACM, pp 209–216
Lazebnik S, Raginsky M (2009) Supervised learning of quantizer codebooks by information loss minimization. IEEE Trans Pattern Anal Mach Intell 31(7):1294–1309
Si S, Tao D, Geng B (2010) Bregman divergence-based regularization for transfer subspace learning. IEEE Trans Knowl Data Eng 22:929–942
Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182
Xing EP, Jordan MI, Russell S, Ng A (2002) Distance metric learning with application to clustering with side-information. In: Advances in neural information processing systems, pp 505–512
Erdogmus D, Hild II KE, Principe JC (2002) Blind source separation using Rényi's \(\alpha \)-marginal entropies. Neurocomputing 49(1):25–38
Torkkola K (2003) Feature extraction by nonparametric mutual information maximization. J Mach Learn Res 3:1415–1438
Copyright information
© 2015 Springer-Verlag Berlin Heidelberg
Cite this chapter
Deng, Y. (2015). Introduction. In: High-Dimensional and Low-Quality Visual Information Processing. Springer Theses. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44526-6_1
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-44525-9
Online ISBN: 978-3-662-44526-6
eBook Packages: Engineering (R0)