
Part of the book series: SpringerBriefs in Computer Science (BRIEFSCOMPUTER)


Abstract

Robust data classification and representation are fundamental tasks with a long history in computer vision. Algorithmic robustness, derived from the statistical notion of a breakdown point [49, 106], is the ability of an algorithm to tolerate a large fraction of outliers. A robust method should therefore reject the outliers in an image and perform classification only on the uncorrupted pixels. Over the past decades, many subspace learning [37, 91] and sparse signal representation [101, 154] methods have been developed to make image-based object recognition more robust. Despite significant progress, robust classification remains challenging because outliers in an image are unpredictable: they may occupy any part of the image and take arbitrarily large values in magnitude [155].
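As an illustrative sketch (not part of the original chapter), the breakdown-point idea can be seen by contrasting the sample mean, which a single outlier can corrupt arbitrarily, with the sample median, which tolerates up to half of the data being outliers:

```python
from statistics import mean, median

# Clean one-dimensional measurements, all near 1.0.
clean = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.05, 0.95]

# Inject two gross outliers of arbitrarily large magnitude.
corrupted = clean + [100.0, 250.0]

# The mean has a breakdown point of 0: even one outlier can drag it
# arbitrarily far. The median has a breakdown point of 0.5 and stays
# near the clean data until outliers are the majority.
print(mean(corrupted))    # 35.8, far from the clean value
print(median(corrupted))  # 1.025, still near the clean data
```

A robust classifier aims for the same behavior at the pixel level: its decision should be driven by the uncorrupted pixels, no matter how large the corrupted ones become.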


References

  1. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics 57(11), 1413–1457 (2004)

  2. De la Torre, F., Black, M.: A framework for robust subspace learning. International Journal of Computer Vision 54(1–3), 117–142 (2003)

  3. Ding, C., Zhou, D., He, X., Zha, H.: R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In: Proceedings of International Conference on Machine Learning (2006)

  4. Elhamifar, E., Vidal, R.: Sparse subspace clustering: algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(11), 2765–2781 (2013)

  5. Fidler, S., Skocaj, D., Leonardis, A.: Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(3), 337–350 (2006)

  6. He, R., Hu, B.G., Yuan, X., Zheng, W.S.: Principal component analysis based on nonparametric maximum entropy. Neurocomputing 73, 1840–1852 (2010)

  7. He, R., Sun, Z., Tan, T., Zheng, W.S.: Recovery of corrupted low-rank matrices via half-quadratic based nonconvex minimization. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2889–2896 (2011)

  8. He, R., Tan, T., Wang, L.: Recovery of corrupted low-rank matrix by implicit regularizers. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(4), 770–783 (2014)

  9. He, R., Tan, T., Wang, L., Zheng, W.S.: ℓ2,1 regularized correntropy for robust feature selection. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2504–2511 (2012)

  10. He, R., Zheng, W.S., Hu, B.G.: Maximum correntropy criterion for robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(8), 1561–1576 (2011)

  11. He, R., Zheng, W.S., Hu, B.G., Kong, X.W.: A regularized correntropy framework for robust pattern recognition. Neural Computation 23(8), 2074–2100 (2011)

  12. Jenssen, R., Eltoft, T., Girolami, M., Erdogmus, D.: Kernel maximum entropy data transformation and an enhanced spectral clustering algorithm. In: Advances in Neural Information Processing Systems (2006)

  13. Ji, Y., Lin, T., Zha, H.: Mahalanobis distance based non-negative sparse representation for face recognition. In: Proceedings of International Conference on Machine Learning and Applications, pp. 41–46 (2009)

  14. Li, M., Chen, X., Li, X., Ma, B., Vitányi, P.M.B.: The similarity metric. IEEE Transactions on Information Theory 50, 3250–3264 (2004)

  15. Liu, R., Li, S.Z., Yuan, X., He, R.: Online determination of track loss using template inverse matching. In: International Workshop on Visual Surveillance (2008)

  16. Luenberger, D.: Optimization by Vector Space Methods. Wiley (1969)

  17. Mairal, J., Sapiro, G., Elad, M.: Learning multiscale sparse representations for image and video restoration. SIAM Multiscale Modeling & Simulation 7(1), 214–241 (2008)

  18. Moulin, P., O'Sullivan, J.A.: Information-theoretic analysis of information hiding. IEEE Transactions on Information Theory 49(3), 563–593 (2003)

  19. Nenadic, Z.: Information discriminant analysis: feature extraction with an information-theoretic objective. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(8), 1394–1407 (2007)

  20. Nikolova, M., Ng, M.K.: Analysis of half-quadratic minimization methods for signal and image recovery. SIAM Journal on Scientific Computing 27(3), 937–966 (2005)

  21. Nowak, R., Figueiredo, M.: Fast wavelet-based image deconvolution using the EM algorithm. In: Proceedings of Asilomar Conference on Signals, Systems, and Computers, vol. 1, pp. 371–375 (2001)

  22. Parzen, E.: On estimation of a probability density function and mode. The Annals of Mathematical Statistics 33, 1065–1076 (1962)

  23. Pokharel, P.P., Liu, W., Principe, J.C.: A low complexity robust detector in impulsive noise. Signal Processing 89(10), 1902–1909 (2009)

  24. Principe, J., Xu, D., Zhao, Q., Fisher, J.: Learning from examples with information-theoretic criteria. Journal of VLSI Signal Processing 26, 61–77 (2000)

  25. Viola, P., Schraudolph, N., Sejnowski, T.: Empirical entropy manipulation for real-world problems. In: Proceedings of Neural Information Processing Systems, pp. 851–857 (1995)

  26. Rao, S., Liu, W., Principe, J.C., de Medeiros Martins, A.: Information theoretic mean shift algorithm. In: IEEE Workshop on Machine Learning for Signal Processing (2006)

  27. Rényi, A.: On measures of entropy and information. Selected Papers of Alfréd Rényi 2, 565–580 (1976)

  28. Rockafellar, R.T.: Convex Analysis. Princeton University Press (1970)

  29. Sharma, A., Paliwal, K.: Fast principal component analysis using fixed-point algorithm. Pattern Recognition Letters 28, 1151–1155 (2007)

  30. Shi, Q., Eriksson, A., van den Hengel, A., Shen, C.: Is face recognition really a compressive sensing problem? In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 553–560 (2011)

  31. Takhar, D., Laska, J., Wakin, M., Duarte, M., Baron, D., Sarvotham, S., Kelly, K., Baraniuk, R.: A new compressive imaging camera architecture using optical-domain compression. In: Proceedings of Computational Imaging IV at SPIE Electronic Imaging, pp. 43–52 (2006)

  32. Vinh, N.X., Epps, J., Bailey, J.: Information theoretic measures for clusterings comparison: variants, properties, normalization and correction for chance. Journal of Machine Learning Research 11, 2837–2854 (2010)

  33. Weiszfeld, E.: Sur le point pour lequel la somme des distances de n points donnés est minimum. Tôhoku Mathematical Journal 43, 355–386 (1937)

  34. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning with application to clustering with side-information. In: Proceedings of Advances in Neural Information Processing Systems, vol. 15, pp. 505–512 (2002)

  35. Xu, D.: Energy, entropy and information potential for neural computation. Ph.D. thesis, University of Florida (1999)

  36. Yang, A.Y., Sastry, S.S., Ganesh, A., Ma, Y.: Fast ℓ1-minimization algorithms and an application in robust face recognition: a review. In: Proceedings of International Conference on Image Processing (2010)

  37. Yuan, X.T., Li, S.: Half quadratic analysis for mean shift: with extension to a sequential data mode-seeking method. In: IEEE International Conference on Computer Vision (2007)

  38. Zhang, T.: Multi-stage convex relaxation for learning with sparse regularization. In: Proceedings of Neural Information Processing Systems, pp. 16–21 (2008)

  39. Zhang, T.H., Tao, D.C., Li, X.L., Yang, J.: Patch alignment for dimensionality reduction. IEEE Transactions on Knowledge and Data Engineering 21(9), 1299–1313 (2009)

  40. Zou, H.: The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418–1429 (2006)

  41. Zhang, Y., Sun, Z., He, R., Tan, T.: Robust subspace clustering via half-quadratic minimization. In: IEEE International Conference on Computer Vision (2013)



Copyright information

© 2014 The Author(s)

About this chapter

Cite this chapter

He, R., Hu, B., Yuan, X., Wang, L. (2014). Introduction. In: Robust Recognition via Information Theoretic Learning. SpringerBriefs in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-07416-0_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-07415-3

  • Online ISBN: 978-3-319-07416-0

  • eBook Packages: Computer Science, Computer Science (R0)
