
Introduction

Chapter
Part of the SpringerBriefs in Computer Science book series (BRIEFSCOMPUTER)

Abstract

Robust data classification and representation are fundamental tasks with a long history in computer vision. Algorithmic robustness, a notion derived from the statistical definition of a breakdown point [49, 106], is the ability of an algorithm to tolerate a large number of outliers. A robust method should therefore reject the outliers in an image and perform classification on the uncorrupted pixels only. Over the past decades, many methods for subspace learning [37, 91] and sparse signal representation [101, 154] have been developed to make image-based object recognition more robust. Despite significant progress, robust classification remains challenging because outliers are unpredictable by nature: they may occupy any part of an image and take arbitrarily large values in magnitude [155].
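To make the breakdown-point notion concrete, the short Python sketch below (an illustration added here, not code from the chapter) contrasts two location estimators on contaminated data: the sample mean, whose breakdown point is 0, is ruined by a single gross outlier, whereas the sample median, whose breakdown point is 1/2, stays near the true value until about half of the samples are corrupted.

    import numpy as np

    rng = np.random.default_rng(0)

    # 100 clean one-dimensional samples around a true location of 5.0.
    clean = rng.normal(loc=5.0, scale=0.1, size=100)

    for frac in (0.0, 0.2, 0.4):
        data = clean.copy()
        k = int(frac * data.size)
        # Corrupt a fraction of the samples with outliers of
        # arbitrarily large magnitude, as described in the abstract.
        data[:k] = 1e6
        print(f"{frac:.0%} outliers: mean = {np.mean(data):12.1f}, "
              f"median = {np.median(data):.3f}")

With 20% or 40% contamination the mean is dragged toward 10^6 while the median remains near 5.0; a robust classifier is expected to behave like the median here, in that its output is determined by the uncorrupted pixels alone.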

Keywords

Sparse Representation · Nonnegative Matrix Factorization · Breakdown Point · Subspace Learning · Robust Classification

References

  32. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Communications on Pure and Applied Mathematics 57, 1413–1457 (2004)
  36. De la Torre, F., Black, M.: A framework for robust subspace learning. International Journal of Computer Vision 54(1–3), 117–142 (2003)
  37. Ding, C., Zhou, D., He, X., Zha, H.: R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization. In: Proceedings of International Conference on Machine Learning (2006)
  45. Elhamifar, E., Vidal, R.: Sparse subspace clustering: algorithm, theory, and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(11), 2765–2781 (2013)
  48. Fidler, S., Skocaj, D., Leonardis, A.: Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(3), 337–350 (2006)
  49. Fidler, S., Skocaj, D., Leonardis, A.: Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(3), 337–350 (2006)
  59. He, R., Hu, B.G., Yuan, X., Zheng, W.S.: Principal component analysis based on nonparametric maximum entropy. Neurocomputing 73, 1840–1852 (2010)
  61. He, R., Sun, Z., Tan, T., Zheng, W.S.: Recovery of corrupted low-rank matrices via half-quadratic based nonconvex minimization. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2889–2896 (2011)
  62. He, R., Tan, T., Wang, L.: Recovery of corrupted low-rank matrix by implicit regularizers. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(4), 770–783 (2014)
  63. He, R., Tan, T., Wang, L., Zheng, W.S.: ℓ2,1 regularized correntropy for robust feature selection. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2504–2511 (2012)
  64. He, R., Zheng, W.S., Hu, B.G.: Maximum correntropy criterion for robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(8), 1561–1576 (2011)
  66. He, R., Zheng, W.S., Hu, B.G., Kong, X.W.: A regularized correntropy framework for robust pattern recognition. Neural Computation 23(8), 2074–2100 (2011)
  81. Jenssen, R., Eltoft, T., Girolami, M., Erdogmus, D.: Kernel maximum entropy data transformation and an enhanced spectral clustering algorithm. In: Proceedings of Advances in Neural Information Processing Systems (2006)
  83. Ji, Y., Lin, T., Zha, H.: Mahalanobis distance based non-negative sparse representation for face recognition. In: Proceedings of International Conference on Machine Learning and Applications, pp. 41–46 (2009)
  91. Li, M., Chen, X., Li, X., Ma, B., Vitányi, P.M.B.: The similarity metric. IEEE Transactions on Information Theory 50, 3250–3264 (2004)
  97. Liu, R., Li, S.Z., Yuan, X., He, R.: Online determination of track loss using template inverse matching. In: International Workshop on Visual Surveillance (2008)
  99. Luenberger, D.: Optimization by Vector Space Methods. Wiley (1969)
  101. Mairal, J., Sapiro, G., Elad, M.: Learning multiscale sparse representations for image and video restoration. SIAM Multiscale Modeling & Simulation 7(1), 214–241 (2008)
  106. Moulin, P., O'Sullivan, J.A.: Information-theoretic analysis of information hiding. IEEE Transactions on Information Theory 49(3), 563–593 (2003)
  108. Nenadic, Z.: Information discriminant analysis: feature extraction with an information-theoretic objective. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(8), 1394–1407 (2007)
  110. Nikolova, M., Ng, M.K.: Analysis of half-quadratic minimization methods for signal and image recovery. SIAM Journal on Scientific Computing 27(3), 937–966 (2005)
  114. Nowak, R., Figueiredo, M.: Fast wavelet-based image deconvolution using the EM algorithm. In: Proceedings of Asilomar Conference on Signals, Systems, and Computers, vol. 1, pp. 371–375 (2001)
  115. Parzen, E.: On estimation of a probability density function and mode. The Annals of Mathematical Statistics 33, 1065–1076 (1962)
  120. Pokharel, P.P., Liu, W., Principe, J.C.: A low complexity robust detector in impulsive noise. Signal Processing 89(10), 1902–1909 (2009)
  122. Principe, J., Xu, D., Zhao, Q., Fisher, J.: Learning from examples with information-theoretic criteria. Journal of VLSI Signal Processing 26, 61–77 (2000)
  125. Viola, P., Schraudolph, N., Sejnowski, T.: Empirical entropy manipulation for real-world problems. In: Proceedings of Neural Information Processing Systems, pp. 851–857 (1995)
  126. Rao, S., Liu, W., Principe, J.C., de Medeiros Martins, A.: Information theoretic mean shift algorithm. In: Machine Learning for Signal Processing (2006)
  127. Rényi, A.: On measures of entropy and information. Selected Papers of Alfréd Rényi 2, 565–580 (1976)
  128. Rockafellar, R.T.: Convex Analysis. Princeton University Press (1970)
  135. Sharma, A., Paliwal, K.: Fast principal component analysis using fixed-point algorithm. Pattern Recognition Letters 28, 1151–1155 (2007)
  136. Shi, Q., Eriksson, A., van den Hengel, A., Shen, C.: Is face recognition really a compressive sensing problem? In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 553–560 (2011)
  140. Takhar, D., Laska, J., Wakin, M., Duarte, M., Baron, D., Sarvotham, S., Kelly, K., Baraniuk, R.: A new compressive imaging camera architecture using optical-domain compression. In: Proceedings of Computational Imaging IV at SPIE Electronic Imaging, pp. 43–52 (2006)
  145. Vinh, N.X., Epps, J., Bailey, J.: Information theoretic measures for clusterings comparison: variants, properties, normalization and correction for chance. Journal of Machine Learning Research 11, 2837–2854 (2010)
  148. Weiszfeld, E.: Sur le point pour lequel la somme des distances de n points donnés est minimum. Tohoku Mathematical Journal 43, 355–386 (1937)
  154. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning with application to clustering with side-information. In: Proceedings of Advances in Neural Information Processing Systems, vol. 15, pp. 505–512 (2002)
  155. Xu, D.: Energy, entropy and information potential for neural computation. Ph.D. thesis, University of Florida (1999)
  158. Yang, A.Y., Sastry, S.S., Ganesh, A., Ma, Y.: Fast ℓ1-minimization algorithms and an application in robust face recognition: a review. In: Proceedings of International Conference on Image Processing (2010)
  163. Yuan, X.T., Li, S.: Half quadratic analysis for mean shift: with extension to a sequential data mode-seeking method. In: Proceedings of IEEE International Conference on Computer Vision (2007)
  165. Zhang, T.: Multi-stage convex relaxation for learning with sparse regularization. In: Proceedings of Neural Information Processing Systems, pp. 16–21 (2008)
  166. Zhang, T.H., Tao, D.C., Li, X.L., Yang, J.: Patch alignment for dimensionality reduction. IEEE Transactions on Knowledge and Data Engineering 21(9), 1299–1313 (2009)
  170. Zou, H.: The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1418–1429 (2006)
  171. Zhang, Y., Sun, Z., He, R., Tan, T.: Robust subspace clustering via half-quadratic minimization. In: Proceedings of IEEE International Conference on Computer Vision (2013)

Copyright information

© The Author(s) 2014

Authors and Affiliations

  1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. School of Information and Control, Nanjing University of Information Science and Technology, Nanjing, China
