Cybernetics and Systems Analysis, Volume 48, Issue 4, pp. 621–635

A randomized method for solving discrete ill-posed problems

New means of cybernetics, informatics, computer engineering, and systems analysis


An approach is proposed for the stable solution of discrete ill-posed problems. It combines a random projection of the initial ill-conditioned matrix, which has an ill-defined numerical rank, with pseudo-inversion of the resulting projected matrix. To select the dimension of the projection matrix, we propose to use criteria for model selection and for the choice of a regularization parameter. Results of experimental studies on well-known examples of discrete ill-posed problems are presented: their solution errors are close to the Tikhonov regularization error, while the reduction of the matrix dimension by the projection lowers the computational cost, especially at high noise levels.


Keywords: discrete ill-posed problem, random projection, regularization
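The scheme described in the abstract — multiplying the ill-conditioned matrix by a random projection matrix and pseudo-inverting the result, with the projection dimension playing the role of a regularization parameter — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian projection, the synthetic test problem, and the error-scan over the projection dimension are all assumptions made for demonstration (in a real application the true solution is unknown, and the paper proposes model-selection criteria for choosing the dimension instead).

```python
import numpy as np

def random_projection_solve(A, b, k, rng):
    """Project the m observations of A x ~= b down to k dimensions with a
    random Gaussian matrix R, then pseudo-invert the projected matrix R A."""
    m = A.shape[0]
    R = rng.standard_normal((k, m)) / np.sqrt(k)  # random projection matrix
    return np.linalg.pinv(R @ A) @ (R @ b)

# A synthetic discrete ill-posed problem: rapidly decaying singular values
# make A severely ill-conditioned, and the right-hand side is noisy.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -10, n)            # singular values from 1 down to 1e-10
A = U @ np.diag(s) @ V.T
x_true = np.ones(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

# The projection dimension k acts as the regularization parameter.  Here,
# since the simulated x_true is known, we simply scan k and record the
# actual error to make the regularizing effect visible; the paper selects
# k with model-selection criteria instead.
best_k, best_err = None, np.inf
for k in range(2, n):
    err = np.linalg.norm(random_projection_solve(A, b, k, rng) - x_true)
    if err < best_err:
        best_k, best_err = k, err

err_full = np.linalg.norm(np.linalg.pinv(A) @ b - x_true)  # unregularized
print(best_k, best_err, err_full)     # best_err is far below err_full
```

An intermediate projection dimension discards the noise-dominated small-singular-value directions, so the error of the projected solution stays bounded while the unregularized pseudo-inverse solution is destroyed by noise amplification.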





Copyright information

© Springer Science+Business Media, Inc. 2012

Authors and Affiliations

  1. International Scientific-Educational Center of Information Technologies and Systems, National Academy of Sciences and Ministry of Education and Science, Youth and Sports of Ukraine, Kyiv, Ukraine
