
An Analysis of the Superiorization Method via the Principle of Concentration of Measure

  • Yair Censor
  • Eliahu Levy

Abstract

The superiorization methodology is intended to work with the input data of constrained minimization problems, i.e., a target function and a constraints set. However, it is based on a way of thinking antipodal to the one that underlies constrained minimization methods. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce (not necessarily minimize) target function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Despite an ever-growing body of publications that supply evidence of the success of the superiorization method on various problems, a guarantee that the local target function reduction steps properly accumulate to a global reduction of the target function value is still missing. We propose an analysis, based on the principle of concentration of measure, that attempts to alleviate this guarantee question of the superiorization method.
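To illustrate the algorithmic structure the abstract describes, here is a minimal sketch (not the authors' implementation): a basic sequential-projection feasibility-seeking routine is interleaved with perturbation steps, of summable step sizes, that steer the iterates toward smaller target function values. The half-space constraint model, the function names, and the geometric step sizes `alpha**k` are illustrative assumptions.

```python
import numpy as np

def project_onto_halfspaces(x, A, b):
    """One sweep of sequential projections onto the half-spaces a_i^T x <= b_i
    (a basic feasibility-seeking operator; any perturbation-resilient
    feasibility-seeking algorithm could play this role)."""
    for a_i, b_i in zip(A, b):
        violation = a_i @ x - b_i
        if violation > 0:
            x = x - (violation / (a_i @ a_i)) * a_i
    return x

def superiorize(x0, A, b, grad_f, n_iter=100, alpha=0.5):
    """Superiorized projection method: before each feasibility-seeking step,
    perturb x in a nonascending direction of the target function f, using the
    summable step sizes alpha**k so the feasibility-seeking behavior survives."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        g = grad_f(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - (alpha ** k) * g / norm   # summable perturbation step
        x = project_onto_halfspaces(x, A, b)  # feasibility-seeking step
    return x
```

For example, with the target f(x) = ||x||^2 and the single constraint x_1 >= 1, the perturbations pull the iterates toward the origin while the projections keep them feasible; the target value is reduced, though (as the abstract stresses) such local reductions carry no general guarantee of accumulating to a global reduction.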

Keywords

Superiorization · Perturbation resilience · Feasibility-seeking algorithm · Target function reduction · Concentration of measure · Superiorization matrix · Linear superiorization · Hilbert-Schmidt norm · Random matrix

Acknowledgements

We thank two anonymous reviewers for their constructive comments. This work was supported by research grant no. 2013003 of the United States-Israel Binational Science Foundation (BSF) and by the ISF-NSFC joint research program grant No. 2874/19.

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Mathematics, University of Haifa, Haifa, Israel
  2. Department of Mathematics, Technion – Israel Institute of Technology, Technion City, Israel
