Which Algorithms are Feasible? Maxent Approach

  • D. E. Cooke
  • V. Kreinovich
  • L. Longpré
Part of the Fundamental Theories of Physics book series (FTPH, volume 98)

Abstract

It is well known that not all algorithms are feasible; whether an algorithm is feasible or not depends on how many computational steps this algorithm requires. The problem with the existing definitions of feasibility is that they are rather ad hoc. Our goal is to use the maximum entropy (MaxEnt) approach to derive better-motivated definitions.

If an algorithm is feasible, then, intuitively, we would expect the following to be true: If we have a flow of problems with finite average length \( \bar{l} \), then we expect the average time \( \bar{t} \) to be finite as well.

Thus, we can say that an algorithm is necessarily feasible if \( \bar{t} \) is finite for every probability distribution for which \( \bar{l} \) is finite, and possibly feasible if \( \bar{t} \) is finite for some probability distribution for which \( \bar{l} \) is finite.

If we consider all possible probability distributions, then these definitions trivialize: every algorithm is possibly feasible, and only linear-time algorithms are necessarily feasible.
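The second half of this claim can be checked numerically. A minimal sketch (the distribution \( p(n) \propto 1/n^3 \) and the function names are our illustrative choices, not from the paper): this distribution has a finite mean length, yet already a quadratic-time algorithm has infinite expected running time under it, so no super-linear algorithm can be necessarily feasible over all distributions.

```python
from math import fsum

# Illustrative distribution (not from the paper): p(n) = C / n**3 over input
# lengths n = 1, 2, ...  Its mean length C * sum_n 1/n**2 is finite, but for a
# quadratic-time algorithm t(n) = n**2 the expected time
# sum_n p(n) * t(n) = C * sum_n 1/n diverges like the harmonic series.

C = 1.0 / fsum(1.0 / n**3 for n in range(1, 10**6))  # normalizing constant

def partial_mean_length(n_max):
    """Partial sum of the mean input length sum_n n * p(n)."""
    return fsum(C * n / n**3 for n in range(1, n_max + 1))

def partial_expected_time(n_max):
    """Partial sum of the expected running time for t(n) = n**2."""
    return fsum(C * n**2 / n**3 for n in range(1, n_max + 1))

# partial_mean_length stabilizes (it converges to C * pi**2 / 6), while
# partial_expected_time keeps growing like C * log(n_max)
```

Raising the cutoff `n_max` leaves the mean length essentially unchanged but keeps increasing the expected time, which is the divergence the claim relies on.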

To make the definitions less trivial, we will use the main idea of MaxEnt and consider only distributions for which the entropy is the largest possible. Since we are interested in the distributions for which the average length is finite, it is reasonable to define MaxEnt distributions as follows: we fix a number \( l_0 \) and consider distributions for which the entropy is the largest among all distributions with the average length \( \bar{l} = l_0 \).
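It is a standard fact that among all distributions on lengths \( n = 0, 1, 2, \ldots \) with mean \( l_0 \), entropy is maximized by the geometric distribution \( p(n) = (1-q)\,q^n \) with \( q = l_0/(1+l_0) \). A minimal sketch (the function name and the truncation point are ours):

```python
# Standard result (not specific to this paper): among distributions on
# n = 0, 1, 2, ... with fixed mean l0, the geometric distribution
# p(n) = (1 - q) * q**n with q = l0 / (1 + l0) maximizes entropy.

def maxent_length_distribution(l0, n_max=10_000):
    """Return p(0), ..., p(n_max) for the MaxEnt (geometric) distribution
    with mean length l0; the tail beyond n_max is negligible for moderate l0."""
    q = l0 / (1.0 + l0)
    return [(1.0 - q) * q**n for n in range(n_max + 1)]

p = maxent_length_distribution(l0=5.0)
mean = sum(n * pn for n, pn in enumerate(p))  # close to 5.0
```

The truncation at `n_max` only drops a geometrically small tail, so both the total probability and the mean are recovered to high accuracy.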

If, in the above definitions, we only allow such “MaxEnt” distributions, then the above feasibility notions become non-trivial: an algorithm is possibly feasible if and only if its average running time \( \bar{t}(n) \) over all inputs of length n grows slower than some exponential function \( C^n \), and necessarily feasible if and only if it is sub-exponential (i.e., \( \bar{t}(n) \) grows slower than every exponential function \( C^n \)).
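A quick numerical check of this boundary (our sketch, under the geometric MaxEnt length distribution \( p(n) = (1-q)\,q^n \)): the expected running time \( \sum_n p(n)\,\bar{t}(n) \) converges exactly when \( \bar{t}(n) \) grows slower than \( (1/q)^n \), so a polynomial time bound gives a small finite value while a fast enough exponential one makes the partial sums blow up.

```python
# Sketch under the geometric MaxEnt distribution p(n) = (1 - q) * q**n:
# the expected time sum_n p(n) * t(n) converges exactly when t(n) grows
# slower than (1/q)**n.  With q = 1/2 (so 1/q = 2), a cubic t(n) = n**3
# converges, while t(n) = 3**n (since 3 > 2) diverges.

def expected_time(t, q, n_max):
    """Partial sum of the expected running time under p(n) = (1-q) * q**n."""
    return sum((1.0 - q) * q**n * t(n) for n in range(n_max + 1))

poly = expected_time(lambda n: n**3, q=0.5, n_max=500)    # ~13, fully converged
expo = expected_time(lambda n: 3.0**n, q=0.5, n_max=500)  # astronomically large
```

For the cubic case the sum is the third moment of a geometric distribution with mean 1, which works out to 13; for the exponential case each term is \( \tfrac12 (3/2)^n \), so the partial sums grow without bound.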

Key words

maximum entropy; feasible algorithm; average computational complexity; Moore’s law

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • D. E. Cooke (1)
  • V. Kreinovich (1)
  • L. Longpré (1)

  1. Department of Computer Science, University of Texas at El Paso, El Paso, USA