Asymptotic computational complexity

In computational complexity theory, asymptotic computational complexity is the use of asymptotic analysis to estimate the computational complexity of algorithms and computational problems, and it is commonly associated with the use of big O notation.
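
As an illustration, consider the following minimal sketch (the function name count_pairs is hypothetical, chosen only for illustration): the exact operation count depends on constant factors and lower-order terms, but the asymptotic estimate discards these and keeps only the dominant growth rate.

    def count_pairs(items):
        """Count ordered pairs (i, j) with i != j and items[i] == items[j].

        The nested loops perform on the order of n * (n - 1) comparisons
        for an input of length n, so the asymptotic time complexity is
        O(n^2): constant factors and lower-order terms are discarded in
        the estimate.
        """
        count = 0
        for i, x in enumerate(items):
            for j, y in enumerate(items):
                if i != j and x == y:
                    count += 1
        return count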

Scope

With respect to computational resources, asymptotic time complexity and asymptotic space complexity are the quantities most commonly estimated. Other asymptotically estimated quantities include circuit complexity and various measures of parallel computation, such as the number of (parallel) processors.
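
As a small illustration of time and space as separate resources, the two sketches below (hypothetical function names, assuming a Python-like cost model with expected constant-time set operations) solve the same problem with different trade-offs: one uses quadratic time and constant extra space, the other linear expected time and linear extra space.

    def has_duplicate_quadratic(items):
        # O(n^2) time, O(1) additional space: compare every pair directly.
        n = len(items)
        for i in range(n):
            for j in range(i + 1, n):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicate_linear(items):
        # O(n) expected time, O(n) additional space: remember elements seen so far.
        seen = set()
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False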

Since the ground-breaking 1965 paper by Juris Hartmanis and Richard E. Stearns[1] and the 1979 book by Michael Garey and David S. Johnson on NP-completeness,[2] the term "computational complexity" (of algorithms) has commonly referred to asymptotic computational complexity.

Further, unless specified otherwise, the term "computational complexity" usually refers to the upper bound for the asymptotic computational complexity of an algorithm or a problem, which is usually written in terms of big O notation, e.g., O(n²). Other types of (asymptotic) computational complexity estimates are lower bounds ("big Omega" notation; e.g., Ω(n)) and asymptotically tight estimates, when the asymptotic upper and lower bounds coincide (written using "big Theta" notation; e.g., Θ(n log n)).
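
For reference, the three kinds of bounds can be stated with the standard textbook definitions (for non-negative functions f and g):

    \begin{align*}
    f(n) = O(g(n))      &\iff \exists c > 0,\ \exists n_0,\ \forall n \ge n_0:\ f(n) \le c\, g(n) \\
    f(n) = \Omega(g(n)) &\iff \exists c > 0,\ \exists n_0,\ \forall n \ge n_0:\ f(n) \ge c\, g(n) \\
    f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
    \end{align*}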

A further tacit assumption is that worst-case analysis of computational complexity is meant unless stated otherwise. An alternative approach is the probabilistic (average-case) analysis of algorithms.
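
The difference can be seen on a simple example, a minimal sketch assuming the hypothetical function linear_search below:

    def linear_search(items, target):
        """Return the index of target in items, or -1 if it is absent.

        Worst-case analysis: when target is absent or appears last, the
        loop runs n times, so the worst-case time complexity is O(n).
        A probabilistic (average-case) analysis instead assumes a
        distribution over inputs; e.g., if target occupies a uniformly
        random position, the expected number of comparisons is
        (n + 1) / 2 -- still O(n), but with a smaller constant.
        """
        for i, x in enumerate(items):
            if x == target:
                return i
        return -1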

Types of algorithms considered

In most practical cases deterministic algorithms or randomized algorithms are discussed, although theoretical computer science also considers nondeterministic algorithms and other advanced models of computation.
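
As one concrete example of a randomized algorithm, the sketch below (a simplified, non-in-place variant written only for illustration) chooses a pivot uniformly at random; its expected running time is O(n log n) on every input, although the worst case over the random choices remains O(n^2).

    import random

    def randomized_quicksort(items):
        """Quicksort with a uniformly random pivot (illustrative sketch)."""
        if len(items) <= 1:
            return list(items)
        pivot = random.choice(items)
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)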

References

  1. Hartmanis, J.; Stearns, R. E. (1965). "On the computational complexity of algorithms". Transactions of the American Mathematical Society. 117: 285–306. doi:10.1090/S0002-9947-1965-0170805-7.
  2. Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman & Co.