Complexity and Real Computation

Complexity and Real Computation is a book on the computational complexity theory of real computation. It studies algorithms whose inputs and outputs are real numbers, using the Blum–Shub–Smale machine as its model of computation. For instance, this theory is capable of addressing a question posed in 1991 by Roger Penrose in The Emperor's New Mind: "is the Mandelbrot set computable?"[1]
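
The flavor of the question can be seen in a short escape-time iteration (a minimal Python sketch, not taken from the book; ordinary floating point stands in for the exact real arithmetic assumed by the Blum–Shub–Smale model, and the function name is illustrative). If some iterate of the quadratic map z → z·z + c ever leaves the disk of radius 2, the parameter c certainly lies outside the Mandelbrot set, so the complement of the set can be recognized by running the iteration long enough; whether membership itself is decidable by such a machine is the question that the book's framework makes precise.

    # Floating point here only approximates the exact complex arithmetic that
    # the BSS model assumes; the escape test itself is mathematically exact.
    def escapes(c, max_iterations=1000):
        """Return True if c provably lies outside the Mandelbrot set within
        the given iteration budget, and False if the test is inconclusive."""
        z = 0j
        for _ in range(max_iterations):
            z = z * z + c
            if abs(z) > 2.0:             # escape radius 2: the orbit must diverge
                return True
        return False

    print(escapes(1.0 + 0.0j))   # True: the orbit 0, 1, 2, 5, ... escapes
    print(escapes(-1.0 + 0.0j))  # False: the orbit 0, -1, 0, -1, ... is periodic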

The book was written by Lenore Blum, Felipe Cucker, Michael Shub and Stephen Smale, with a foreword by Richard M. Karp, and published by Springer-Verlag in 1998 (doi:10.1007/978-1-4612-0701-6, ISBN 0-387-98281-7).[2]

Purpose

Stephen Vavasis observes that this book fills a significant gap in the literature: although theoretical computer scientists working on discrete algorithms had been studying models of computation and their implications for the complexity of algorithms since the 1970s, researchers in numerical algorithms had for the most part failed to define their model of computation, leaving their results on a shaky foundation. Beyond the goal of placing this aspect of the topic on a firmer footing, the book also aims to present new results in the complexity theory of real-number computation and to collect previously known results in this theory.[3]

Topics

The introduction of the book reprints the paper "Complexity and real computation: a manifesto", previously published by the same authors. This manifesto explains why classical discrete models of computation such as the Turing machine are inadequate for the study of numerical problems in areas such as scientific computing and computational geometry, motivating the newer model studied in the book. Following this, the book is divided into three parts.[2]

Part I of the book sets up models of computation over an arbitrary ring, with unit cost per ring operation. It provides analogues of recursion theory and of the P versus NP problem in each case, and proves the existence of NP-complete problems analogously to the proof of the Cook–Levin theorem in the classical model, which can be seen as the special case of this theory for modulo-2 arithmetic. The ring of integers is studied as a particular example, as are the algebraically closed fields of characteristic zero, which, from the point of view of NP-completeness in their computational models, are all shown to be equivalent to the complex numbers.[2] (Eric Bach points out that this equivalence can be seen as a form of the Lefschetz principle.)[4]
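
As a rough illustration of the flavor of such a model (a sketch only, not the book's formal definition; the instruction encoding and names below are invented for this example), a machine over the ordered ring of real numbers can be pictured as a register machine in which each arithmetic operation and each sign test costs one unit, regardless of the magnitude of the numbers involved:

    import operator

    def run(program, registers, max_steps=10_000):
        """Execute a branching program over real-valued registers, charging
        unit cost per operation in the spirit of the BSS model.

        Illustrative instruction encoding (not the book's):
          ("op", dst, f, a, b)  -- registers[dst] = f(registers[a], registers[b])
          ("blez", a, target)   -- jump to `target` if registers[a] <= 0
          ("halt",)             -- stop, returning (registers, cost)
        """
        pc, cost = 0, 0
        while cost < max_steps:
            instr = program[pc]
            if instr[0] == "op":
                _, dst, f, a, b = instr
                registers[dst] = f(registers[a], registers[b])
                pc, cost = pc + 1, cost + 1
            elif instr[0] == "blez":
                _, a, target = instr
                pc = target if registers[a] <= 0 else pc + 1
                cost += 1
            else:  # ("halt",)
                return registers, cost
        raise RuntimeError("step budget exceeded")

    # Decide whether the input x (in register 0) satisfies x*x - 2 <= 0,
    # i.e. whether |x| <= sqrt(2); register 2 holds the constant 2 and
    # register 3 records acceptance.
    prog = [
        ("op", 1, operator.mul, 0, 0),   # r1 = x * x
        ("op", 1, operator.sub, 1, 2),   # r1 = x*x - 2
        ("blez", 1, 4),                  # if r1 <= 0, jump to the accept step
        ("halt",),                       # reject
        ("op", 3, operator.add, 2, 2),   # accept: set r3 nonzero
        ("halt",),
    ]
    regs, cost = run(prog, [1.3, 0.0, 2.0, 0.0])
    print("accepted:", regs[3] != 0, "unit cost:", cost)   # accepted: True, cost: 4

Over an unordered ring or field, such as the complex numbers considered later in the same part, the branch instruction would instead test whether a register equals zero.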

Part II focuses on numerical approximation algorithms, on the use of Newton's method for these algorithms, and on author Stephen Smale's alpha theory for numerical certification of the accuracy of the results of these computations. Other topics considered in this section include finding the roots of polynomials and the intersection points of algebraic curves, the condition number of systems of equations, and the time complexity of linear programming with rational coefficients.[2]
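
For a univariate polynomial, the quantities entering the alpha test can be computed directly from the derivatives at a point: beta(f, x) = |f(x)/f'(x)| is the length of the Newton step, gamma(f, x) is the largest value of |f^(k)(x)/(k! f'(x))|^(1/(k-1)) over k >= 2, and alpha = beta · gamma. If alpha falls below a universal threshold (commonly quoted as (13 − 3√17)/4 ≈ 0.1577), then x is an "approximate zero": Newton's method started at x converges quadratically to a true root. The following Python sketch (floating point in place of exact arithmetic; function names are illustrative) evaluates these quantities for a polynomial given by its coefficients:

    from math import factorial

    def derivative(coeffs):
        """Differentiate a polynomial given by coefficients c_0, c_1, ..., c_n
        (constant term first)."""
        return [k * c for k, c in enumerate(coeffs)][1:]

    def evaluate(coeffs, x):
        return sum(c * x**k for k, c in enumerate(coeffs))

    def alpha(coeffs, x):
        """Return (alpha, beta, gamma) at x for a polynomial of degree >= 2;
        requires f'(x) != 0.  Names and interface are illustrative only."""
        fx = evaluate(coeffs, x)
        derivs, d = [], coeffs
        while len(d) > 1:                # evaluate every derivative of f at x
            d = derivative(d)
            derivs.append(evaluate(d, x))
        f1 = derivs[0]                   # f'(x)
        beta = abs(fx / f1)              # length of the Newton step
        gamma = max(
            abs(fk / (factorial(k) * f1)) ** (1.0 / (k - 1))
            for k, fk in enumerate(derivs[1:], start=2)
        )
        return beta * gamma, beta, gamma

    # p(x) = x^2 - 2 near sqrt(2): a small alpha certifies quadratic convergence
    # of Newton's method from this starting point to a genuine root.
    a, b, g = alpha([-2.0, 0.0, 1.0], 1.5)
    print(a, a < 0.1577)                 # roughly 0.028, below the threshold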

Part III provides analogues of structural complexity theory and descriptive complexity theory for real-number computation, including many separations of complexity classes that are provable in this theory even though the analogous separations in classical complexity theory remain unproven. A key tool in this area is the use of the number of connected components of a semialgebraic set to provide a lower bound on the time complexity of an associated computational problem.[2]
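
A classical illustration of this counting technique, due to Michael Ben-Or in the related setting of algebraic computation trees rather than taken from the book, is element distinctness: the set of n-tuples of pairwise distinct real numbers has n! connected components, one for each ordering of the coordinates, so any algebraic decision procedure for the problem requires Ω(n log n) steps, and sorting matches this bound. A minimal Python sketch of the matching upper bound:

    def all_distinct(xs):
        """Decide element distinctness in O(n log n) steps by sorting, matching
        the Omega(n log n) lower bound from counting connected components."""
        xs = sorted(xs)
        return all(a != b for a, b in zip(xs, xs[1:]))

    print(all_distinct([3.1, 2.7, 1.4]))   # True
    print(all_distinct([3.1, 2.7, 3.1]))   # False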

Audience and reception

The book is aimed at the level of a graduate student or researcher in these topics,[2][3] and in places it assumes background knowledge of classical computational complexity theory, differential geometry, topology, and dynamical systems.[3][4]

Reviewer Klaus Meer writes that the book is "very well written" and "perfect to use on a graduate level", and that it gives a good picture both of the state of the art in this area and of the strong connections that can be made between fields as diverse as algebraic number theory, algebraic geometry, mathematical logic, and numerical analysis.[2]

As a minor criticism, aimed more at the Blum–Shub–Smale model than at the book, Stephen Vavasis observes that (unlike with Turing machines) seemingly minor details in the model, such as the ability to compute the floor and ceiling functions, can make big differences in what is computable and how efficiently it can be computed. However, Vavasis writes, "this difficulty is probably inherent in the topic".[3] Relatedly, Eric Bach complains that assigning unit cost to all arithmetic operations can give a misleading idea of a problem's complexity in practical computation,[4] and Vavasis also notes that, as of the publication date of his review, this work had seemingly had little effect on practical research in scientific computing. Despite these issues, he recommends the book as a convenient and clearly written compendium of the theory of numerical computing.[3]

References

  1. McNicholl, Timothy H. (June 2001), "Review of Complexity and Real Computation", SIGACT News, 32 (2): 14–15, doi:10.1145/504192.1005765
  2. Meer, Klaus (1999), "Review of Complexity and Real Computation", Mathematical Reviews, MR 1479636
  3. Vavasis, Stephen A. (June 1999), "Review of Complexity and Real Computation", SIAM Review, 41 (2): 407–409, JSTOR 2653097
  4. Bach, Eric (2001), "Review of Complexity and Real Computation", Discrete Dynamics in Nature and Society, 6: 145–146, doi:10.1155/S1026022601000152