There's a lot of debate about what exactly the Church-Turing thesis is, but roughly it's the claim that "undecidable" should be taken to mean "undecidable by a universal Turing machine."
I'm wondering if there's an analogous statement for time complexity, i.e. an argument that if some language is decided in $\Theta\left(f(n)\right)$ time on a universal Turing machine, then we should say its time complexity is $\Theta\left(f(n)\right)$.
This isn't equivalent to the Church-Turing thesis: for example, quantum computers decide precisely those languages which are decidable on a classical Turing machine, but they may run that decision procedure more quickly.
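To make that gap concrete (this illustration is mine, not part of the original question): for unstructured search over $N$ items, classical and quantum machines decide the same language, but the classical worst case costs $\Theta(N)$ queries, while Grover's algorithm costs $\Theta\left(\sqrt{N}\right)$, a bound known to be optimal:

$$T_{\text{classical}}(N) = \Theta\left(N\right), \qquad T_{\text{quantum}}(N) = \Theta\left(\sqrt{N}\right).$$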
Asked By : Xodarap
Answered By : Niel de Beaudrap
Short answer
The thesis that all reasonable models of computation are polynomially equivalent (that is, that the amount of work they perform, whether sequentially or in parallel, differs by at most a polynomial factor) is generally known as the Strong Church-Turing thesis or the Extended Church-Turing thesis.
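A standard example of the kind of polynomial overhead the thesis tolerates (a textbook simulation result, not something specific to this answer): a $k$-tape Turing machine running in time $T(n)$ can be simulated by a single-tape Turing machine with at most a quadratic slowdown, so the two models agree on time complexity up to a polynomial:

$$\mathrm{TIME}_{k\text{-tape}}\left(T(n)\right) \subseteq \mathrm{TIME}_{1\text{-tape}}\left(O\left(T(n)^2\right)\right).$$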
Details
The idea that all reasonable models of computation are equivalent up to polynomial factors is most important to two groups of researchers:
Those working in quantum computation (who most often believe that it is false, or at least that if it were true, this would have very important implications for physics), and
Those who are specifically skeptical of the feasibility of quantum computers (who believe that the thesis is true, typically because quantum computers are the only candidate counterexample that has come close to being a 'reasonable' model of computation, and because they feel that quantum computers would somehow require access to an unreasonable amount of information to carry out their computations).
I did a bit of a literature search, starting from Sid's link to the Computational Complexity blog entry above. That blog entry specifically concerned the Strong Church-Turing thesis, and asked whether anyone actually cared about such an "extended" version of the Church-Turing thesis before quantum computing came along.
It turns out, however, that people have been trying to formulate this notion for quite a while. At the very least, the Strong Church-Turing thesis arose naturally from concerns about what "reasonable computational models" could or could not achieve, in the context of sequential versus parallel computing. There are in fact both a "sequential" and a "parallel" computation thesis, concerning how much sequential or parallel models of computing, respectively, can differ from one another in computational power. Here are some quotations showing some of the history of the sequential thesis, which is the one that quantum computing would most naturally be construed to violate.
The belief that all reasonable sequential computers which will ever be dreamed of have polynomially related execution times is called the sequential computation thesis.
— Goldschlager, Lister. Computer science: a modern introduction. Prentice/Hall International, 1982 (p. 96).
[The] sequential computation thesis [10] states that time on all "reasonable" sequential machine models is related by a polynomial.
— Parberry, Parallel speedup of sequential machines: a defense of parallel computation thesis. ACM SIGACT News 18 (1986), pp. 54-67.
"Reasonable" machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space.
— van Emde Boas, Machine Models and Simulations. Handbook of Theoretical Computer Science A, Elsevier, 1990.
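A quantitative reading of van Emde Boas's formulation (my paraphrase, assuming the usual time- and space-bounded classes defined relative to a machine model $M$): for any two "reasonable" models $M_1$ and $M_2$ there exist a polynomial $p$ and a constant $c > 0$ such that

$$\mathrm{TIME}_{M_1}\left(T(n)\right) \subseteq \mathrm{TIME}_{M_2}\left(p\left(T(n)\right)\right), \qquad \mathrm{SPACE}_{M_1}\left(S(n)\right) \subseteq \mathrm{SPACE}_{M_2}\left(c \cdot S(n)\right).$$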
Question Source: http://cs.stackexchange.com/questions/2587