Why, in computer science, is any complexity that is at most polynomial considered efficient?
For any practical input size(a), algorithms with complexity $n^{\log n}$ are way faster than algorithms that run in time, say, $n^{80}$, yet the first is considered inefficient while the latter is considered efficient. Where's the logic?!
(a) Assume, for instance, that the number of atoms in the observable universe is approximately $10^{80}$.
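To make the comparison in the question concrete, here is a quick sketch (assuming $\log$ means $\log_2$; the function names are illustrative, not from any source). Since $n^{\log_2 n} = 2^{(\log_2 n)^2}$ and $n^{80} = 2^{80 \log_2 n}$, the two step counts cross at $\log_2 n = 80$, i.e. $n = 2^{80} \approx 1.2 \times 10^{24}$ — so for every input size below that, the "inefficient" quasi-polynomial algorithm actually does fewer steps:

```python
import math

def log2_cost_quasipoly(n):
    """log2 of n^(log2 n) steps -- superpolynomial, labeled 'inefficient'."""
    return math.log2(n) ** 2

def log2_cost_poly(n):
    """log2 of n^80 steps -- polynomial, labeled 'efficient'."""
    return 80 * math.log2(n)

# Below the crossover at n = 2^80, the quasi-polynomial bound is smaller:
for n in (10**3, 10**6, 10**12, 10**20):
    assert log2_cost_quasipoly(n) < log2_cost_poly(n)

# Only for astronomically large n does the polynomial bound win out:
assert log2_cost_quasipoly(2**100) > log2_cost_poly(2**100)
```

Working in log space avoids overflowing floating point with quantities like $n^{80}$; the comparison between the logarithms is equivalent to comparing the step counts themselves.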
Asked By : Ran G.
Answered By : Suresh
Another perspective on "efficiency" is that polynomial time allows us to define a notion of efficiency that doesn't depend on machine models. Specifically, there's a variant of the Church-Turing thesis called the "effective Church-Turing thesis" which says that any problem that runs in polynomial time on one kind of machine model will also run in polynomial time on another, equally powerful machine model.
This is a weaker statement than the general Church-Turing thesis, and it is 'sort of' violated by both randomized algorithms and quantum algorithms, but it has not been violated in the sense of being able to solve an NP-hard problem in polynomial time by changing the machine model.
This is ultimately the reason why polynomial time is a popular notion in theoryCS. However, most people realize that this does not reflect "practical efficiency". For more on this, Dick Lipton's post on 'galactic algorithms' is a great read.
Question Source : http://cs.stackexchange.com/questions/210