In his textbook "Introduction to the Theory of Computation, Second Edition," Sipser defines the running time of a nondeterministic Turing machine as follows:
Let $N$ be a nondeterministic Turing machine that is a decider. The running time of $N$ is the function $f : \mathbb{N} \rightarrow \mathbb{N}$, where $f(n)$ is the maximum number of steps that $N$ uses on any branch of its computation on any input of length $n$ [...].
Part of this definition says that the running time of the machine $N$ is the maximum number of steps taken by that machine on any branch. Is there a reason that all branches are considered? It seems like the length of the shortest accepting computation would be a better measure (assuming, of course, that the machine halts), since you would never need to run the machine longer than that before you could conclude whether or not it accepts.
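(For reference, the quoted definition can be restated symbolically. The notation $\mathrm{steps}(N, w, b)$ below is ad hoc, standing for the number of steps $N$ takes on branch $b$ of its computation on input $w$:)

$$f(n) = \max\,\{\, \mathrm{steps}(N, w, b) \;:\; w \in \Sigma^n,\ b \text{ a branch of } N\text{'s computation on } w \,\}.$$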
Asked By : templatetypedef
Answered By : Niel de Beaudrap
Because you don't know ahead of time whether or not a given input is a 'yes' instance (that is, whether any accepting path exists at all), it makes sense, for the sake of uniformity, to bound the running time independently of any particular feature of the computation paths. In particular, on a 'no' instance there is no accepting branch, so "the length of the shortest accepting computation" is not even defined there. Thus, it makes sense to require the worst-case behaviour over all branches to be polynomial time, regardless of whether or not any accepting paths exist.
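To make the point concrete, consider how one would actually extract $N$'s verdict: deterministically explore its computation tree. The Python sketch below is not part of the original answer; the configuration representation and the callables `successors`, `accepts` and `rejects` are hypothetical stand-ins for the machine's transition behaviour. It illustrates that a 'no' verdict only becomes available after every branch has been exhausted, i.e. after roughly as many levels as the longest branch, whereas the shortest accepting branch says nothing about that cost.

```python
from collections import deque

def decide_by_exhaustive_search(start, successors, accepts, rejects):
    """Explore every computation branch of a (hypothetical) nondeterministic
    decider breadth-first.  `successors(c)` yields the configurations
    reachable from c in one step; `accepts`/`rejects` test halting
    configurations.  Terminates because a decider has no infinite branch."""
    frontier = deque([(start, 0)])
    deepest = 0                      # longest branch depth explored so far
    while frontier:
        config, depth = frontier.popleft()
        deepest = max(deepest, depth)
        if accepts(config):
            return True, deepest     # some branch accepts: a 'yes' instance
        if rejects(config):
            continue                 # this branch halts without accepting
        for nxt in successors(config):
            frontier.append((nxt, depth + 1))
    # Only after every branch has been followed to its halting configuration
    # can we report 'no', so the cost is governed by the longest branch,
    # not by the (possibly nonexistent) shortest accepting one.
    return False, deepest
```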
Best Answer from Computer Science Stack Exchange
Question Source : http://cs.stackexchange.com/questions/2887