
Relation of Space and Time in Complexity?

Problem Detail: 

I'm looking for some clarification on some concepts/facts I came across while studying for a class.

I was reading the following Wikipedia article, and the specific section and statement below intrigued me.

http://en.wikipedia.org/wiki/Computational_complexity_theory#Important_complexity_classes "It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem"

I also read that NTMs can be simulated by DTMs but that the shortest accepting computation of the DTM is exponential with respect to the shortest accepting computation of the target NTM.

My questions are:

1.) Are PSPACE and NPSPACE the set of all problems that require at least polynomial space to be solved on Deterministic and Non-deterministic Turing machines respectively?

2.) If so, is the actual size of the polynomial space required dependent on the size of the input?

3.) For P and NP, they are each the sets of problems that require at least polynomial time to be solved on DTMs and NTMs respectively, correct?

4.) Is the reason that the shortest accepting computation of a DTM simulating a target NTM is exponential with respect to the shortest accepting computation of an NTM due to the exponential explosion of the number of configurations that an NTM supports as input grows for a given problem?

5.) My last and overarching question is: Are the differences in the set of problems that can be solved in polynomial time on DTMs versus NTMs related to time/space tradeoffs where DTMs can't run some polynomial NTM algorithms in polynomial time because they don't have the same "space" that an NTM has available to it?

I'd also appreciate any reading you can suggest to me on time/space tradeoffs and NTMs versus DTMs.

Asked By : Ethan Willis

Answered By : David Richerby

That's rather a lot of questions for one Stack Exchange post. One or two closely related questions per post is best. But here goes...

1.) Are PSPACE and NPSPACE the set of all problems that require at least polynomial space to be solved on Deterministic and Non-deterministic Turing machines respectively?

No. They're the set of all problems that require at most polynomial space to be solved on deterministic and nondeterministic Turing machines, respectively.

2.) If so, is the actual size of the polynomial space required dependent on the size of the input?

No. For any given machine, the polynomial is fixed. The input to the polynomial is the length of the input. (E.g., if the machine runs in space $3n^2+4$ and the input has length 3, the Turing machine will need at most 31 cells of storage on its work tape.)
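To see that the bound is one fixed polynomial evaluated at the input length, here is a tiny Python sketch; the polynomial $3n^2+4$ is just the hypothetical bound from the example above, not a property of any real machine.

```python
# Minimal sketch: the space bound is a fixed polynomial, evaluated at the
# length of the input. 3n^2 + 4 is the hypothetical bound from the example.
def space_bound(n: int) -> int:
    return 3 * n ** 2 + 4

for n in (3, 10, 100):
    print(n, space_bound(n))  # 3 -> 31, 10 -> 304, 100 -> 30004
```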

3.) For P and NP, they are each the sets of problems that require at least polynomial time to be solved on DTMs and NTMs respectively, correct?

Again, at most.

4.) Is the reason that the shortest accepting computation of a DTM simulating a target NTM is exponential with respect to the shortest accepting computation of an NTM due to the exponential explosion of the number of configurations that an NTM supports as input grows for a given problem?

We don't know for sure that there's any loss of efficiency at all when simulating a nondeterministic machine on a deterministic one. But the reason we don't know how to do it with better than exponential slowdown is, as you suggest, that we don't know any better technique than trying all the possible nondeterministic choices, one by one.
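To make "trying all the possible nondeterministic choices, one by one" concrete, here is a minimal Python sketch (not any particular construction from the literature). The toy "machine" guesses one bit per step and accepts if the guessed subset of `weights` sums to `target`; the deterministic simulation simply enumerates all $2^n$ choice sequences, which is exactly where the exponential slowdown comes from.

```python
from itertools import product

def deterministic_simulation(weights, target):
    """Deterministically simulate a toy nondeterministic machine that guesses
    a subset of `weights` and accepts iff the subset sums to `target`."""
    n = len(weights)
    for choices in product((0, 1), repeat=n):   # all 2^n branches, one by one
        if sum(w for w, c in zip(weights, choices) if c) == target:
            return True                         # some branch accepts => accept
    return False                                # no branch accepts => reject

print(deterministic_simulation([3, 7, 2, 8], 10))  # True  (3 + 7)
print(deterministic_simulation([3, 7, 2, 8], 6))   # False
```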

5.) My last and overarching question is: Are the differences in the set of problems that can be solved in polynomial time on DTMs versus NTMs related to time/space tradeoffs where DTMs can't run some polynomial NTM algorithms in polynomial time because they don't have the same "space" that an NTM has available to it?

Not really. It's because the nondeterministic machine in a sense computes lots of options in parallel and it accepts if any of those parallel options accepts. As I said above, the only way we know how to simulate all of these parallel computations is to do them sequentially, one after another. But there isn't any particular time/space trade-off: if a nondeterministic machine makes some computation in time $t(n)$ and space $s(n)$, a deterministic machine can do the same computation in time $2^{O(t(n))}$ and space $s(n)+O(t(n))$. The exponential blow-up in time is because a $q$-state nondeterministic machine might make up to $q$ nondeterministic choices (i.e., spawn up to $q$ "parallel threads") at each step of its execution; the smaller increase in space is because of the book-keeping required to keep track of which choices we tried at each step.
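As a hedged illustration of that accounting (a sketch, not the standard textbook construction), the depth-first simulation below stores only the current path of choices, so its extra space on top of one configuration is $O(t(n))$, while its running time can be as large as $q^{t(n)}$ because it may visit every branch. The machine model here is hypothetical: `step(config, choice)` returns the next configuration and `branching(config)` gives the number of available choices.

```python
def simulate(step, branching, accepting, start, max_steps):
    """Depth-first simulation of a nondeterministic machine: try the
    'parallel threads' sequentially, keeping only the current branch."""
    def dfs(config, depth):
        if accepting(config):
            return True
        if depth == max_steps:
            return False
        # time: up to branching**max_steps branches are explored one by one;
        # space: recursion depth (the bookkeeping stack) is at most max_steps
        return any(dfs(step(config, c), depth + 1)
                   for c in range(branching(config)))
    return dfs(start, 0)

# Toy use: configurations are (index, running_sum); each choice picks or skips
# the next weight, and the machine accepts when the running sum hits `target`.
weights, target = [3, 7, 2, 8], 12
print(simulate(
    step=lambda cfg, c: (cfg[0] + 1, cfg[1] + (weights[cfg[0]] if c else 0)),
    branching=lambda cfg: 2 if cfg[0] < len(weights) else 0,
    accepting=lambda cfg: cfg[1] == target,
    start=(0, 0),
    max_steps=len(weights),
))  # True: 3 + 7 + 2 = 12
```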


Question Source : http://cs.stackexchange.com/questions/22109
