
[Solved]: Exponential-size numbers in an NP-completeness reduction

Problem Detail: 

In the proof of Theorem 4 in [GS'12], the authors reduce an instance of PARTITION to their problem. To this end, they create for each element $a_i$ of the PARTITION instance a number $2^{c \cdot a_i}$ for a suitable constant $c$, which is then used in the reduction. They argue that the instance remains of polynomial size, since these exponentially large numbers can be encoded implicitly. Nevertheless, can we really work with those numbers in polynomial time? If we add two such numbers $2^{c \cdot a_i}$ and $2^{c \cdot a_j}$ with $a_i \neq a_j$ in the course of the algorithm, the resulting number can no longer be encoded in this way. Is the reduction still valid?

[GS'12]: Martin Groß and Martin Skutella, "Generalized maximum flows over time", 2012.
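To make the concern concrete (this illustration is not from the paper): a single number $2^{c \cdot a_i}$ can be stored implicitly by recording only its exponent $c \cdot a_i$, which takes $O(\log(c \cdot a_i))$ bits. The sum of two distinct powers of two, however, is never itself a power of two:

$$2^{c \cdot a_i} + 2^{c \cdot a_j} = 2^{c \cdot a_j}\left(2^{c(a_i - a_j)} + 1\right) \quad \text{for } a_i > a_j,$$

a number with two set bits in binary; writing it out explicitly takes $c \cdot a_i + 1$ bits, which is exponential in the size of the original PARTITION instance.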

Asked By : user1742364

Answered By : Kaveh

Changing the encoding of the inputs changes the problem: when inputs are given succinctly, the problem can become much harder. On the face of what you are describing, they are proving NP-hardness of a succinct version of the problem, not of the original problem.

You can also think of it as a reduction to the original problem from Partition with the numbers given in unary. The unary version of Partition is in P, so such a reduction does not show that the problem is NP-hard. (It might be possible to use 3-Partition in place of Partition, since 3-Partition remains NP-hard even when the numbers in the input are encoded in unary.)
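To see why the unary version is in P, recall the standard dynamic program for Partition: it runs in $O(n \cdot S)$ time, where $S = \sum_i a_i$, and under unary encoding the input itself has length $\Theta(S)$, so this running time is polynomial in the input size. A minimal sketch in Python (my own illustration, not taken from the paper):

```python
def partition_unary(a):
    """Decide PARTITION by dynamic programming over achievable subset sums.

    Runs in O(n * S) time with S = sum(a).  If the a_i are written in
    unary, the input already has length Theta(S), so this is a
    polynomial-time algorithm for the unary-encoded problem.
    """
    total = sum(a)
    if total % 2 != 0:
        return False
    target = total // 2
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in a:
        # Iterate downwards so each element is used at most once.
        for s in range(target, x - 1, -1):
            if reachable[s - x]:
                reachable[s] = True
    return reachable[target]

# Example: {3, 1, 1, 2, 2, 1} splits into two halves of sum 5.
print(partition_unary([3, 1, 1, 2, 2, 1]))  # True
```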

The issue is that it is not clear what the inputs to the problem are. The authors seem to be working with a version of the problem where the $\gamma_e$ are not part of the input but are determined from $\tau_e$, which is part of the input. In the theorem, $\gamma_e$ seems to be fixed to $2^{c\tau_e}$ (where $c$ is a fixed constant) and is not part of the input. If the $\gamma_e$ are not part of the input and are determined from the $\tau_e$ as stated, then it does not matter that they are exponentially large.
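As a concrete (hypothetical) illustration of the size gap: for an edge with $\tau_e = 1000$ and $c = 1$, the input stores $\tau_e$ in binary using only $10$ bits, whereas $\gamma_e = 2^{1000}$ written out explicitly would occupy $1001$ bits. As long as $\gamma_e$ is derived from $\tau_e$ rather than listed in the input, its magnitude therefore does not contribute to the input length.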

In summary, the paper is not very clear in its treatment of this issue. It seems that in the theorem they consider those numbers either as not being part of the input to the problem (being determined from $\tau_e$), or as being given in the succinct form $a2^e$. The paragraph after the theorem, where they say it is open whether the result holds when the usual binary encoding of numbers is used, supports this reading. So they are not proving that the problem where $\gamma$ is part of the input and encoded as an ordinary binary number is NP-hard; that is still open according to them. They are proving that the variants of the problem mentioned above are NP-hard.
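To see what working with the succinct form $a2^e$ would involve, here is a minimal sketch (my own, under the assumption that numbers are stored as pairs $(a, e)$ meaning $a \cdot 2^e$). It also makes the questioner's worry precise: adding two such numbers yields a mantissa of roughly $|e_1 - e_2|$ bits, so sums stay succinct only when the exponents are close together.

```python
def normalize(a, e):
    """Make the mantissa odd so that (a, e) is a canonical pair for a * 2**e."""
    while a and a % 2 == 0:
        a //= 2
        e += 1
    return a, e

def add_succinct(x, y):
    """Add two numbers given succinctly as pairs (a, e) meaning a * 2**e.

    The result is again a pair (a, e), but its mantissa can have about
    |e1 - e2| bits, so the representation only remains small when the
    exponents are close -- exactly the blow-up raised in the question.
    """
    (a1, e1), (a2, e2) = x, y
    if e1 < e2:
        (a1, e1), (a2, e2) = (a2, e2), (a1, e1)
    return normalize(a1 * 2 ** (e1 - e2) + a2, e2)

# 2^100 + 2^3 = (2^97 + 1) * 2^3: the mantissa already needs 98 bits.
a, e = add_succinct((1, 100), (1, 3))
print(a.bit_length(), e)  # 98 3
```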

Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/29203
