
[Solved]: Richard Karp's 21 NP-Hard problems, the meaning of his research?

Problem Detail: 

In Richard Karp's paper "Reducibility Among Combinatorial Problems" he lists 21 NP-hard problems. Though I can somewhat understand the ideas and motivation behind the paper, I am searching for some clarity.

From what I know and understand, this paper showed the parallels between the various problems: specifically, that they are all, abstractly, the same problem. Given a solution (specifically a polynomial-time solution?) to one of these problems, you could solve all of them, because they are all statements of a more abstract problem.

Is this a proper summary of the work he presents? Is there a more eloquent and proper yet short way to say it?

Karp's paper: http://www.cs.berkeley.edu/~luca/cs172/karp.pdf

Asked By : KDecker

Answered By : Luke Mathieson

The short way to say it is that these problems are $\mathcal{NP}$-complete.

Of course this only has meaning to those who understand what $\mathcal{NP}$-complete means.
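For reference, the standard statement is this: a problem $L$ is $\mathcal{NP}$-complete if $L \in \mathcal{NP}$ and every $L' \in \mathcal{NP}$ is polynomial-time reducible to $L$, written $L' \leq_p L$, meaning there is a polynomial-time computable function $f$ such that $x \in L'$ if and only if $f(x) \in L$. The second condition on its own is what "$\mathcal{NP}$-hard" refers to, which is why the problems in the paper are more precisely called $\mathcal{NP}$-complete rather than just $\mathcal{NP}$-hard.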

Not only does this say that these problems are all somehow equivalent (polynomial-time equivalent, to be marginally more precise), but also that you can use them to solve any problem in $\mathcal{NP}$, not just each other, with only a polynomial blow-up in the time cost. This immediately gives us that if any one of them has a polynomial-time algorithm, then everything in $\mathcal{NP}$ does, and hence we would have $\mathcal{P} = \mathcal{NP}$ (summed up in Theorem 3 of the paper).
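To make "polynomial-time equivalent" concrete, here is a minimal Python sketch of the flavour of such a reduction: a CLIQUE instance $(G, k)$ maps to a NODE COVER (vertex cover) instance on the complement graph with budget $|V| - k$. The function names and the brute-force checkers below are purely illustrative and not taken from the paper; only the mapping itself is the reduction.

```python
from itertools import combinations

def clique_to_vertex_cover(vertices, edges, k):
    """Map a CLIQUE instance (G, k) to a NODE COVER instance (G', |V| - k)."""
    vertices = set(vertices)
    edge_set = {frozenset(e) for e in edges}
    # Complement graph: every non-edge of G becomes an edge of G'.
    comp_edges = {frozenset({u, v})
                  for u, v in combinations(vertices, 2)
                  if frozenset({u, v}) not in edge_set}
    return vertices, comp_edges, len(vertices) - k

# Brute-force checkers, exponential on purpose: they only verify the
# equivalence on tiny graphs and are not part of the reduction.
def has_clique(vertices, edges, k):
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in edge_set for p in combinations(c, 2))
               for c in combinations(vertices, k))

def has_vertex_cover(vertices, edges, k):
    edge_set = {frozenset(e) for e in edges}
    return any(all(set(e) & set(c) for e in edge_set)
               for c in combinations(vertices, k))

if __name__ == "__main__":
    V = {1, 2, 3, 4}
    E = [(1, 2), (2, 3), (1, 3), (3, 4)]  # a triangle {1, 2, 3} plus the edge (3, 4)
    k = 3
    V2, E2, k2 = clique_to_vertex_cover(V, E, k)
    # G has a clique of size k iff the complement has a vertex cover of size |V| - k.
    assert has_clique(V, E, k) == has_vertex_cover(V2, E2, k2)
    print("clique of size", k, "exists:", has_clique(V, E, k))
```

The point is that the mapping itself runs in polynomial time; the exhaustive checkers exist only to confirm the yes/no answers agree on a toy graph.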

To see what Karp's paper in particular represents, however, we have to pay attention to a touch of history (not all that usual for most computer scientists I know, but bear with me ;) ). The paper was published in 1972; it was only the year before that Cook proved that $\mathsf{Satisfiability}$ was $\mathcal{NP}$-complete, the first "natural" problem proven $\mathcal{NP}$-complete[1]. But Cook doesn't mention $\mathcal{NP}$, let alone $\mathcal{NP}$-completeness (and he used a weaker type of reduction than Karp, but that's another story). So what you see in Karp's paper is the first codification of some of the ideas that have formed one of the central parts of complexity theory: not only does he get these ideas out into the literature, he also says "hey, there are a lot of problems that we're interested in that are $\mathcal{NP}$-complete".

So in one go, he coherently defines $\mathcal{NP}$-completeness, shows what it means for solving problems efficiently[2], and gives a bunch of (interesting) problems that are $\mathcal{NP}$-complete.

In some sense it is the founding document of complexity theory. It took a lot of ideas that were starting to gain currency, and put them together in a (relatively[3]) clear statement of how all this complexity stuff can work.

Footnotes

  1. The definition of $\mathcal{NP}$ almost immediately gives complete problems - anything along the lines of "Does this non-deterministic Turing Machine halt in a polynomial number of steps?" or similar, but these are not the most interesting problems in the world (to most people).
  2. Using Edmonds' idea of deterministic polynomial-time as a working definition of "efficient".
  3. Though the reductions are fairly cursory and opaque by modern standards. On the other hand, he crams 21 in there.
Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/39730
