I have an NP-complete decision problem. Given an instance of the problem, I would like to design an algorithm that outputs YES if the instance is feasible and NO otherwise. (Of course, if the algorithm is not exact, it will make errors.)
I cannot find any approximation algorithms for such problems. I was looking specifically at SAT, and on the Wikipedia page about approximation algorithms I found the following: "Another limitation of the approach is that it applies only to optimization problems and not to 'pure' decision problems like satisfiability, although it is often possible to ..."
Why do we not, for example, define the approximation ratio to be something proportional to the number of mistakes that the algorithm makes? How do we actually solve decision problems in a greedy, sub-optimal manner?
Asked By : Riebuoz
Answered By : D.W.
Approximation algorithms are only for optimization problems, not for decision problems.
Why don't we define the approximation ratio to be the fraction of mistakes an algorithm makes when trying to solve some decision problem? Because "the approximation ratio" is a term with a well-defined, standard meaning that refers to something else, and it would be confusing to use the same term for two different things.
OK, could we define some other ratio (let's call it something else -- e.g., "the det-ratio") that quantifies the fraction of mistakes an algorithm makes on some decision problem? Well, it's not clear how to do that. What would the denominator of that fraction be? To put it another way: there are infinitely many problem instances; for some of them the algorithm will give the right answer and for others the wrong answer, so you end up with a ratio that is "something divided by infinity", which is meaningless or undefined.
Alternatively, we could define $r_n$ to be the fraction of mistakes the algorithm makes on problem instances of size $n$. Then we could compute the limit of $r_n$ as $n \to \infty$, if such a limit exists. This would be well-defined (when the limit exists), but in most cases it is not terribly useful: it implicitly assumes a uniform distribution on problem instances, whereas in the real world the actual distribution on problem instances is often very far from uniform. Consequently, the number you get this way often gives a misleading impression of how good the algorithm is.
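To make that concrete (the notation here is mine, not part of the original answer): write $A(x)$ for the algorithm's output on an instance $x$, $f(x)$ for the correct answer, and count the finitely many instances of size $n$ uniformly. Then

\[
r_n = \frac{\bigl|\{\,x : |x| = n,\ A(x) \ne f(x)\,\}\bigr|}{\bigl|\{\,x : |x| = n\,\}\bigr|}, \qquad \text{error rate} = \lim_{n \to \infty} r_n \quad \text{(when the limit exists).}
\]

The uniform counting in the denominator is exactly the hidden assumption the answer warns about: if real-world instances are not uniformly distributed over size-$n$ inputs, $r_n$ can badly misrepresent the algorithm's practical accuracy.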
To learn more about how people deal with intractability (NP-hardness), take a look at Dealing with intractability: NP-complete problems.
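As for the question's "greedy and sub-optimal" idea, here is a minimal sketch (mine, not from the original post) of what such a heuristic can look like for SAT. It has one-sided error: a YES answer is always correct, but a NO answer may be a mistake. The encoding and function name are my own choices.

```python
# Minimal sketch of a greedy SAT heuristic (illustrative only, not the
# answer's method). Clauses are lists of nonzero ints in DIMACS style:
# literal i means variable i is true, -i means it is false.

def greedy_sat(clauses, num_vars):
    for v in range(1, num_vars + 1):
        # Greedy choice: pick the value of v that satisfies more of the
        # clauses that are still unsatisfied.
        pos = sum(1 for c in clauses if v in c)
        neg = sum(1 for c in clauses if -v in c)
        chosen = v if pos >= neg else -v
        # Clauses containing the chosen literal are now satisfied.
        clauses = [c for c in clauses if chosen not in c]
    # Every variable is set, so any remaining clause is falsified under
    # this assignment. YES is therefore always correct; NO may be a
    # mistake, since a different assignment might satisfy everything.
    return "YES" if not clauses else "NO"

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable, e.g. with x2 = true.
print(greedy_sat([[1, 2], [-1, 2]], 2))  # prints YES
```

Note how this sketch illustrates the answer's point: the heuristic's mistakes (wrong NO answers) depend entirely on which instances you feed it, so there is no single obvious "ratio" that summarizes its quality.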
Question Source : http://cs.stackexchange.com/questions/65708