
# Are Approximation Algorithms Analyzed in the Worst Case?

Problem Detail:

From Wikipedia:

For some approximation algorithms it is possible to prove certain properties about the approximation of the optimum result. For example, a $\rho$-approximation algorithm $A$ is defined to be an algorithm for which it has been proven that the value/cost, $f(x)$, of the approximate solution $A(x)$ to an instance $x$ will not be more (or less, depending on the situation) than a factor $\rho$ times the value, $\mathrm{OPT}$, of an optimum solution.

$$\begin{cases}\mathrm{OPT} \leq f(x) \leq \rho \mathrm{OPT},\qquad\mbox{if } \rho > 1; \\ \rho \mathrm{OPT} \leq f(x) \leq \mathrm{OPT},\qquad\mbox{if } \rho < 1.\end{cases}$$

For a maximization problem, a $\rho$-approximation algorithm guarantees that, for an instance $x$, $\rho \mathrm{OPT} \leq f(x) \leq \mathrm{OPT}$, where $\rho<1$. Does this mean that in the worst case the algorithm produces a solution of value at least $\rho \mathrm{OPT}$? Or does this hold only in the average case? I ask because there are some instances on which the algorithm finds the optimal solution.

I mean, suppose the algorithm is executed, say, $10000$ times on randomly generated instances, produces $\mathrm{OPT}$ in, say, $9995$ of them, and in the other $5$ runs produces a value that is greater than $\rho\mathrm{OPT}$. What can we say about the algorithm then?
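As an aside, this kind of experiment is easy to run. Below is a hypothetical sketch (the function names, instance sizes, and random parameters are my own choices, not from the post) using the classic modified-greedy $1/2$-approximation for 0/1 knapsack: pack items greedily by value density, then return the better of the greedy pack and the single most valuable item that fits. Comparing it against brute-force optima on random instances shows the empirical ratio is usually $1$, yet every run still respects the $\rho = 1/2$ bound:

```python
from itertools import combinations

def greedy_knapsack(values, weights, cap):
    """Modified greedy: a classic 1/2-approximation for 0/1 knapsack.

    Takes items greedily in order of decreasing value density, then
    returns the better of the greedy pack and the best single item.
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total_v = total_w = 0
    for i in order:
        if total_w + weights[i] <= cap:
            total_w += weights[i]
            total_v += values[i]
    best_single = max((values[i] for i in range(len(values))
                       if weights[i] <= cap), default=0)
    return max(total_v, best_single)

def brute_force_opt(values, weights, cap):
    """Exact optimum by enumerating all feasible subsets (small n only)."""
    n = len(values)
    return max(sum(values[i] for i in s)
               for r in range(n + 1)
               for s in combinations(range(n), r)
               if sum(weights[i] for i in s) <= cap)
```

Running both on many random small instances, one observes `greedy_knapsack` hitting the optimum on most of them; but however many random instances agree, that only describes the empirical distribution, not the worst case.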

#### Answered By : Tom van der Zanden

It's a worst-case guarantee. On some instances the algorithm can do better than $\rho\mathrm{OPT}$, but it will never do worse.

In your example we can say very little about the algorithm, since you have only tested it on relatively few random instances of a given size. On non-random (structured) inputs it might perform much worse; the approximation guarantee, by contrast, must hold for every instance.
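To make the per-instance nature of the guarantee concrete, here is a small illustrative sketch (not part of the original exchange; the function name and graph encoding are my own). The classic greedy heuristic for Max-Cut places each vertex on the side that cuts more edges to already-placed neighbours. Each vertex therefore cuts at least half of its edges to earlier vertices, so the cut is always at least $|E|/2 \geq \mathrm{OPT}/2$, i.e. $\rho = 1/2$, on every instance; yet on many instances it happens to find the exact optimum:

```python
def greedy_max_cut(n, edges):
    """Greedy 1/2-approximation for Max-Cut on vertices 0..n-1.

    Each vertex goes on the side that cuts more of its already-placed
    neighbours, so it cuts at least half of its edges to earlier
    vertices; summing over vertices, cut >= |E|/2 >= OPT/2 on
    *every* instance, not just on average.
    """
    adj = {v: [] for v in range(n)}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    side = {}
    for v in range(n):
        placed = [u for u in adj[v] if u in side]
        on_one = sum(1 for u in placed if side[u] == 1)
        on_zero = len(placed) - on_one
        # Side 0 cuts edges to neighbours on side 1, and vice versa.
        side[v] = 0 if on_one >= on_zero else 1
    return sum(1 for u, w in edges if side[u] != side[w])
```

On the 4-cycle with edges (0,1), (1,2), (2,3), (3,0), this greedy order finds the optimum cut of 4, even though the guarantee alone only promises a cut of at least 2. That is exactly the situation in the question: better-than-$\rho$ behaviour on particular instances, with $\rho\mathrm{OPT}$ as the floor on all of them.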