If I understand correctly, the complexity of solving a computational problem is defined in terms of which instance $I$ of the problem is given, the size $n$ of that instance, and which algorithm $A$ is used to solve it.
Is the complexity of the problem at a given instance size $n$ defined as $$ \min_A \max_{I\in \{\text{instances of size }n\}} \text{complexity}(A,I,n)? $$ Note that the solutions $A^*(n)$ and $I^*(n)$ to this optimization are both functions of the instance size $n$.
Or is the complexity of the problem defined with respect to one and the same algorithm for all problem instances and all instance sizes?
Or is it defined with respect to one and the same instance, over all algorithms that solve the problem and all instance sizes?
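For concreteness, here is a small Python sketch (not part of the original question) of the first candidate definition, using a made-up table of step counts $\text{complexity}(A,I)$ for a handful of candidate algorithms and instances of one fixed size $n$. In reality the set of algorithms is infinite, so this only illustrates the min/max structure:

```python
# Hypothetical step counts complexity(A, I) for candidate algorithms A
# and instances I of one fixed size n. All names and numbers are made up.
complexity = {
    "A1": {"I1": 40, "I2": 55, "I3": 90},
    "A2": {"I1": 70, "I2": 60, "I3": 65},
    "A3": {"I1": 30, "I2": 95, "I3": 50},
}

# Inner max over instances: worst-case cost of each algorithm at this size.
worst_case = {A: max(costs.values()) for A, costs in complexity.items()}

# Outer min over algorithms: the proposed "complexity of the problem" at size n.
best_worst_case = min(worst_case.values())

print(worst_case)       # {'A1': 90, 'A2': 70, 'A3': 95}
print(best_worst_case)  # 70, achieved by A2 on this made-up table
```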
Asked By : Tim
Answered By : Raphael
You are missing one thing, namely that the optimal algorithm and the worst instance are not simply functions of the instance size. Also, I am not sure that minimising over algorithms and maximising over instances independently of each other does the right thing.
The complexity of a problem is the complexity of an optimal algorithm solving the problem in the chosen machine model. It's as simple as that, but of course we have to define what "optimal algorithm solving the problem" means. Note that we have to fix the machine model (in particular what to count, see below), as complexities can differ between models.
First, "solving the problem" is clear: for all inputs (that are instances of the problem), the algorithm should give the correct answer, i.e. terminate and produce the correct result, or don't terminate. Let's denote the set of all algorithms that solve problem $P$ with $\newcommand{\Alg}[1]{\mathsf{Alg}_{#1}}\Alg{P}$.
Second, what is the "complexity" of an algorithm? There are many ways to investigate runtime; complexity theory has defaulted to worst-case time and Landau classes. Formally, we define the (worst-case) runtime function of some algorithm $A$ to be
$\qquad \displaystyle T_A(n) = \max \{ T_A(x) \mid x \in I_P, |x|= n \}$
where $I_P$ is the set of instances of the problem $P$ (which $A$ solves) and $T_A(x)$¹ is the runtime of $A$ on input $x$² (in clock cycles, state transitions, whatever fits your machine model). Now, the complexity of $A$ is the asymptotic growth of $T_A$. For example, if $T_A(n) = 17n^2 - 5n + 13$, we would say "The complexity of $A$ is $\Theta(n^2)$".
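As an illustration (my own sketch, not part of the answer), the following Python snippet measures $T_A(x)$ as a comparison count for a toy linear-search algorithm $A$ and takes the maximum over all 0/1 instances of each size $n$, exactly as in the definition of $T_A(n)$ above:

```python
from itertools import product

def linear_search_steps(xs, target=1):
    """Toy algorithm A: scan xs left to right; return the number of comparisons made."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def T(n):
    """Worst-case runtime T_A(n) = max over all 0/1 instances x with |x| = n."""
    return max(linear_search_steps(xs) for xs in product((0, 1), repeat=n))

print([T(n) for n in range(1, 9)])  # [1, 2, 3, 4, 5, 6, 7, 8]
```

The printed counts grow linearly in $n$, so for this toy $A$ we would say its complexity is $\Theta(n)$.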
Finally, an optimal algorithm (for $P$) is an algorithm with minimal complexity (among all algorithms in $\Alg{P}$)³.
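As a rough illustration of comparing algorithms in $\Alg{P}$ by their complexity (again my own toy example, not a proof of optimality): for the problem "is a given value contained in a sorted list?", both linear scan and binary search are in $\Alg{P}$, but their worst-case probe counts grow as $\Theta(n)$ and $\Theta(\log n)$ respectively, so linear scan cannot be an optimal algorithm.

```python
def linear_count(xs, target):
    """Linear scan; returns the number of elements inspected (worst case Theta(n))."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            return steps
    return steps

def binary_count(xs, target):
    """Binary search on a sorted list; counts one probe per loop iteration (worst case Theta(log n))."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        steps += 1
        if xs[mid] == target:
            return steps
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

n = 1024
xs = list(range(n))
print(linear_count(xs, -1), binary_count(xs, -1))  # 1024 probes vs 10 probes
```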
1. We abuse notation here: depending on its parameter, $T_A$ means different things. It is usually done this way, though, and we don't need the runtime on individual inputs later on.
2. Let's assume for the moment that we deal only with deterministic algorithms. Otherwise, there may be many different runtimes for a given input. In that case, we would choose the runtime of the longest computation; see here for reasons why.
3. Note that Landau-$O$ induces a preorder on functions (and a partial order on the corresponding $\Theta$-classes): $f \leq g \iff \exists c \in \mathbb{N}.\,\limsup\limits_{n \to \infty} \frac{f(n)}{g(n)} \leq c$.
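As a small worked example of the order in footnote 3 (added here for concreteness), take $f(n) = 17n^2 - 5n + 13$ from above and $g(n) = n^2$:

$\qquad \displaystyle \limsup_{n \to \infty} \frac{17n^2 - 5n + 13}{n^2} = 17 \qquad \text{and} \qquad \limsup_{n \to \infty} \frac{n^2}{17n^2 - 5n + 13} = \frac{1}{17},$

so $f \leq g$ and $g \leq f$, i.e. both functions lie in the same class $\Theta(n^2)$. On the other hand, $n^2 \leq n^3$ but $n^3 \not\leq n^2$, since $\limsup_{n \to \infty} n^3/n^2 = \infty$.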
Question Source : http://cs.stackexchange.com/questions/4884