
[Solved]: The difference between theoretical complexity and practical efficiency

Problem Detail: 

If I have this pseudocode:

for i=0 to n/2 do
    for j=0 to n/2 do
        ... do anything ...

The number of iterations is $n^2/4$.

What is the complexity of this program? Is it $O(n^2)$?

What is the formal/informal intuition for why that is the complexity?

Next, if I have this other pseudocode:

for i=0 to n do
    for j=0 to n do
        ... do anything ...

The complexity is again $O(n^2)$ -- is that correct?

Is there any difference between efficiency in practice and theoretical complexity in this case? Is one of these faster than the other?
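One way to sanity-check those counts is a minimal Python sketch in which the loop body is just a counter increment standing in for "... do anything ..." (reading "for i=0 to n/2" as n/2 iterations, which matches the $n^2/4$ count above):

def count_half(n):
    # n/2-by-n/2 loop nest from the first pseudocode
    count = 0
    for i in range(n // 2):
        for j in range(n // 2):
            count += 1  # stands in for "... do anything ..."
    return count

def count_full(n):
    # n-by-n loop nest from the second pseudocode
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

print(count_half(1000))  # 250000  == 1000**2 / 4
print(count_full(1000))  # 1000000 == 1000**2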

Asked By : kafka

Answered By : Auberon

Big O Formally

$O(f(n)) = \{g(n) \mid \exists c > 0, \exists n_0 > 0, \forall n > n_0 : 0 \leq g(n) \leq c \cdot f(n)\}$
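As a concrete (purely illustrative) instance of this definition: to show $3n^2 + 10n \in O(n^2)$ it is enough to exhibit one valid pair $(c, n_0)$. Since $3n^2 + 10n \leq 3n^2 + 10n^2 = 13n^2$ for all $n \geq 1$, the pair $c = 13$, $n_0 = 1$ works; these constants are just one possible choice.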

Big O Informally

$O(f(n))$ is the set of functions that grow no faster than $f(n)$. This means that for any function you pick from $O(f(n))$, call it $g(n)$, there is a constant $c$ such that, for every sufficiently large $n$, $c \cdot f(n)$ is at least $g(n)$. In other words, $c \cdot f(n)$ eventually surpasses every function in $O(f(n))$ as $n$ grows. Here $n$ is the input size of your algorithm.


As an example, let's pick $f(n) = n^2$.

We can say that $f(n) \in O(n^3)$. Note that, over time, notation abuse became accepted, so almost everyone writes $f(n) = O(n^3)$ nowadays.
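A small numeric sketch of this (with constants chosen only for illustration): $g(n) = 100 \cdot n^2$ is also in $O(n^3)$, and the loop below finds the point after which $n^3$ stays above $g(n)$, i.e. an $n_0$ that works with $c = 1$:

def g(n):
    return 100 * n**2  # an arbitrary member of O(n**3); the factor 100 is illustrative

def f(n):
    return n**3

# Find the smallest n from which f(n) >= g(n); since n**3 / (100 * n**2) = n / 100
# only grows, the inequality keeps holding beyond that point.
n = 1
while f(n) < g(n):
    n += 1
print(n)  # 100: for every n >= 100, n**3 >= 100 * n**2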

Normally when we evaluate algorithms, we look at their order. This way, we can predict how the running time of the algorithm will increase when the input size ($n$) increases.

In your example, both algorithms are of order $O(n^2)$. So their running time will increase the same way (quadratically) as $n$ increases. Your second algorithm will be four times slower than the first one, but that is something we're usually not interested in *theoretically*. Algorithms with the same order can have different constant factors ($c$ in the formal definition) or different lower-order terms in the function that counts the number of steps. But usually that's because of implementation details, and we're not interested in that.
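A short sketch of that point, using closed-form iteration counts for the two loop nests from the question (the values of $n$ are illustrative): doubling $n$ multiplies both counts by four, i.e. both grow quadratically, while the second stays a constant factor of about four above the first:

def iterations_half(n):
    return (n // 2) * (n // 2)  # ~ n**2 / 4 iterations for the n/2-by-n/2 nest

def iterations_full(n):
    return n * n                # n**2 iterations for the n-by-n nest

for n in (100, 200, 400, 800):
    a = iterations_half(n)
    b = iterations_full(n)
    # Same order (quadratic growth), different constant factor (~4x).
    print(n, a, b, b / a)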

If you have an algorithm that runs in $O(\log(n))$ time, we can safely say it will be faster than an $O(n)$ algorithm [1] because $\log(n) = O(n)$. No matter how badly the $O(\log(n))$ algorithm is implemented, no matter how much overhead you put into it, it will always [1] be faster than the $O(n)$ algorithm. Even if the step counts of the algorithms are $9999 \cdot \log(n) = O(\log(n))$ and $0.0001 \cdot n = O(n)$, the $O(\log(n))$ algorithm will eventually be faster [1].

But maybe, in your application, $n$ is always low and will never be large enough for the $O(\log(n))$ algorithm to be faster in practice. Then using the $O(n)$ algorithm will result in faster running times, despite $\log(n) = O(n)$.

[1] if $n$ is sufficiently large.
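To make the footnote concrete, here is a small sketch using the illustrative step counts from above ($9999 \cdot \log(n)$ versus $0.0001 \cdot n$); it searches for a power of two past the crossover point where the $O(\log(n))$ algorithm finally does fewer steps:

import math

def steps_log(n):
    return 9999 * math.log(n)  # illustrative step count of the O(log(n)) algorithm

def steps_linear(n):
    return 0.0001 * n          # illustrative step count of the O(n) algorithm

# For small n the linear algorithm actually does fewer steps; only past the
# crossover does the O(log(n)) algorithm win, and it keeps winning from there on.
n = 2
while steps_log(n) >= steps_linear(n):
    n *= 2
print(n)  # first power of two past the crossover -- here n is in the billions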

Best Answer from StackOverflow

Question Source : http://cs.stackexchange.com/questions/54699
