I was researching Fibonacci numbers and the asymptotic complexity of generating them. Having come across a seemingly paradoxical conclusion, I decided to check whether you agree with my explanation.
The naive algorithm runs in $O(n)$ if we ignore the cost of addition.
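For reference, this is the kind of naive algorithm I mean (a minimal sketch; the name fib_naive is just my choice):

```python
def fib_naive(n):
    """Return the n-th Fibonacci number using n additions (O(n) if each addition were O(1))."""
    a, b = 0, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a
```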
Binet's formula and the matrix exponentiation method should both theoretically run in $O(\lg(n))$, since exponentiation of both matrices and real numbers takes $O(\lg(n))$ steps.
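For concreteness, here is a sketch of the matrix exponentiation variant I have in mind, repeatedly squaring the matrix [[1, 1], [1, 0]] (the helper names are my own):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices stored as 4-tuples (a, b, c, d) = [[a, b], [c, d]]."""
    a, b, c, d = A
    e, f, g, h = B
    return (a * e + b * g, a * f + b * h,
            c * e + d * g, c * f + d * h)

def fib_matrix(n):
    """Return F(n) using O(lg(n)) matrix multiplications (exponentiation by squaring)."""
    result = (1, 0, 0, 1)  # identity matrix
    base = (1, 1, 1, 0)    # [[1, 1], [1, 0]]
    while n > 0:
        if n & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n >>= 1
    return result[1]       # result equals [[F(n+1), F(n)], [F(n), F(n-1)]]
```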
The problem arises when you analyze the size of the $n$th Fibonacci number by noting that, after the first few members of the sequence, the ratio between consecutive numbers is at least 1.5 (a value I picked arbitrarily; I assume it can easily be proved by induction).
We can then bound the number from below by $c_1*(1.5)^n$. Its logarithm gives us the number of digits the $n$th Fibonacci number has, which is $c_2*n$. Am I right to assume that you can't print out (or calculate) a linear number of digits in sublinear time?
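A quick empirical check of the ratio claim above (a sketch in Python 3, not a proof):

```python
# Check that F(k+1)/F(k) >= 1.5 for k >= 2; the ratio actually tends to phi ~ 1.618.
a, b = 1, 2  # F(2), F(3)
min_ratio = b / a
for _ in range(1000):
    a, b = b, a + b
    min_ratio = min(min_ratio, b / a)
print(min_ratio)  # 1.5 exactly, attained at F(4)/F(3) = 3/2; never lower
```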
My explanation of this "paradox" is that I forgot to add the multiplication costs in the second algorithm (Binet/matrix), making its complexity $n*\lg(n)$. I have found that the naive algorithm (the first one) runs better for very small inputs, and the second algorithm runs better for bigger ones (Python 3).
Is my explanation of the complexity correct, and should the naive algorithm achieve the better running time at even larger inputs ($n>10^9$ or so)?
I do not consider this to be a duplicate question, since it addresses the problem of arbitrary values and arbitrary-precision integer arithmetic.
Asked By : Petar Mihalj
Answered By : sashas
Let's assume that you could store such large Fibonacci numbers. In that case, the length (number of decimal digits) of the $n$th Fibonacci number is roughly $n \log_{10}\phi+1$, where $\phi$ is the golden ratio. That is, the length of the $n$th Fibonacci number is $\Theta(n)$.
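A quick numerical sanity check of that growth rate (a sketch; fib below is just a simple exact big-integer Fibonacci routine):

```python
import math

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2
for n in (10, 100, 1000, 10000):
    digits = len(str(fib(n)))              # true decimal length of F(n)
    print(n, digits, n * math.log10(phi))  # agree up to an additive constant
```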
If you go by the naive method, then calculating the $n$th Fibonacci number would cost you $\Theta(n^2)$ time, since each of the $\Theta(n)$ additions involves numbers with $\Theta(n)$ digits.
In comparison, matrix exponentiation will take $O(n^2\log n)$ time, assuming you use schoolbook long multiplication to multiply numbers. (A slightly more refined analysis, summing a geometric series, shows that it is in fact $\Theta(n^2)$.) But if instead we use Karatsuba multiplication, the running time to compute the $n$th Fibonacci number using matrix exponentiation becomes $O(n^{1.585}\log n)$. (As before, a slightly more refined analysis shows that the running time is in fact $\Theta(n^{1.585})$.) Thus, if you use sophisticated multiplication algorithms, matrix exponentiation is better for large $n$ than the naive algorithm.
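To sketch the refined analysis (under the assumption that multiplying two $m$-digit numbers costs $M(m)=\Theta(m^\alpha)$ with $\alpha>1$, i.e. $\alpha=2$ for schoolbook and $\alpha\approx 1.585$ for Karatsuba): after $k$ of the roughly $\lg n$ squarings the matrix entries have $\Theta(2^k)$ digits, so the total cost is

$$\sum_{k=1}^{\lg n} M(2^k)=\Theta\!\left(\sum_{k=1}^{\lg n} 2^{\alpha k}\right)=\Theta\!\left(2^{\alpha\lg n}\right)=\Theta(n^{\alpha}),$$

dominated by the final squaring. This is why the extra $\log n$ factor disappears, giving $\Theta(n^2)$ with schoolbook multiplication and $\Theta(n^{1.585})$ with Karatsuba.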
In short: which method is better depends on which multiplication algorithm is used.
Question Source : http://cs.stackexchange.com/questions/62401