I stumbled upon the following statement:
By using a $ d $-ary heap with $ d = m/n $, the total times for these two types of operations may be balanced against each other, leading to a total time of $ O(m \log_{m/n} n) $ for the algorithm, an improvement over the $ O(m \log n) $ running time of binary heap versions of these algorithms whenever the number of edges is significantly larger than the number of vertices
I don't understand why we choose a heap where nodes have exactly $ m/n $ children to speed up Dijkstra's algorithm. Each remove-min takes $ O(d \log_d n) $ time (every sift-down step must compare up to $ d $ children), so the $ n $ removals take $ O(n d \log_d n) $ overall, and the $ m $ decrease-key operations take $ O(\log_d n) $ each, i.e. $ O(m \log_d n) $ in total; with $ d = m/n $ both totals become $ O(m \log_{m/n} n) $.
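The two per-operation costs above can be seen in a minimal $d$-ary heap sketch in Python (a hypothetical illustration, not any library's implementation): decrease-key sifts up with one comparison per level, while remove-min sifts down comparing up to $d$ children per level.

```python
class DAryMinHeap:
    """Minimal d-ary min-heap (illustrative sketch only).

    Height is ~log_d(n). Sift-up does 1 comparison per level, so
    decrease-key is O(log_d n); sift-down scans up to d children per
    level, so remove-min is O(d log_d n).
    """

    def __init__(self, d):
        assert d >= 2
        self.d = d
        self.a = []  # heap array of keys

    def push(self, key):
        self.a.append(key)
        self._sift_up(len(self.a) - 1)

    def decrease_key(self, i, new_key):
        # O(log_d n): one parent comparison per level on the way up.
        assert new_key <= self.a[i]
        self.a[i] = new_key
        self._sift_up(i)

    def pop_min(self):
        # O(d log_d n): sift-down scans d children per level.
        a = self.a
        a[0], a[-1] = a[-1], a[0]
        smallest = a.pop()
        if a:
            self._sift_down(0)
        return smallest

    def _sift_up(self, i):
        a, d = self.a, self.d
        while i > 0:
            parent = (i - 1) // d
            if a[i] >= a[parent]:
                break
            a[i], a[parent] = a[parent], a[i]
            i = parent

    def _sift_down(self, i):
        a, d = self.a, self.d
        n = len(a)
        while True:
            first = d * i + 1  # index of the first of up to d children
            if first >= n:
                break
            # Finding the smallest child costs up to d comparisons.
            best = min(range(first, min(first + d, n)), key=a.__getitem__)
            if a[best] >= a[i]:
                break
            a[i], a[best] = a[best], a[i]
            i = best
```

Raising $d$ therefore makes decrease-key cheaper (shallower heap) but remove-min more expensive (wider sift-down scans), which is exactly the trade-off that $d = m/n$ balances.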
What I don't understand is: say we have $ m = 3 $, $ n = 1 $; then $ m/n $ gives 3, but $ O(m \log_3 n) $ is slower than $ O(m \log_4 n) $, so why not choose 4 as the value of $ d $ instead? What motivates taking $ d = m/n $?
Thanks!
Asked By : user1354784
Answered By : Raphael
whenever the number of edges is significantly larger than the number of vertices
What they probably mean is that $m \in \omega(n)$. Then $m/n \in \omega(1)$ -- the base of the logarithm grows with $n$, so $\log_{m/n} n$ grows more slowly than a logarithm with any fixed base.
For dense graphs in the sense that $m \in \Theta(n^2)$, the effect is most pronounced. For the sake of simplicity, say $m = cn^2$ for some $c \in (0,1)$. Then $m/n = cn$ and hence $\log_{m/n} n = \log_{cn} n \in O(1)$ -- the logarithmic factor has vanished, and the algorithm runs in time $O(m)$.
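To make the dense-graph case concrete, here is a small Python computation comparing the two bounds (the values of $n$ and $c$ are arbitrary picks for illustration, not taken from the source):

```python
import math

# Compare the two Dijkstra bounds on a dense graph (illustrative only):
#   binary heap:              m * log2(n)
#   d-ary heap with d = m/n:  m * log_{m/n}(n)
n = 1_000
m = n * n // 4          # dense: m = c*n^2 with c = 1/4
d = m // n              # d = m/n = cn = 250

binary_bound = m * math.log2(n)
dary_bound = m * (math.log(n) / math.log(d))  # log base d of n

# log_{cn}(n) -> 1 as n grows, so the d-ary bound is roughly m,
# while the binary-heap bound carries an extra log2(n) ~ 10 factor here.
print(f"binary-heap bound ~ {binary_bound:.0f}, d-ary bound ~ {dary_bound:.0f}")
```

Here $\log_{250} 1000 \approx 1.25$, so the $d$-ary bound is close to $m$ itself, while the binary-heap bound is roughly $10m$.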
Note that all of this discusses only the upper (Landau) bound. How the true running-time cost is affected by this change is not per se clear, and the Wikipedia article is bold in claiming that it is an "improvement".
Question Source : http://cs.stackexchange.com/questions/55611