
[Solved]: Time complexity of Depth First Search

Problem Detail: 

Please forgive me for asking a novice question, but I'm a beginner at algorithms and complexity analysis, and it's sometimes hard for me to understand how the complexity of a specific algorithm is derived.

I was reading the DFS algorithm from Introduction to Algorithms by Cormen, and this was the algorithm:

G        -> graph
G.V      -> set of vertices in G
u.π      -> parent vertex of vertex u
G.Adj[u] -> adjacency list of vertex u

DFS(G)
1   for each vertex u ∈ G.V
2       u.color = WHITE
3       u.π = NIL
4   time = 0
5   for each vertex u ∈ G.V
6       if u.color == WHITE
7           DFS-VISIT(G,u)

DFS-VISIT(G,u)
1   time = time + 1            // white vertex u has just been discovered
2   u.d = time
3   u.color = GRAY
4   for each v ∈ G.Adj[u]      // explore edge (u,v)
5       if v.color == WHITE
6           v.π = u
7           DFS-VISIT(G,v)
8   u.color = BLACK            // blacken u; it is finished
9   time = time + 1
10  u.f = time

It then said that lines 1–3 and 5–7 take O(V) time, exclusive of the time to execute the calls to DFS-VISIT(). In DFS-VISIT(), lines 4–7 take O(E) time, because the adjacency lists of all the vertices together contain Θ(E) entries. And then it concluded that the total complexity of DFS() is O(V + E).
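For concreteness, here is one possible Python transcription of the pseudocode above. It is only a sketch: the use of dictionaries for color, π, d, and f, and the string color labels, are my own choices, not from the book.

```python
# A Python transcription of the CLRS-style DFS pseudocode (dictionary
# bookkeeping and string colors are assumptions of this sketch).
WHITE, GRAY, BLACK = "white", "gray", "black"

def dfs(adj):
    """adj: dict mapping each vertex to a list of its neighbours."""
    color = {u: WHITE for u in adj}
    parent = {u: None for u in adj}   # u.π in the pseudocode
    d, f = {}, {}                     # discovery and finishing times
    time = 0

    def visit(u):
        nonlocal time
        time += 1                     # white vertex u has just been discovered
        d[u] = time
        color[u] = GRAY
        for v in adj[u]:              # explore edge (u, v)
            if color[v] == WHITE:
                parent[v] = u
                visit(v)
        color[u] = BLACK              # blacken u; it is finished
        time += 1
        f[u] = time

    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return d, f, parent
```

Running `dfs({1: [2, 3], 2: [3], 3: []})` gives discovery times 1, 2, 3 and finishing times 6, 5, 4 for vertices 1, 2, 3 respectively.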

I don't understand how that came about. DFS-VISIT() is called inside DFS(). So, if lines 5-7 of DFS() are O(V) and DFS-VISIT() is O(E), then shouldn't the total time complexity of DFS() be O(VE)?

Asked By : Sidharth Samant

Answered By : Yuval Filmus

The book is counting the number of times each line is executed throughout the entire execution of a call of DFS, rather than the number of times it is executed in each call of the subroutine DFS-VISIT. Perhaps the following simpler example will make this clear:

PROCEDURE A(n)
1  global = 0
2  for i from 1 to n:
3      B(i)
4  return global

PROCEDURE B(i)
1  global = global + 1

In each execution of A, line 1 of B is executed $n$ times, and B itself is executed $n$ times. Nevertheless, the running time is $O(n)$ rather than $O(n^2)$.

Here is another example, in which an array $T[1\ldots n]$ is involved. The variables index and $T$ are global, shared by both procedures.

PROCEDURE COUNT(n, T[1...n]):
1  count = 0
2  index = 1
3  while index <= n:
4      ADVANCE()
5      if index <= n:
6          count = count + 1
7      index = index + 1
8  return count

PROCEDURE ADVANCE():
1  while index <= n and T[index] == 0:
2      index = index + 1

The procedure COUNT counts the number of 1s in the input array $T$. Even though ADVANCE can be called up to $n$ times by COUNT and the worst-case running time of a single call to ADVANCE is $O(n)$, lines 1–2 of ADVANCE run at most $O(n)$ times in total across all calls, since index only ever increases. So the overall running time is $O(n)$ rather than $O(n^2)$.
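The same aggregate-counting argument can be made visible for DFS itself. This sketch (a simplified DFS that only tracks visited vertices; the returned counter is added instrumentation, not part of the algorithm) totals the iterations of the "for each v ∈ G.Adj[u]" loop across all recursive calls:

```python
# Count total inner-loop iterations over an entire DFS run.  For a
# directed graph given as adjacency lists, this total equals the number
# of edges, which is why the loop contributes O(E) overall.
def dfs_edge_count(adj):
    """adj: dict mapping each vertex to a list of its neighbours."""
    visited = set()
    inner = 0                    # instrumentation: inner-loop iterations

    def visit(u):
        nonlocal inner
        visited.add(u)
        for v in adj[u]:
            inner += 1           # exactly one iteration per edge (u, v)
            if v not in visited:
                visit(v)

    for u in adj:
        if u not in visited:
            visit(u)
    return inner
```

For example, `dfs_edge_count({1: [2, 3], 2: [3], 3: [1]})` returns 4, the number of edges in that graph: the inner loop runs $E$ times in total, not $E$ times per call, so DFS is $O(V + E)$ rather than $O(VE)$.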

Best Answer from Stack Exchange

Question Source : http://cs.stackexchange.com/questions/60640
