I'd like to start by stating this isn't homework! I'm studying for a job interview and would appreciate a second opinion. (Well, I guess it is homework, but not for school!)
I've written an algorithm (see pseudocode below) to merge two sorted lists into one sorted list. The problem requires that I implement it as a recursive divide-and-conquer algorithm. My algorithm is recursive and works, but does it count as divide-and-conquer?
The reason I'm asking is that the other people working on the same problem insist D&C must divide the lists in half every time and have $O(n \log n)$ complexity, like quicksort and mergesort. My algorithm doesn't divide the lists in the middle and has a complexity of $O(n+m)$ (where $n$ and $m$ are the lengths of the lists).
To summarize my question: Does a D&C algorithm have to have $O(n \log n)$ complexity and divide the problem in half every time? Or does this algorithm count as D&C?
merge_sorted_lists(list1, list2)
    if (list1 and list2 are empty)
        return empty list
    else if (list1 is empty)
        return list2
    else if (list2 is empty)
        return list1
    else if (head of list1 <= head of list2)
        smaller = pop head of list1
    else
        smaller = pop head of list2
    return smaller + merge_sorted_lists(list1, list2)

(Note: the final comparison branch is an unconditional "else" so that equal heads are handled; otherwise "smaller" would be left unassigned.)

P.S. I've implemented the algorithm in Java and it works.
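For concreteness, here is one possible Java rendering of the pseudocode. The class name, method name, and the choice of java.util.LinkedList are my own; this is a sketch, not the asker's actual implementation:

```java
import java.util.LinkedList;
import java.util.List;

public class MergeSorted {
    // Merges two sorted lists into one sorted list.
    // Consumes the input lists (pollFirst removes the head), matching
    // the "pop head" operation in the pseudocode.
    static LinkedList<Integer> merge(LinkedList<Integer> a, LinkedList<Integer> b) {
        if (a.isEmpty()) return new LinkedList<>(b);
        if (b.isEmpty()) return new LinkedList<>(a);
        int smaller;
        if (a.peekFirst() <= b.peekFirst()) {
            smaller = a.pollFirst();
        } else {
            smaller = b.pollFirst();
        }
        // Recurse on the remaining elements, then prepend the popped head.
        LinkedList<Integer> rest = merge(a, b);
        rest.addFirst(smaller);
        return rest;
    }

    public static void main(String[] args) {
        LinkedList<Integer> a = new LinkedList<>(List.of(1, 3, 5));
        LinkedList<Integer> b = new LinkedList<>(List.of(2, 4, 6));
        System.out.println(merge(a, b)); // [1, 2, 3, 4, 5, 6]
    }
}
```

Each call does constant work and removes one element from one of the two lists, so the total runtime is $O(n+m)$, as the question states.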
Asked By : Barry Fruitman
Answered By : Raphael
"Divide & Conquer" is not a very fixed notion. It usually goes along the lines of
D&C algorithms divide the problem and solve recursively.
That's not very strong, for the following reasons.
- What I would call inductive algorithms do this, but are usually not considered divide & conquer algorithms. Given a problem of size $n$, inductive algorithms solve subproblems (typically one) of size $n-c$ for some constant $c$ and derive a solution for the original problem. (Finding the maximum of a list by recursing on its first $n-1$ elements is one example.) 
- Dynamic programming does exactly that, but it's not usually considered d&c. The difference is that in DP, subproblems overlap, which they don't in d&c. Also, we often don't know which subproblem is the "right" one, so we try out all possible partitionings (which causes overlapping). 
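To make the overlap concrete, here is a small illustration of my own (not from the original answer): naive recursive Fibonacci solves the same subproblems over and over, which is exactly the situation that memoization in dynamic programming exploits.

```java
import java.util.HashMap;
import java.util.Map;

public class OverlapDemo {
    // How many times each subproblem fib(n) gets solved.
    static final Map<Integer, Integer> calls = new HashMap<>();

    // Naive recursive Fibonacci: the two recursive calls share
    // subproblems, so the subproblem tree overlaps heavily.
    static long fib(int n) {
        calls.merge(n, 1, Integer::sum);
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println("fib(10) = " + fib(10));                     // fib(10) = 55
        System.out.println("fib(2) solved " + calls.get(2) + " times"); // 34 times
    }
}
```

In a true d&c algorithm such as mergesort, by contrast, each recursive call works on a disjoint part of the input, so no subproblem is ever solved twice.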
Now, one might say that inductive and d&c algorithms are orthogonal special cases of dynamic programming, but that's for another discussion.
In order to "define" divide & conquer against those other types of algorithms, consider this:
D&C algorithms divide the problem of size $n$ into non-overlapping¹ subproblems of size $\frac{n}{c}$, solve (some of²) them recursively, and combine the solutions into a solution of the original problem.
By this "definition", your algorithm is not divide & conquer.
As for runtimes, d&c algorithms can typically be analysed (roughly) using the master theorem. As you can see from its three cases, all kinds of runtimes of the form $n^a\log^b n$ can result, depending on the quality of the division and the runtime of the combining step.
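As a worked example (standard textbook material, not part of the original answer), take mergesort, which splits into $a = 2$ subproblems of size $n/2$ and combines them in linear time:

$$T(n) = 2\,T(n/2) + \Theta(n).$$

Here $n^{\log_b a} = n^{\log_2 2} = n$ matches the combining cost $\Theta(n)$, so case 2 of the master theorem gives $T(n) = \Theta(n \log n)$. The merging algorithm in the question instead recurses on a subproblem that is smaller by only one element, so the master theorem does not apply; its runtime telescopes directly to $\Theta(n+m)$.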
- ¹ As I write the part about runtimes, it occurs to me that this may be too strong. Overlaps by a constant-sized part are fine. 
- ² Binary search can be considered the ultimate divide & conquer: by dividing cleverly, combining is unnecessary.
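The binary-search point above can be sketched as follows (my own illustration, not code from the answer): each step divides the problem in half, recurses into exactly one half, and has nothing to combine.

```java
public class BinarySearchDemo {
    // Returns an index of key in the sorted array a[lo..hi], or -1 if absent.
    // Each call halves the search range and recurses into one half only,
    // so the "combine" step of divide & conquer is trivial.
    static int search(int[] a, int key, int lo, int hi) {
        if (lo > hi) return -1;
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) return search(a, key, mid + 1, hi);
        return search(a, key, lo, mid - 1);
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(search(a, 7, 0, a.length - 1)); // 3
        System.out.println(search(a, 4, 0, a.length - 1)); // -1
    }
}
```

With one subproblem of size $n/2$ and constant work per call, the master theorem gives the familiar $\Theta(\log n)$ runtime.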
Question Source : http://cs.stackexchange.com/questions/10518