Problem Detail: 
This is a sort of edit-distance question, and is very easy. I am just quite brain dead on this subject and can't figure it out so far.

Given a series of numbers, e.g.
[3, 1, 1, 1] 
How would one turn all of the numbers into the same number using the minimum number of "moves"? A "move" means adding 1 to, or subtracting 1 from, a single number.
In the above example, the most efficient moves would be:
[1, 1, 1, 1] 
This would require 2 moves, reducing the first number twice.
I can't figure out the best way to find this out, given much larger arrays of hundreds of numbers.
I originally tried computing the rounded average (the sum of all the numbers divided by the length) and then moving every number to that average, but the above example broke this: it requires 4 moves instead of 2.
I suppose I could figure:
  1. The average,
  2. The mode,
  3. The median
and get the edit distance of each of them, choosing the minimum distance. However, I am not sure that this would be correct in every single instance. How can I know?

Asked By : dthree

Answered By : mhum

The answer is to take the median. One of the properties of the median is that it minimizes the sum of absolute differences (the L1 distance) to the elements. (To make sense of the Wikipedia article, take the probability distribution to be the uniform distribution over your original series of numbers.)
Here is an algorithm that solves the problem (originally written by dc2):
function median(arr) {
  // Sort a copy numerically (the default sort is lexicographic).
  arr = arr.slice().sort(function(a, b) { return a - b; });
  var half = Math.floor(arr.length / 2);
  if (arr.length % 2) {
    return arr[half];
  } else {
    return (arr[half - 1] + arr[half]) / 2.0;
  }
}

function minl1(arr) {
  // Total moves = sum of absolute deviations from the median.
  var moves = 0;
  var mdn = median(arr);
  for (var i = 0; i < arr.length; ++i) {
    moves += Math.abs(mdn - arr[i]);
  }
  return moves;
}

minl1([3, 1, 1, 1]); // -> 2

Note that for an even-length array, median() returns the average of the two middle values, which may not be an integer; since the total number of moves is the same for any target between (and including) those two middle values, minl1 still reports the correct minimum for integer inputs.
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/24469
Verify the expression n! = O(n^n)
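Since the original post shows only the exercise title, here is a short verification (standard, not from the original). Each of the n factors in n! is at most n:

n! = n \cdot (n-1) \cdots 2 \cdot 1 \le n \cdot n \cdots n = n^n for all n \ge 1.

So n! \le 1 \cdot n^n for every n \ge 1, which is exactly the definition of n! = O(n^n), witnessed by the constant c = 1 and threshold n_0 = 1.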

Heap Complexity

The part just shown is very similar to removal from a heap, which is O(log n). You do it n-1 times, so this phase is O(n log n). The last removals are cheaper because the heap has shrunk (the reverse of the build phase, where most sift-downs start near the leaves), but most of them still cost about log n, so this phase is O(n log n) overall. The build phase was O(n), so it does not dominate. For the whole heap sort you get O(n log n).

There is no extra memory needed except a few words for local temporaries.

Thus, we have finally achieved a comparison sort that uses no extra memory and is O(n log n) in the worst case.

In many cases people still use quick sort because it uses almost no extra memory (just the recursion stack) and is usually O(n log n). Quick sort runs faster than heap sort in practice, and its O(n^2) worst case is rarely seen in practice.
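To make the two phases concrete, here is a minimal heap sort sketch in JavaScript (the language of the earlier snippet; not from the original notes, and it assumes an array of numbers). The build loop is the O(n) phase; the extraction loop performs the n-1 removals discussed above:

// Restore the max-heap property for the subtree rooted at i,
// considering only the first `size` elements of arr.
function siftDown(arr, i, size) {
  while (true) {
    var left = 2 * i + 1, right = 2 * i + 2, largest = i;
    if (left < size && arr[left] > arr[largest]) largest = left;
    if (right < size && arr[right] > arr[largest]) largest = right;
    if (largest === i) return;
    var tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
    i = largest; // follow the displaced element down the tree
  }
}

function heapSort(arr) {
  // Build phase: sift down every internal node; O(n) overall.
  for (var i = Math.floor(arr.length / 2) - 1; i >= 0; --i) {
    siftDown(arr, i, arr.length);
  }
  // Removal phase: n-1 extractions, each O(log n), so O(n log n).
  for (var end = arr.length - 1; end > 0; --end) {
    var tmp = arr[0]; arr[0] = arr[end]; arr[end] = tmp; // move max to its place
    siftDown(arr, 0, end); // re-heapify the shrunken prefix
  }
  return arr;
}

heapSort([3, 1, 4, 1, 5, 9, 2, 6]); // -> [1, 1, 2, 3, 4, 5, 6, 9]

The sort is in place: the sorted suffix grows at the end of the same array, which is why no extra memory is needed beyond the local temporaries.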


Complexity Analysis of Bubble Sorting

In Bubble Sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1 = n(n-1)/2,
which is O(n^2). Hence the time complexity of Bubble Sort is O(n^2).
The main advantage of Bubble Sort is the simplicity of the algorithm. Its space complexity is O(1), because only a single additional memory location is required, for the temp variable used in swaps.
The best-case time complexity is O(n), when the list is already sorted; this requires the variant that stops early if a pass performs no swaps, as in the sketch below.
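As a concrete illustration (not part of the original notes), here is a minimal JavaScript sketch of bubble sort, including the early-exit flag that produces the O(n) best case on an already-sorted list:

function bubbleSort(arr) {
  for (var pass = 0; pass < arr.length - 1; ++pass) {
    var swapped = false;
    // After pass k, the largest k+1 elements are already in place,
    // so each pass compares one fewer pair: n-1, n-2, n-3, ...
    for (var i = 0; i < arr.length - 1 - pass; ++i) {
      if (arr[i] > arr[i + 1]) {
        var temp = arr[i];      // the single temp variable: O(1) space
        arr[i] = arr[i + 1];
        arr[i + 1] = temp;
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps in a full pass: already sorted
  }
  return arr;
}

bubbleSort([5, 1, 4, 2, 8]); // -> [1, 2, 4, 5, 8]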

Analysis of Merge Sort

We assume that we're sorting a total of n elements in the entire array.
  1. The divide step takes constant time, regardless of the subarray size. After all, the divide step just computes the midpoint q of the indices p and r. Recall that in big-Θ notation, we indicate constant time by \Theta(1).
  2. The conquer step, where we recursively sort two subarrays of approximately n/2 elements each, takes some amount of time, but we'll account for that time when we consider the subproblems.
  3. The combine step merges a total of n elements, taking \Theta(n) time.
If we think about the divide and combine steps together, the \Theta(1) running time for the divide step is a low-order term when compared with the \Theta(n) running time of the combine step. So let's think of the divide and combine steps together as taking \Theta(n) time. To make things more concrete, let's say that the divide and combine steps together take cn time for some constant c.
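Restated as a recurrence (a standard formulation, not drawn out in the original article): if T(n) denotes the running time on n elements, the three steps above give

T(n) = 2T(n/2) + cn, with T(1) = \Theta(1),

and the tree below simply unrolls this recurrence level by level.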
Let's draw out the merging times in a "tree":
[Figure: merge sort recursion tree]
Computer scientists draw trees upside-down from how actual trees grow. A tree is a graph with no cycles (paths that start and end at the same place). Convention is to call the vertices in a tree its nodes. The root node is on top; here, the root is labeled n, the subarray size for the original n-element array. Below the root are two child nodes, each labeled n/2 to represent the subarray sizes for the two subproblems of size n/2.
[Figure: merge sort recursion tree]
What do you think would happen for the subproblems of size n/8? There will be eight of them, and the merging time for each will be cn/8, for a total merging time of 8 \cdot cn/8 = cn:
[Figure: merge sort recursion tree]
As the subproblems get smaller, the number of subproblems doubles at each "level" of the recursion, but the merging time halves. The doubling and halving cancel each other out, and so the total merging time is cn at each level of recursion. Eventually, we get down to subproblems of size 1: the base case. We have to spend \Theta(1) time to sort subarrays of size 1, because we have to test whether p < r, and this test takes time. How many subarrays of size 1 are there? Since we started with n elements, there must be n of them. Since each base case takes \Theta(1) time, let's say that altogether, the base cases take cn time:
[Figure: merge sort recursion tree]
The tree has \lg n + 1 levels (the subarray size is halved at each level, going from n down to 1), and each level contributes a total of cn, so the full running time is cn(\lg n + 1) = cn \lg n + cn. When we use big-Θ notation to describe this running time, we can discard the low-order term (the cn coming from the +1) and the constant coefficient c, giving us a running time of \Theta(n \lg n), as promised.
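Here is a minimal JavaScript merge sort sketch (not part of the original article) with the divide, conquer, and combine steps marked. Note that this simple version allocates new arrays rather than merging in place:

// Combine step: merge two sorted arrays in Θ(n) time.
function merge(left, right) {
  var result = [], i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  // One of the two halves may still have elements left over.
  while (i < left.length) result.push(left[i++]);
  while (j < right.length) result.push(right[j++]);
  return result;
}

function mergeSort(arr) {
  if (arr.length <= 1) return arr;          // base case: Θ(1)
  var mid = Math.floor(arr.length / 2);     // divide: Θ(1)
  var left = mergeSort(arr.slice(0, mid));  // conquer
  var right = mergeSort(arr.slice(mid));    // conquer
  return merge(left, right);                // combine: Θ(n)
}

mergeSort([38, 27, 43, 3, 9, 82, 10]); // -> [3, 9, 10, 27, 38, 43, 82]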

Read more from Source: https://www.khanacademy.org/computing/computer-science/algorithms/merge-sort/a/analysis-of-merge-sort