I have encountered the following problem, which I found very interesting to solve:
Given an array of positive integers $\{a_1, a_2, ..., a_n\}$, you are required to partition the array into $3$ blocks/partitions such that the maximum of the sums of the integers in each block is as small as possible. Restriction: you cannot alter the order in which the numbers appear, i.e. each block must be a contiguous segment of the array (example: if you have $\{2, 5, 80, 1, 200, 80, 8000, 90\}$, one block CANNOT be $\{2, 80, 1, 90\}$). The program must output ONLY that maximum sum, not the partitions themselves.
So, for example, take the array $\{2, 80, 50, 42, 1, 1, 1, 2\}$. The best partitioning according to the problem is $$\{\, \{2, 80\},\, \{50\},\, \{42, 1, 1, 1, 2\} \,\}$$ with block sums $82$, $50$ and $47$, so the output of the program in this case would be $82$.
I have already thought of an $\mathcal{O}(n^2)$ algorithm, but is there a better one (e.g. $\mathcal{O}(n)$ or $\mathcal{O}(n \log n)$)?
My $\mathcal{O}(n^2)$ algorithm is the following (in pseudocode; a Python transcription is given after the list):
- input $n \in \mathbb{Z}$
- Let $m \leftarrow -1$
- Let $r_1 \leftarrow r_2 \leftarrow r_3 \leftarrow 0$
- Let $A \leftarrow (a_0,...,a_{n-1})$
- Let $S \leftarrow \sum_{i=0}^{n-1}{a_i}$
- for each $i = 1$ until $n-2$ do
- $\quad r_1 \leftarrow (r_1 + a_{i-1})$
- $\quad r_2 \leftarrow 0$
- $\quad$ for each $j = (i+1)$ until $n-1$ do
- $\quad\quad r_2 \leftarrow (r_2 + a_{j-1})$
- $\quad\quad r_3 \leftarrow S - (r_2 + r_1)$
- $\quad\quad \max_{\mathsf{temp}} \leftarrow \max(\max(r_1,r_2),r_3)$
- $\quad\quad$if $(\max_{\mathsf{temp}} < m \, \vee m = -1)$ then
- $\quad\quad\quad m \leftarrow \max_{\mathsf{temp}}$
- $\quad\quad$endif
- return $m$
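For concreteness, here is the same algorithm as a runnable Python sketch (0-based indexing; the function name and the use of Python are just my choice):

```python
def min_max_three_blocks_quadratic(a):
    # Try every pair of cut points (i, j) and keep the smallest
    # maximum block sum, exactly as in the pseudocode above.
    n = len(a)                     # assumes n >= 3
    total = sum(a)
    m = -1
    r1 = 0
    for i in range(1, n - 1):      # first block is a[0:i]
        r1 += a[i - 1]
        r2 = 0
        for j in range(i + 1, n):  # middle block is a[i:j], last block is a[j:n]
            r2 += a[j - 1]
            r3 = total - (r1 + r2)
            max_temp = max(r1, r2, r3)
            if max_temp < m or m == -1:
                m = max_temp
    return m
```

On the example above, `min_max_three_blocks_quadratic([2, 80, 50, 42, 1, 1, 1, 2])` returns `82`.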
Asked By : Jason
Answered By : babou
Since you find it interesting to solve, I do not want to spoil it completely for you. So here is a big hint for solving it in linear time.
You start with two indices $i$ and $j$, which are the first and last indices of the central segment.
Initially you set $i=0$ and $j=n$, and you compute the 3 sums in $s_1$, $s_2$ and $s_3$. So initially $s_1=s_3=0$ and $s_2$ is the sum of the whole array.
Then you progressively increase $i$ and decrease $j$, by steps of 1, keeping $s_1$ and $s_3$ at about the same value (by always making the change on the smaller of the two sums $s_1$ and $s_3$), until one of them becomes bigger than $s_2$. Each increment or decrement takes constant time (adding or subtracting the value of one element to two of the sums, and doing a few comparisons).
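To make this loop concrete, here is a minimal Python sketch of just this balancing phase (the names and the convention that the central segment is the half-open slice `a[i:j]` are my own, not part of the original hint):

```python
def balance_phase(a):
    # Grow the two outer sums s1 and s3, always extending the smaller one,
    # until one of them becomes bigger than the middle sum s2.
    i, j = 0, len(a)          # central segment is a[i:j]
    s1, s3 = 0, 0
    s2 = sum(a)
    while i < j and s1 <= s2 and s3 <= s2:
        if s1 <= s3:          # extend the first segment by one element
            s1 += a[i]
            s2 -= a[i]
            i += 1
        else:                 # extend the last segment by one element
            s3 += a[j - 1]
            s2 -= a[j - 1]
            j -= 1
    return i, j, s1, s2, s3   # state at the moment the loop stops
```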
When one of $s_1$ and $s_3$ becomes larger than $s_2$, say for example $s_1$, you then have $s_1\geq s_2 > s_3$.
Then you know that the right value for $i$ is either the current value or the previous one. Typically it is the previous value of $i$, if $s_1$ has become greater than the previous value of $s_2$. But things are a bit more subtle. So you note this value of $s_1$ as $m_1$, a candidate for the maximum, and then check whether you could have achieved a lower maximum with the previous value of $i$ and $s_1$, just before the last increase. So you revert $i$ to that previous value, and you start decreasing $j$ to get the lowest possible maximum of $s_2$ and $s_3$ without touching $i$ (that is easy, and recall that $s_1$ remains lower than both other sums). Let $m_{2,3}$ be the lowest maximum found. Now, if $m_{2,3}$ is greater than $m_1$, the answer is $m_1$; otherwise the answer is $m_{2,3}$.
And you must consider the symmetrical case when $s_3$ is the first to exceed $s_2$.
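If you want to check whatever linear-time code you end up writing from this hint, here is one $\mathcal{O}(n)$ sketch in Python. It is not necessarily the exact procedure described above (which grows both outer segments at once); instead it fixes the first cut $p$ and slides the second cut $q$ forward to the point where the middle and last blocks are balanced. Since that balance point only moves rightwards as $p$ grows, $q$ never moves back, and the whole scan is linear. All names and indexing conventions are my own:

```python
def min_max_three_blocks_linear(a):
    # Blocks are a[0:p], a[p:q], a[q:n]; prefix[k] is the sum of a[0:k].
    n = len(a)                          # assumes n >= 3
    prefix = [0] * (n + 1)
    for k, x in enumerate(a):
        prefix[k + 1] = prefix[k] + x
    total = prefix[n]

    best = total
    q = 2
    for p in range(1, n - 1):
        if q < p + 1:
            q = p + 1
        # Advance q while the middle block sum is still below the last block sum.
        while q < n - 1 and 2 * prefix[q] < total + prefix[p]:
            q += 1
        # The best second cut for this p is either q or q - 1.
        for cut in (q, q - 1):
            if cut >= p + 1:
                best = min(best,
                           max(prefix[p], prefix[cut] - prefix[p], total - prefix[cut]))
    return best
```

On the question's example it also returns `82`, matching the quadratic version above.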
Question Source : http://cs.stackexchange.com/questions/38330