[Solved]: How does variance in task completion time affect makespan?

Problem Detail: 

Let's say that we have a large collection of tasks $\tau_1, \tau_2, ..., \tau_n$ and a collection of identical (in terms of performance) processors $\rho_1, \rho_2, ..., \rho_m$ which operate completely in parallel. For scenarios of interest, we may assume $m \leq n$. Each $\tau_i$ takes some amount of time/cycles to complete once it is assigned to a processor $\rho_j$, and once it is assigned, it cannot be reassigned until completed (processors always eventually complete assigned tasks). Let's assume that each $\tau_i$ takes an amount of time/cycles $X_i$, not known in advance, drawn from some discrete probability distribution. For this question, we can even assume a simple distribution: $P(X_i = 1) = P(X_i = 5) = 1/2$, with all $X_i$ pairwise independent. Therefore $\mu_i = 3$ and $\sigma_i^2 = 4$.
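(For reference, these moments are immediate from the two-point distribution: $$\mu_i = \tfrac{1}{2}(1) + \tfrac{1}{2}(5) = 3, \qquad \sigma_i^2 = \tfrac{1}{2}(1-3)^2 + \tfrac{1}{2}(5-3)^2 = 4,$$ and the same calculation gives $\mu_i = 3$, $\sigma_i^2 = 1$ for the $2$/$4$ distribution used in the second question below.)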

Suppose that, statically, at time/cycle 0, all tasks are assigned as evenly as possible to all processors, uniformly at random; so each processor $\rho_j$ is assigned $n/m$ tasks (we may as well assume $m \mid n$ for the purposes of the question). The makespan is the time/cycle at which the last processor to finish, $\rho^*$, completes the work it was assigned. First question:
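A minimal sketch of this assignment scheme in Python (the helper name `assign` is mine, not part of the question): shuffle the task indices and deal them out round-robin, which produces an even, uniformly random assignment when $m \mid n$.

    import random

    def assign(n, m):
        """Assign task indices 0..n-1 to m processors, as evenly as
        possible and uniformly at random (assumes m divides n)."""
        tasks = list(range(n))
        random.shuffle(tasks)                   # uniformly random permutation
        return [tasks[j::m] for j in range(m)]  # n/m tasks per processor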

As a function of $m$, $n$, and the $X_i$'s, what is the makespan $M$? Specifically, what is $E[M]$? $Var[M]$?

Second question:

Suppose $P(X_i = 2) = P(X_i = 4) = 1/2$, and all $X_i$ are pairwise independent, so $\mu_i = 3$ and $\sigma^2 = 1$. As a function of $m$, $n$, and these new $X_i$'s, what is the makespan? More interestingly, how does it compare to the answer from the first part?

Some simple thought experiments suggest the answer to the latter: the makespan under the first, higher-variance distribution is longer. But how can this be quantified? I will be happy to post an example if this is either (a) controversial or (b) unclear. Depending on the success with this one, I will post a follow-up question about a dynamic assignment scheme under these same assumptions. Thanks in advance!

Analysis of an easy case: $m = 1$

If $m = 1$, then all $n$ tasks are assigned to the same processor, and the makespan $M$ is just the time to complete all $n$ tasks sequentially. Therefore, $$\begin{align*} E[M] &= E[X_1 + X_2 + ... + X_n] \\ &= E[X_1] + E[X_2] + ... + E[X_n] \\ &= \mu + \mu + ... + \mu \\ &= n\mu \end{align*}$$ and $$\begin{align*} Var[M] &= Var[X_1 + X_2 + ... + X_n] \\ &= Var[X_1] + Var[X_2] + ... + Var[X_n] \\ &= \sigma^2 + \sigma^2 + ... + \sigma^2 \\ &= n\sigma^2 \end{align*}$$ (using linearity of expectation and, for the variance, pairwise independence of the $X_i$).

It seems like it might be possible to use this result to answer the question for $m > 1$; we simply need to find an expression (or close approximation) for $\max(Y_1, Y_2, ..., Y_m)$, where $Y_i = \sum_{j=1}^{n/m} X_{(i-1)\frac{n}{m} + j}$ for $i = 1, \dots, m$ is a random variable with $\mu_Y = \frac{n}{m}\mu_X$ and $\sigma_Y^2 = \frac{n}{m}\sigma_X^2$. Is this heading in the right direction?
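One plausible way to push this forward (a sketch, not part of the original question): for large $n/m$ the central limit theorem makes each $Y_i$ approximately normal, and the classical extreme-value approximation for the maximum of $m$ i.i.d. standard normals, $E[\max] \approx \sqrt{2 \ln m}$, then gives $$E[M] \approx \mu_Y + \sigma_Y \sqrt{2 \ln m} = \frac{n}{m}\,\mu_X + \sigma_X \sqrt{\frac{2 n \ln m}{m}},$$ which already suggests that, for fixed $n$ and $m$, the expected makespan grows linearly in $\sigma_X$, all else being equal.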

Asked By : Patrick87

Answered By : svinja

Since $n = k \times m$, where $k = n/m$ is the number of tasks per processor, we can look at this in terms of $k$ and $n$ instead of $n$ and $m$. Let's say $T_i$ is the time it takes the $i$-th processor to finish its work.

As $n$ grows (with $k$ fixed), the probability that $T_i = 5k$ for some $i$ (that is, some processor was assigned only tasks with $X = 5$) approaches $1$; since the makespan is defined as $M = \max_i T_i$, $E[M]$ approaches $5k$.
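To spell out that step (a back-of-the-envelope computation, treating the $X_i$ as fully independent): each processor draws $k$ task times and the $m = n/k$ processors draw disjoint sets of tasks, so $$P(T_i = 5k) = 2^{-k}, \qquad P\big(\max_i T_i = 5k\big) = 1 - \left(1 - 2^{-k}\right)^m \to 1 \text{ as } m \to \infty$$ for fixed $k$, and hence $E[M] \to 5k$.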

For the second scenario this limit is $4k$, so increasing the number of processors makes the 4–2 split better.

What about $k$, the number of tasks per processor? Increasing $k$ has the opposite effect: it makes it less likely that some processor draws an unlucky set of tasks. My hunch is that as $k$ grows, the difference in $E[M]$ between the 4–2 split and the 5–1 split disappears, at least relative to $E[M]$ itself, and $E[M]$ becomes essentially the same for both. So I would assume that 4–2 is always better except maybe for some special cases (very small specific values of $k$ and $n$), if even that.
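A quick Monte Carlo sketch to probe this hunch (the helper `simulate_makespan`, the trial count, and the parameter choices are mine, not from the original answer):

    import random

    def simulate_makespan(m, k, values, trials=2000):
        """Estimate E[M] for m processors with k tasks each, task times
        drawn independently and uniformly from `values`."""
        total = 0
        for _ in range(trials):
            # Makespan = max over processors of the sum of its task times.
            total += max(sum(random.choice(values) for _ in range(k))
                         for _ in range(m))
        return total / trials

    m = 10
    for k in (1, 2, 5, 20, 100):
        hi = simulate_makespan(m, k, (1, 5))  # 5-1 split, sigma^2 = 4
        lo = simulate_makespan(m, k, (2, 4))  # 4-2 split, sigma^2 = 1
        print(f"k={k:4d}  E[M]/k: 5-1 ~ {hi/k:.2f}  4-2 ~ {lo/k:.2f}")

Consistent with the hunch, the per-task makespan $E[M]/k$ converges to $\mu = 3$ for both splits as $k$ grows, so the difference vanishes in relative terms.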

So to summarize:

  • Lower variance is better, all else being equal.
  • As the number of processors grows, lower variance becomes more important.
  • As the number of tasks per processor grows, lower variance becomes less important.
Best Answer from CS StackExchange

Question Source : http://cs.stackexchange.com/questions/1236
