**Problem Detail:**

Suppose you have the following game:

There are infinitely many counters $\{c_1,c_2,\ldots\}$, all initialized to 0.

In each step, you may choose a counter $c_i$ and increase its value by 1.

Unfortunately, every $T$ steps, each counter that has a positive value is decremented by 1.

Also, the values of the counters are bounded by $M$, so you can't increment a counter that is already at $M$.

**Given as many steps as you like, how many positive-valued counters can you reach?**

EDIT: As this question has gone unanswered for two weeks, I have posted it on CSTheory as well.
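Under one reading of the rules (an increment aimed at a counter already at $M$ is simply wasted, and the global decrement fires after every $T$-th step), the game can be sketched as follows; the class and method names are my own, not from the question:

```python
class Game:
    """Minimal simulator for the counter game (one possible reading of the rules)."""

    def __init__(self, T, M):
        self.T, self.M = T, M
        self.counters = {}   # index -> value; absent means 0
        self.steps = 0

    def step(self, i):
        """Spend one step incrementing counter i (a step at the cap M is wasted)."""
        if self.counters.get(i, 0) < self.M:
            self.counters[i] = self.counters.get(i, 0) + 1
        self.steps += 1
        if self.steps % self.T == 0:          # every T steps: decrement positives
            for j in list(self.counters):
                self.counters[j] -= 1
                if self.counters[j] == 0:
                    del self.counters[j]

    def positive(self):
        """Number of counters with a positive value."""
        return len(self.counters)
```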

Detailed buildup for $\approx T\log(M)$ positive counters:

While you have fewer than $T-1$ counters at value $M$:

- Increment the counter with the minimal index whose value is strictly less than $M$. This phase must terminate: fewer than $T$ counters are ever positive, so the sum of the counters is bound to increase with every $T$ steps.

Let $r = T$.

While ($c_1>0$)

a. while ($c_1>c_r$)

- Increment $c_r$

b. $r = r + 1$
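The two phases above can be sketched end-to-end as follows. This is a self-contained sketch under the same rule reading as before (increments at the cap are wasted, the decrement fires after every $T$-th step); it records the best count of positive counters seen at the instant after each increment, since the decrement at the end of a cycle can wipe out the final configuration. The function name is my own:

```python
def run_strategy(T, M):
    """Run the two-phase strategy; return the max #positive counters observed."""
    c = {}              # counter values, absent = 0
    steps = 0
    best = 0

    def inc(i):
        nonlocal steps, best
        if c.get(i, 0) < M:                   # a step at the cap is wasted
            c[i] = c.get(i, 0) + 1
        best = max(best, len(c))              # measure before any decrement
        steps += 1
        if steps % T == 0:                    # every T steps: decrement positives
            for j in list(c):
                c[j] -= 1
                if c[j] == 0:
                    del c[j]

    # Phase 1: bring counters c_1 .. c_{T-1} up to M.
    while sum(1 for v in c.values() if v == M) < T - 1:
        inc(min(j for j in range(1, T) if c.get(j, 0) < M))

    # Phase 2: spread c_1's value over fresh counters c_T, c_{T+1}, ...
    r = T
    while c.get(1, 0) > 0:
        while c.get(1, 0) > c.get(r, 0):
            inc(r)
        r += 1
    return best
```

For example, with $T=2$, $M=4$ this reaches 4 positive counters at once, which happens to match the answer's upper bound $(T-1)M$ for that case.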

Now for the analysis: the first observation is that when the loop halts, the number of positive counters is (roughly) $r$.

Now let $m_r$ be the maximal value $c_{r}$ has reached. For $r=T$ we get $m_T = M(1-\frac{1}{T})$. For $r=T+1$ we get $m_{T+1} = m_T(1-\frac{1}{T})=M(1-\frac{1}{T})^2$, or in general $$\forall r\geq T:m_r = M\left(1-\frac{1}{T}\right)^{r-T+1}$$
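As a quick real-valued sanity check (ignoring integrality), the recurrence $m_{r+1} = m_r(1-\frac{1}{T})$ with $m_T = M(1-\frac{1}{T})$ does unroll to the closed form; $T$ and $M$ below are arbitrary illustrative values:

```python
T, M = 7, 500.0          # arbitrary illustrative values
m = M * (1 - 1 / T)      # m_T
for r in range(T, T + 30):
    # closed form: m_r = M (1 - 1/T)^(r - T + 1)
    assert abs(m - M * (1 - 1 / T) ** (r - T + 1)) < 1e-9
    m *= (1 - 1 / T)     # step the recurrence: m_{r+1} = m_r (1 - 1/T)
```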

Next we notice that at the moment $m_r$ is achieved, $c_1=m_r$ as well (the inner loop stops exactly when $c_r$ catches up to $c_1$). This means the loop will halt when $m_r < 1$ (give or take integrality and end-of-game strategies).

This gives us $$M\left(1-\frac{1}{T}\right)^{r-T+1} < 1$$ $$\left(1-\frac{1}{T}\right)^{r-T+1} < M^{-1}$$ $$({r-T+1}) \log \left(1-\frac{1}{T}\right) < -\log M$$ $${r-T+1} < \frac{-\log M}{\log \left(1-\frac{1}{T}\right)}$$ $$r < \frac{-\log M}{\log \left(1-\frac{1}{T}\right)} + T - 1$$ $$r < \frac{\log M}{\sum_{k= 1}^{\infty} {\frac{1}{kT^k}}} + T - 1 < T(\log M + 1) -1$$
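If the algebra above is right, the exact bound $\frac{-\log M}{\log(1-1/T)} + T - 1$ should sit below the simplified $T(\log M + 1) - 1$, since $-\log(1-\frac{1}{T}) = \sum_{k\geq 1}\frac{1}{kT^k} > \frac{1}{T}$. A quick numeric check, with arbitrarily chosen $T$ and $M$:

```python
import math

T, M = 10, 1000                                    # arbitrary illustrative values
exact = -math.log(M) / math.log(1 - 1 / T) + T - 1
simplified = T * (math.log(M) + 1) - 1
assert exact < simplified          # the simplification only loosens the bound

# -log(1 - 1/T) really equals the series  sum_{k>=1} 1/(k T^k)
series = sum(1 / (k * T ** k) for k in range(1, 60))
assert abs(series + math.log(1 - 1 / T)) < 1e-12
```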

Does this look right?

Is it possible to do better? Can anyone prove this is optimal?

###### Asked By : R B

#### Answered By : FrankW

Let's collect what facts we can determine:

The sum of all counters $S$ will change by $T-P$ with each cycle of $T$ steps, where $P$ is the number of positive counters at the final step of the cycle.

If you end a cycle with $P=T-1$, you have $S\le(T-1)(M-1)$. During the following cycle you can reach $S=(T-1)M$ after $T-1$ steps, but in the next step you will have to increase a $T$-th counter, so the decrease after that step will return to $S=(T-1)(M-1)$. So $(T-1)M$ is an upper bound for $S$ (and thus also for $P$).

Assume there is an optimal strategy that never reaches $S=(T-1)M$. The step where that strategy reaches its highest value of $S$ has to be the penultimate step of a cycle, and the cycle before that one has to have ended with $P\le T-1$. By inserting a cycle that simply increases some or all of the nonzero counters before the cycle that reaches maximum $S$, we can increase the maximum value of $S$ for that strategy without influencing the maximum value of $P$.

Thus we can restrict our search for an optimal strategy to strategies that start by bringing $T-1$ counters up to $M$ and then try to distribute the sum amongst as many counters as possible. The maximum value of $P$ will be reached at the penultimate step of a cycle. (At any other point we could raise it by increasing counters that were previously 0 up to that step of the current cycle.) We denote by $X$ the number of cycles (including the incomplete one) in the distribution phase.

We can assume that any counter we increase during the distribution phase will be nonzero in the $X$-th cycle. Otherwise not spending these increases would not alter the outcome. For the same reason we can assume that each nonzero counter is at value 1 in the $X$-th cycle.

From the previous item we can conclude that any counter increased in the 1st cycle has to be increased $X$ times, any counter increased for the first time in the 2nd cycle has to be increased $X-1$ times, and so on. This implies that it is beneficial to start increasing each counter as late as possible.

In the last cycle we can set $T-1$ counters to 1. In the cycle before that, we can set roughly $\frac T2$ counters to 2, and so on, until in the first $\frac XT$ cycles we can set one counter to $X$. This shows that it is beneficial to make $X$ as large as possible.
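Summing that allocation (a rough, real-valued sketch: about $T/j$ counters end at value $j$, for $j = 1 \ldots X$) gives a harmonic sum of roughly $T \ln X$, which with $X \le M$ matches the $\approx T\log M$ count of the strategy from the question. The values of $T$ and $X$ below are arbitrary:

```python
import math

T, X = 50, 1000                                # arbitrary illustrative values
total = sum(T / j for j in range(1, X + 1))    # ~ T * H_X  (harmonic number)
# H_X = ln X + gamma + O(1/X), so total should be close to T (ln X + 0.5772)
assert abs(total - T * (math.log(X) + 0.5772)) < 1
```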

$X\le M$, since after the $M$-th cycle all the initial counters will reach 0 (or we have to spend increase operations on them that we could instead have spent to increase other counters).

Since the optimal moves from our considerations describe exactly the strategy from the question, we can conclude that this strategy is indeed optimal.

Question Source : http://cs.stackexchange.com/questions/27777
