
[Solved]: Turn biased random number generator into uniform

Problem Detail: 

I'm looking at a problem in the book Introduction to Algorithms by Cormen et al. It says that if we are given a random number generator rand() which satisfies the distribution:

$P(X = 0) = p,\;\;P(X=1)=1-p,\;\;0<p<1$,

then we can use rand() as a subroutine to create an unbiased sampler from the set {0,1}. My idea for this is along the following lines:

  1. Choose some fairly large even integer n.

  2. Create a table $t$ with two counters, $t[0]$ and $t[1]$, both initialized to zero.

  3. Loop over $i=1,\ldots,n$, sampling $x=rand()$ for each $i$. When $i$ is even increment $t[x]$, and when $i$ is odd increment $t[1-x]$.

  4. Return $0$ if $t[0]\geq t[1]$, and $1$ otherwise.

Now the expected values satisfy $E[t[0]]=E[t[1]]=n/2$, so this should give a value very close to the uniform distribution. The only problem is that the tie-breaking rule in step (4) biases the outcome towards $0$, so for any fixed $n$ we do not get a uniform distribution, although we do get it in the limit. There has to be a way to let $n$ depend on some random value to balance it out, but I can't figure out how.
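A quick simulation makes the residual bias visible. This is my own sketch, not part of the question: `biased_rand` stands in for the book's rand(), and I assume the increments in step 3 were meant to apply to $t[x]$ and $t[1-x]$:

```python
import random

def biased_rand(p, rng):
    """Biased source: returns 0 with probability p, 1 with probability 1 - p."""
    return 0 if rng.random() < p else 1

def table_vote(p, n, rng):
    """The voting scheme from the question: tally n alternating samples
    and return whichever of {0, 1} got more votes (ties go to 0)."""
    t = [0, 0]
    for i in range(1, n + 1):
        x = biased_rand(p, rng)
        if i % 2 == 0:
            t[x] += 1
        else:
            t[1 - x] += 1
    return 0 if t[0] >= t[1] else 1

rng = random.Random(42)
trials = 20000
freq0 = sum(table_vote(0.3, 100, rng) == 0 for _ in range(trials)) / trials
# freq0 comes out noticeably above 1/2: every tie is resolved in favor of 0.
```

The gap above $1/2$ is exactly half the probability of a tie, which shrinks as $n$ grows but never vanishes for fixed $n$.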

Anyone have any other ideas how to approach this? The key in the problem was that we do not know $p$.

Asked By : JT1

Answered By : Yuval Filmus

For general $p$ you can't generate a random bit using a finite number of samples. Indeed, suppose you could do it using $n$ samples. Let $c_k$ be the number of inputs of Hamming weight $k$ which will cause you to return $1$. For your generator to be unbiased, you need $$ \frac{1}{2} = \sum_{k=0}^n c_k p^k (1-p)^{n-k} = \sum_{\ell=0}^n \left(\sum_{k=0}^\ell c_k (-1)^{\ell-k} \binom{n-k}{\ell-k} \right) p^\ell. $$ If $p$ is transcendental or of the form $a/b$ for odd $b$ then the equation $\frac{1}{2} = \sum_{\ell=0}^n d_\ell p^\ell$ has no integer solutions $d_0,\ldots,d_n$. In particular, even if you know $p$, then in these cases no finite number of samples can be used to extract even one uniformly random bit.
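To make the impossibility concrete, here is a brute-force sanity check (my own, not from the answer): for $p = 1/3$, no deterministic rule on up to 5 samples hits probability exactly $1/2$. Exact rational arithmetic via `fractions.Fraction` avoids any floating-point doubt:

```python
from fractions import Fraction
from itertools import product
from math import comb

def can_extract_fair_bit(p, n):
    """Exhaustively check whether ANY deterministic rule mapping n biased
    samples to {0, 1} returns 1 with probability exactly 1/2.  A rule is
    fully described by c_k, the number of accepted inputs of Hamming
    weight k, with 0 <= c_k <= C(n, k)."""
    half = Fraction(1, 2)
    for cs in product(*(range(comb(n, k) + 1) for k in range(n + 1))):
        prob = sum(Fraction(c) * p**k * (1 - p)**(n - k)
                   for k, c in enumerate(cs))
        if prob == half:
            return True
    return False

p = Fraction(1, 3)  # denominator b = 3 is odd, so extraction is impossible
assert not any(can_extract_fair_bit(p, n) for n in range(1, 6))
```

The underlying reason: clearing denominators in $\frac{1}{2} = \sum_k c_k a^k (b-a)^{n-k} / b^n$ gives $2\sum_k c_k a^k (b-a)^{n-k} = b^n$, an even number equal to an odd one.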

The standard solution, attributed (IIRC) to von Neumann, is to use the following algorithm:

  1. Repeatedly sample two biased samples $r_1,r_2$ until $r_1 \neq r_2$.
  2. Return $r_1$.

Conditioned on $r_1 \neq r_2$, the two outcomes $(r_1,r_2)=(0,1)$ and $(r_1,r_2)=(1,0)$ have the same probability, and so the result is an unbiased bit.

How efficient is this generator? The probability that $r_1 \neq r_2$ is $2p(1-p)$, and so on average you will need $1/(2p(1-p))$ samples per bit. Compare that to the optimal extractor given by Shannon's source coding theorem: extracting $n$ bits should take $n/H(p) + o(n)$ samples with high probability (when $p$ is known).
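The two costs are easy to tabulate; here is a small script (my addition), where $H(p)$ is the binary entropy from the answer:

```python
from math import log2

def entropy(p):
    """Binary entropy H(p) in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def von_neumann_cost(p):
    """Expected biased samples consumed per uniform output bit: 1/(2p(1-p))."""
    return 1 / (2 * p * (1 - p))

def shannon_bound(p):
    """Information-theoretic lower bound on samples per extracted bit."""
    return 1 / entropy(p)

for p in (0.1, 0.3, 0.5):
    print(f"p={p}: von Neumann {von_neumann_cost(p):.2f}, "
          f"optimal {shannon_bound(p):.2f}")
```

Even at $p=1/2$, where the source is already uniform, von Neumann pays 2 samples per bit while the optimum is 1; the gap widens as $p$ moves toward 0 or 1.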

[Plot: expected samples per bit of the von Neumann extractor, $1/(2p(1-p))$, versus the Shannon optimum $1/H(p)$, as a function of $p$.]

The plot shows that the von Neumann algorithm is reasonably good but not optimal.


Question Source : http://cs.stackexchange.com/questions/35218
