In many papers I've read that it is well known that the Shannon entropy of a random variable can be converted to min-entropy (up to a small statistical distance) by taking many independent copies of the variable. Can anyone explain what exactly this means?
Asked By : Gian Pietro
Answered By : Yuval Filmus
This is a consequence of the asymptotic equipartition property (AEP), which is a form of the law of large numbers. The AEP states that if a random variable has (binary) Shannon entropy $H$ and you take $n$ independent copies of it, then almost all of the probability mass falls on sequences whose probability is roughly $2^{-nH}$. This holds only for most sequences, not all of them, which is the source of the small statistical distance that you mention.
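To spell out the connection to min-entropy (this step is implicit in the answer): the min-entropy of a variable $Y$ is $H_\infty(Y) = -\log_2 \max_y \Pr[Y = y]$. By the AEP, for any $\delta, \varepsilon > 0$ and large enough $n$, the product variable $X^n$ places all but $\varepsilon$ of its mass on sequences of probability at most $2^{-n(H-\delta)}$. Discarding the heavier, atypical sequences changes the distribution by at most $\varepsilon$ in statistical distance and leaves a distribution $Y$ with $$ H_\infty(Y) \ge n(H - \delta), $$ so $X^n$ is $\varepsilon$-close to a source of min-entropy roughly $nH$.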
As an example, consider a $p$-biased coin. If you throw $n$ independent coins with bias $p$, then most likely you will get roughly $pn$ heads, and each such outcome has probability roughly $$ p^{pn} (1-p)^{(1-p)n} = 2^{-n h(p)}, $$ where $h(p) = -p \log_2 p - (1-p) \log_2 (1-p)$ is the binary entropy function.
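As a quick numerical illustration of the biased-coin example, here is a minimal Python sketch (the helper names `binary_entropy` and `sequence_log_prob` are illustrative, not from the answer). It samples $n$ biased flips and checks that the per-symbol surprisal of the observed sequence is close to $h(p)$:

```python
import math
import random

def binary_entropy(p):
    """Binary entropy h(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def sequence_log_prob(flips, p):
    """log2 of the probability of a specific 0/1 sequence under i.i.d. Bernoulli(p)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return heads * math.log2(p) + tails * math.log2(1 - p)

p, n = 0.3, 10_000
random.seed(0)
flips = [1 if random.random() < p else 0 for _ in range(n)]

# AEP prediction: -log2 Pr[sequence] is roughly n * h(p) for a typical sequence.
print(f"-log2 Pr[sequence] / n = {-sequence_log_prob(flips, p) / n:.4f}")
print(f"h(p)                   = {binary_entropy(p):.4f}")
```

For these parameters the two printed values should agree to within a few hundredths, which is exactly the "probability roughly $2^{-n h(p)}$" statement above.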
Best Answer from CS Stack Exchange
Question Source : http://cs.stackexchange.com/questions/10862