I heard that random number generation in computers isn't really random, but that there is no efficient algorithm to detect this. How can it be detected at all?
Asked By : URL87
Answered By : SamM
Computers Being Really Random:
True randomness is impossible for Turing machines in a theoretical sense, and most computers can't generate truly random output on their own. Therefore, some modern computers include hardware that lets them sample an outside source which will hopefully contain some randomness. One example of how this can be accomplished is to track small fluctuations in temperature inside the computer. Randomness can be obtained from sources outside the computer as well, but from the tone of your post I don't think that outside sources of randomness are what you're interested in.
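For a rough sense of what this looks like in practice, here is a minimal Python sketch (an illustration added here, not part of the original answer) that reads bytes from the operating system's entropy pool, into which the OS mixes unpredictable hardware events:

    import os

    # Ask the operating system for 16 bytes from its entropy pool.
    # The OS mixes unpredictable hardware events (interrupt timings,
    # device noise, dedicated RNG hardware where available) into this
    # pool, so this is as close to "really random" as most programs get.
    random_bytes = os.urandom(16)
    print(random_bytes.hex())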
Seeds:
Without an outside source of randomness, everything a computer does is deterministic. This leads to a big issue: if you call a random number generation program with the same input, it will give you the same result every time. Clearly, we need a program that outputs a random number to change its behavior each time it's run (otherwise we'll keep getting the same "random" number, which is not particularly helpful). One idea is to give the program some input that changes each time it is run, so that a different number will be output. We call this input a "seed." The random number generator takes in a seed, performs some operations, and gives us a random number.
The current system time is a classic example of a seed. This gives a long string with reasonably high entropy, and if the time is tracked in a sufficiently granular fashion (e.g. if your system clock only ticks in hours, then "time" is a pretty poor seed), you're unlikely to feed the pseudorandom number generator the same number twice.
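As a small illustration (again in Python, not from the original answer), seeding an ordinary PRNG with a fine-grained clock looks like this; note that Python's random module already seeds itself automatically, so the explicit seed is only there to show the mechanism:

    import random
    import time

    # Seed with the current time in nanoseconds.  A fine-grained clock
    # makes it unlikely that two runs ever get the same seed; a clock
    # that only ticked in seconds (or hours!) would make repeats common.
    random.seed(time.time_ns())
    print(random.randint(1, 100))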
Algorithms that are Random Enough:
Now we have an algorithm that at least has some way to be different each time it's run. We give it a seed, and while the algorithm gives the same number when prompted with the same seed, we want the numbers it generates to be random otherwise. This works like the seeding above: the algorithm takes in some input and produces some output (hopefully different enough from the input to look "random").
Now let's say you came up with your own algorithm to do this, and you claim that the numbers you come up with are pretty close to random when you gave it a bunch of different seeds. How would we test how good it is?
Now we want some algorithm that will take in a seed, do some operations, and produce a random number. At the simplest, the algorithm could just output the seed--it's not giving us the same number each time, and random seeds give us random outputs. But clearly that's not what we want. On the other hand, an algorithm can be fairly complicated, like many actual pseudorandom generators. How can we tell which algorithms give us "random" numbers from our not-necessarily-random seeds? If we can't get it exactly, how can we tell which are best?
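To make this concrete, here is a minimal sketch of one classic construction, a linear congruential generator (written in Python; the constants are glibc's traditional rand() parameters, used here purely for illustration):

    def lcg(seed, n, a=1103515245, c=12345, m=2**31):
        # A linear congruential generator: x -> (a*x + c) mod m.
        # Fully deterministic: the same seed always gives the same
        # sequence, yet the outputs look scrambled.
        numbers = []
        x = seed
        for _ in range(n):
            x = (a * x + c) % m
            numbers.append(x)
        return numbers

    print(lcg(42, 5))   # same seed -> same five numbers, every run
    print(lcg(43, 5))   # different seed -> a different sequence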
It's hard to tell which tests are ideal, but it's easy to come up with some minimum requirements these algorithms should meet before we say they give us "random" numbers. Maybe we want to make sure that your algorithm gives even numbers half the time. Maybe we want to make sure that if I ask for a random number between $1$ and $n$, every number in that range is output for some input to your function. Clearly there are a lot of tests we can run; if your algorithm passes some suite of tests, it's a pseudorandom generator. Which tests to use is a very interesting and well-studied area of computer science.
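Here is what the simplest such check might look like in Python (a toy version of the frequency tests found in real suites; the function name and tolerance are my own):

    import random

    def parity_test(numbers, tolerance=0.02):
        # Bare-minimum check: roughly half of the outputs should be
        # even.  Passing says very little; failing means the generator
        # is clearly broken.
        even_fraction = sum(1 for x in numbers if x % 2 == 0) / len(numbers)
        return abs(even_fraction - 0.5) <= tolerance

    samples = [random.randint(1, 10**6) for _ in range(100000)]
    print(parity_test(samples))   # expect True for any decent generator

Real test batteries (Diehard, TestU01, the NIST statistical test suite) run dozens of far more demanding checks of this general shape.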
Random Enough to Fool an Attacker:
Now what you may be referring to is Cryptographically Secure Pseudorandom Generators. I think the best way to explain this is in the context of the above: here, we're using our randomness for cryptography, so when we're designing tests, what we really care about is that someone won't be able to break our security by predicting which random number we picked. I don't know your level of familiarity with cryptography, but imagine we're doing a simple substitution cipher--each letter is replaced with some other letter. We want to pick these replacements randomly, so they're hard for an attacker to guess. But if the attacker can figure out how my random number generator works, he'll be able to solve the whole cipher! Therefore, cryptographic algorithms require random number generators that are specifically hard to predict. Specific cryptographic algorithms may require additional tests (like for some sort of nice-enough distribution, as mentioned above).
For this reason, CSPRGs are defined in terms of how well other algorithms can predict them (which is where we finally come to your question). Specifically, let's say I have a CSPRG which I'll call $R$. $R$ is a CSPRG if and only if there is no feasible algorithm that can guess which bit it will output next with probability non-negligibly better than $1/2$. This is true even if you know all the previous bits it has output!
So let's say that the first five bits my CSPRG has output are 10100. You don't know the input I used, but you have access to the code of my CSPRG. Then the claim is that it's infeasible for you to write a program that decides whether the output will continue as 101000 or as 101001 (that is, whether the next bit is a 0 or a 1) noticeably better than guessing at random.
So, for reasons of cryptography, how well a pseudorandom number generator does is sometimes defined in terms of how predictable it is to other programs. Note that this still captures much of the intuition of "randomness": a generator whose outputs are all odd, say, is neither cryptographically secure nor passes a common-sense randomness test.
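On the practical side, most platforms expose a CSPRNG directly, so you rarely need to build one yourself; in Python, for instance, the standard-library secrets module wraps the operating system's CSPRNG (shown here as an added illustration):

    import secrets

    # secrets draws from the OS CSPRNG, which is designed so that no
    # feasible algorithm can predict the next bit even after seeing
    # all previous output.
    token = secrets.token_hex(16)    # 32 hex chars, e.g. for a key
    roll = secrets.randbelow(6) + 1  # an unpredictable die roll, 1..6
    print(token, roll)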
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/7729