I have noticed that a lot of problems in L and NL use binary numbers. I don't understand why this is the case. Does a TM use less space by storing a binary number than a "normal" one? In my head, it uses less space to store the number 0 than the binary equivalent 0000. Hope someone can help me understand :)
Asked By : user2795095
Answered By : Yuval Filmus
The "usual" encoding of numbers is binary. Binary can mean two things:
- Encoding a number in the range $0,\ldots,2^n - 1$ using $n$ bits.
- Encoding an arbitrary natural number.
In the latter case, the "width" or "length" of the number is not known in advance, and so some special encoding has to be used. An especially simple one encodes a number $b_0 \ldots b_{n-1}$ as $$ 1b_0 1b_1 \ldots 1b_{n-1} 0. $$ This is known as a self-delimiting encoding. If you allow the input alphabet to be larger, then you can use a non-binary character as a separator instead, which is a popular convention.
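As a sketch of the scheme above (the function names here are illustrative, not from the answer), each bit $b_i$ is prefixed with a 1 and the whole string is terminated with a 0, so a decoder always knows where the number ends, even when several numbers are concatenated:

```python
def self_delimit(bits: str) -> str:
    """Encode a bit string b0...b(n-1) as 1b0 1b1 ... 1b(n-1) 0."""
    return "".join("1" + b for b in bits) + "0"

def parse_self_delimited(s: str):
    """Read one self-delimited number off the front of s.
    Returns (the decoded bits, the remainder of the string)."""
    bits = []
    i = 0
    while s[i] == "1":   # a leading 1 means "one more data bit follows"
        bits.append(s[i + 1])
        i += 2
    return "".join(bits), s[i + 1:]  # s[i] == "0" is the terminator

# Two numbers concatenated can still be split unambiguously.
enc = self_delimit("101") + self_delimit("11")
first, rest = parse_self_delimited(enc)   # first == "101", rest == "11110"
```

This doubles the length (each data bit costs two encoded bits), which is fine: it only changes the encoding length by a multiplicative constant.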
We are more used to decimal numbers rather than binary numbers. In terms of encoding length, the two differ by a multiplicative constant, so binary encoding and decimal encoding are equivalent for most purposes. Binary encoding is used since sometimes we imagine the input encoded as a string over $\{0,1\}$ (this is the case when we talk about circuits, for example).
The other option is unary encoding, in which the number $n$ is encoded by $1^n 0$ (that is, $n$ times the bit 1 followed by the bit 0).
Encoding $n$ takes $n+1$ bits in unary, but only $\Theta(\log n)$ bits in binary. This can make a huge difference in complexity theory, since the running time is measured in the size of the input. For many problems, however, the number parameters pale in length compared to other inputs (say an input graph), and so the exact encoding used is not important.
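To see the gap concretely, here is a small comparison of the two encodings just described (a sketch; the helper names are mine):

```python
def unary(n: int) -> str:
    """n ones followed by a terminating zero: n + 1 bits."""
    return "1" * n + "0"

def binary(n: int) -> str:
    """Standard binary representation: about log2(n) bits."""
    return bin(n)[2:]

n = 1000
print(len(unary(n)))    # 1001 bits
print(len(binary(n)))   # 10 bits
```

An algorithm that is polynomial in the unary input length $n+1$ may be exponential in the binary input length $\Theta(\log n)$, which is exactly why the choice of encoding matters in complexity theory.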
Here is an example in which encoding is important. Many cryptographic algorithms include a subroutine that accepts $n$ and outputs a key of length $n$. If the input is encoded in binary then its length is $\Theta(\log n)$, and so the subroutine must run in exponential time (since the output is of length $n$). If the input is encoded in unary then there is no such restriction. This is why in such algorithms, $n$ must be encoded in unary.
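A minimal sketch of such a subroutine, assuming a hypothetical `keygen` that receives the security parameter in unary: because the input already has length $n+1$, writing out an $n$-bit key is linear in the input size rather than exponential.

```python
import secrets

def keygen(unary_n: str) -> bytes:
    """Hypothetical key-generation subroutine. The security parameter
    arrives in unary (n ones followed by a zero), so the input has
    length n + 1 and producing an n-bit key takes time polynomial
    (in fact linear) in the input length."""
    n = unary_n.count("1")
    return secrets.token_bytes((n + 7) // 8)  # roughly n random bits

key = keygen("1" * 128 + "0")   # a 128-bit (16-byte) key
```

Had `keygen` taken $n$ in binary, its input would have length $\Theta(\log n)$ and merely writing the $n$-bit output would already be exponential in the input size.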
In other cases, encoding an input parameter in unary might make the problem too easy, and so it is important to encode it succinctly. Unfortunately, I can't think of any such problem right now, since as stated before, usually the number parameters are dwarfed in length by other inputs.
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/25970