
[Solved]: Computing the number of bits of a large power of an integer

Problem Detail: 

Given two integers $x$ and $n$ in binary representation, what is the complexity of computing the bit-size of $x^n$?

One way to do so is to compute $1+\lfloor \log_2(x^n)\rfloor=1+\lfloor n\log_2(x)\rfloor$ by computing an approximation of $\log_2(x)$ with sufficient precision. It appears that computing $\log_2(x)$ with $k$ bits of precision can be done in time $O(M(k)\log k)$, where $M(k)$ is the time needed to compute the product of two integers of length $k$. Since roughly $k=\Theta(s)$ bits of precision are needed here, this yields a (not especially simple) algorithm of complexity approximately $O(s\log^2 s)$, where $s$ is a bound on the bit-size of both $x$ and $n$ (if I made no error).
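For concreteness, here is a minimal sketch of this approach in Python. The shift-and-square routine for the fixed-point logarithm, the guard-bit count, and the precision choice are illustrative assumptions of mine rather than a certified implementation; in particular, the computed floor can be off by one when $n\log_2(x)$ falls extremely close to an integer, and a rigorous version would detect that case and increase the precision.

```python
def log2_fixed(x, k):
    """Approximate log2(x) in fixed point: returns an integer L with
    L / 2**k close to log2(x), via the classic shift-and-square method.
    The guard-bit count below is a heuristic choice for this sketch."""
    p = 2 * k + 8                 # working precision (heuristic guard bits)
    e = x.bit_length() - 1        # integer part: floor(log2(x))
    y = (x << p) >> e             # x / 2**e in [1, 2), with p fraction bits
    f = 0
    for _ in range(k):            # extract k fractional bits of log2(x)
        y = (y * y) >> p          # squaring doubles the log
        f <<= 1
        if y >= (1 << (p + 1)):   # y >= 2: next fractional bit is 1
            f |= 1
            y >>= 1               # renormalize into [1, 2)
    return (e << k) | f

def bitsize_of_power(x, n):
    """Bit-size of x**n, computed as 1 + floor(n * log2(x)) without
    forming x**n.  Assumes x >= 1 and n >= 0; the precision choice is
    heuristic, so the floor may be wrong when n*log2(x) is extremely
    close to an integer."""
    if x == 1 or n == 0:
        return 1
    k = n.bit_length() + 16       # heuristic precision
    return 1 + ((n * log2_fixed(x, k)) >> k)

# Sanity check against the exact value on small inputs:
for x, n in [(3, 100), (10, 50), (2, 7), (12345, 67)]:
    assert bitsize_of_power(x, n) == (x ** n).bit_length()
```

For a Turing-machine-level analysis one would of course replace Python's built-in arithmetic with explicit integer multiplication, but the structure of the computation is the same.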

Can we beat $O(s\log^2 s)$, where $s$ is the size of $x$ and $n$ (in the case where they have comparable sizes)? Is there a simple algorithm achieving this complexity, or better?

Note: I am interested in the complexity in a theoretical model such as Turing machines.

Asked By : Bruno

Answered By : Bruno

[edit] As suggested, I have edited my answer to give more details.

The answer to my second question is no:

Proposition. Computing the bit-size of $x^{2^k}$ is at least as hard as computing $\log_2(x)$ up to precision $k$.

Proof. Let $|y|$ denote the bit-size of an integer $y$. First notice that for a positive integer $y$, the bit-size of $y$ is $1+\lfloor \log_2 y\rfloor$.

Thus, $\left|x^{2^k}\right| = 1+\lfloor 2^k\log_2 x\rfloor$. Now $2^k\log_2(x)$ is just $\log_2(x)$ shifted $k$ positions to the left in binary. Thus one can compute $\log_2(x)$ to precision $k$ by simply subtracting $1$ from the bit-size of $x^{2^k}$ and shifting the result $k$ positions to the right.
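This identity is easy to check numerically. Below is a small illustration in Python (the power is computed exactly here, purely to exhibit the identity; the variable names are mine):

```python
import math

x, k = 3, 20

# Bit-size of x^(2^k), computed exactly for the demonstration.
bits = (x ** (1 << k)).bit_length()

# By the proposition, bits - 1 = floor(2^k * log2(x)): that is, log2(x)
# shifted k positions to the left.  Shifting back by k positions
# recovers log2(x) to k bits of precision.
approx = (bits - 1) / 2 ** k
print(approx, math.log2(x))  # the two values agree to about k bits
```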


Question Source : http://cs.stackexchange.com/questions/27966
