
[Solved]: FFT-less $O(n\log n)$ algorithm for pairwise sums

Problem Detail: 

Suppose we are given $n$ distinct integers $a_1, a_2, \dots, a_n$, such that $0 \le a_i \le kn$ for all $i$, for some constant $k \gt 0$.

We are interested in finding the counts of all the possible pairwise sums $S_{ij} = a_i + a_j$. ($i = j$ is allowed).

One algorithm is to construct the polynomial $P(x) = \sum_{j=1}^{n} x^{a_j}$, of degree $\le kn$, compute its square using the fast Fourier transform, and read off the counts from the exponents and coefficients of the resulting polynomial. This is an $O(n \log n)$ time algorithm.
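(The question doesn't include code, but here is a minimal sketch of this FFT approach, assuming numpy is available; the name `pairwise_sum_counts` is illustrative, not from the post:

import numpy as np

def pairwise_sum_counts(a):
    # Build P(x) = sum_j x^{a_j} as a dense coefficient vector.
    m = max(a) + 1
    p = np.zeros(m)
    p[list(a)] = 1
    # P^2 has degree 2(m-1), i.e. 2m-1 coefficients; pad to a power of two.
    size = 2 * m - 1
    fft_len = 1 << (size - 1).bit_length()
    q = np.fft.rfft(p, fft_len)
    coeffs = np.fft.irfft(q * q, fft_len)[:size]
    # Coefficient of x^s counts ordered pairs (i, j) with a_i + a_j = s.
    return {s: int(round(c)) for s, c in enumerate(coeffs) if c > 0.5}

print(pairwise_sum_counts([0, 1, 3]))
# -> {0: 1, 1: 2, 2: 1, 3: 2, 4: 2, 6: 1}

For $a = (0, 1, 3)$, $P(x) = 1 + x + x^3$ and $P^2(x) = 1 + 2x + x^2 + 2x^3 + 2x^4 + x^6$, matching the output above.)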

I have two questions:

  • Is there an $O(n \log n)$ algorithm which does not use FFT?

  • Are better algorithms known (i.e., $o(n \log n)$)? (FFT allowed.)

Asked By : Aryabhata

Answered By : Realz Slaw

It would seem that this problem is equivalent to integer/polynomial squaring:

1. It is known that polynomial multiplication is equivalent to integer multiplication.

2. Obviously, you already reduced the problem to polynomial/integer squaring; therefore this problem is at most as hard as squaring.

Now I will reduce integer squaring to this problem:

Suppose you had an algorithm:

$$ F(\mathbf{\vec a})\rightarrow P^2(x),\\ \text{where } P(x)=\sum_{a_i \in \mathbf{\vec a}} x^{a_i} $$

This algorithm is essentially the one you request in your question. Thus, if I had a magic algorithm that could do this, I could write a function ${\rm S{\small QUARE}}\left(y\right)$ that squares the integer $y$ (oh yes, I do love mathjax :P):

$$ \begin{array}{rlr}\hline &\mathbf{\text{Algorithm 1}} \text{ Squaring}&\hspace{2em}&\\ \hline {\tiny 1.:}&\mathbf {\text{procedure }} {\rm S{\small QUARE}}\left(y\right):\\ {\tiny 2.:}&\hspace{2em}\mathbf {\vec a} \leftarrow () &&\triangleright~\mathbf {\vec a}\text{ starts as empty polynomial sequence}\\ {\tiny 3.:}&\hspace{2em}i \leftarrow 0\\ {\tiny 4.:}&\hspace{2em}\mathbf{while}~y\ne0~\mathbf{do} &&\triangleright~\text{break }y\text{ down into a polynomial of base }2\\ {\tiny 5.:}&\hspace{4em}\mathbf{if}~y~\&~1~\mathbf{then} &&\triangleright~\text{if lsb of }y\text{ is set}\\ {\tiny 6.:}&\hspace{6em}\mathbf{\vec a} \leftarrow \mathbf{\vec a}i &&\triangleright~\text{append }i\text{ to }\mathbf{\vec a}~\text{(appending }x^i\text{)}\\ {\tiny 7.:}&\hspace{4em}\mathbf{end~if}\\ {\tiny 8.:}&\hspace{4em}i \leftarrow i + 1\\ {\tiny 9.:}&\hspace{4em}y \leftarrow y \gg 1 &&\triangleright~\text{shift }y\text{ right by one}\\ {\tiny 10.:}&\hspace{2em}\mathbf{end~while}\\ {\tiny 11.:}&\hspace{2em}P^2(x) \leftarrow F\left(\mathbf{\vec a}\right) &&\triangleright~\text{obtain the squared polynomial via } F\left(\mathbf{\vec a}\right)\\ {\tiny 12.:}&\hspace{2em}\mathbf{return}~P^2(2) &&\triangleright~\text{simply sum up the polynomial}\\ {\tiny 13.:}&\mathbf {\text{end procedure}}\\ \hline &\end{array} $$

Python (test with codepad):

# http://cs.stackexchange.com/q/11418/2755

def F(a):
    n = len(a)
    for i in range(n):
        assert a[i] >= 0

    # S maps each exponent r to its coefficient, i.e. coefficient * x^r
    S = {}
    for ai in a:
        for aj in a:
            r = ai + aj
            if r not in S:
                S[r] = 0
            S[r] += 1

    return list(S.items())

def SQUARE(x):
    x = int(x)

    # break x down into a polynomial in base 2: collect the set bit positions
    a = []
    i = 0
    while x != 0:
        if x & 1 == 1:
            a += [i]
        x >>= 1
        i += 1

    print('a:', a)
    P2 = F(a)
    print('P^2:', P2)

    # evaluate P^2 at x = 2 to recover the squared integer
    s = 0
    for e, c in P2:
        s += (1 << e) * c
    return s
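For instance, on Python 3.7+ (where dicts preserve insertion order, so the coefficient list comes out in order of first appearance):

>>> SQUARE(7)
a: [0, 1, 2]
P^2: [(0, 1), (1, 2), (2, 3), (3, 2), (4, 1)]
49

Here $7 = 2^0 + 2^1 + 2^2$, so $P(x) = 1 + x + x^2$, $P^2(x) = 1 + 2x + 3x^2 + 2x^3 + x^4$, and $P^2(2) = 49 = 7^2$.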

3. Thus, squaring is at most as hard as this problem.

4. Therefore, integer squaring is equivalent to this problem (each is at most as hard as the other, by (1), (2), and (3)).

Now, it is not known whether integer/polynomial multiplication admits an $\mathcal O(n\log n)$-time algorithm, let alone anything better; in fact, the best multiplication algorithms currently all use FFT and have run-times slightly above that, such as $\mathcal O(n \log n \log \log n)$ (Schönhage-Strassen algorithm) and $\mathcal O\left(n \log n\,2^{\mathcal O(\log^* n)}\right)$ (Fürer's algorithm). Arnold Schönhage and Volker Strassen conjectured a lower bound of $\Omega\left(n \log n\right)$, and so far this conjecture seems to be holding up.

This doesn't mean your use of FFT is quicker, either; the $\mathcal O\left(n\log n\right)$ for FFT counts arithmetic operations (I think), not bit operations, so it hides the cost of the smaller coefficient multiplications; applied recursively, it comes closer to the FFT multiplication algorithms listed above (see "Where is the mistake in this apparently-O(n lg n) multiplication algorithm?").
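To sketch why (a back-of-the-envelope recurrence, not from the linked question): multiplying two $n$-bit integers by FFT splits them into $\Theta(n/\log n)$ coefficients of $\mathcal O(\log n)$ bits each, and the FFT butterflies and pointwise products perform $\mathcal O(n)$ multiplications of such coefficients. Counting bit operations therefore gives roughly

$$ T(n) \;=\; \mathcal O(n \log n) + \mathcal O(n)\, T\!\left(\mathcal O(\log n)\right) \;=\; \mathcal O\!\left(n \log n \,\log\log n \,\log\log\log n \cdots\right), $$

which lands in the same ballpark as the bit-complexity bounds above rather than at $\mathcal O(n \log n)$.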

5. Now, your problem is not exactly multiplication; it is squaring. So is squaring easier? That is an open problem (so, for now, no): squaring is not known to admit a faster algorithm than multiplication. If you could find a better algorithm for your problem than going through multiplication, that would likely be a breakthrough.

So as of now, the answer to both of your questions is no: all known $\sim\mathcal O(n\log n)$ multiplication algorithms use FFT, and squaring is as hard as multiplication. And unless a faster algorithm for squaring is found, or multiplication breaks the $\mathcal O(n\log n)$ barrier, your problem cannot be solved faster than $\mathcal O(n \log n)$; in fact, it cannot currently be solved in $\mathcal O(n\log n)$ either, since the best multiplication algorithms only approach that complexity.


Question Source : http://cs.stackexchange.com/questions/11418
