
# [Solved]: If A is polynomial time reducible to B such that B <= A, does it mean B must be a polynomial time algorithm?

Problem Detail:

I don't understand what it means for A to be polynomial time reducible to B. My guess is that we can somehow revise the algorithm so that it becomes B, where B is a polynomial time algorithm.

Can someone point out if I have this completely wrong?

#### Answered By : Yuval Filmus

A language $A$ is polynomial time reducible to a language $B$ if there is a polynomial time computable function $f$ such that $x \in A$ iff $f(x) \in B$. In particular, if $B$ can be solved in polynomial time, then so can $A$.
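To make the definition concrete, here is a minimal sketch of a classic many-one reduction, from INDEPENDENT-SET to CLIQUE: map an instance $(G, k)$ to $(\bar{G}, k)$, where $\bar{G}$ is the complement graph. The graph encoding and the brute-force verifiers below are illustrative choices, not part of the answer; only the reduction itself matters.

```python
from itertools import combinations

def complement_graph(vertices, edges):
    """Edge set of the complement graph: all pairs minus the given edges."""
    edges = {frozenset(e) for e in edges}
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - edges

def reduce_is_to_clique(vertices, edges, k):
    # The reduction f: an INDEPENDENT-SET instance (G, k) maps to the
    # CLIQUE instance (complement of G, k). Computing the complement is
    # clearly polynomial time, and G has an independent set of size k
    # iff its complement has a clique of size k.
    return vertices, complement_graph(vertices, edges), k

def has_clique(vertices, edges, k):
    # Brute-force verifier (exponential time; only for checking the
    # reduction on small graphs, NOT part of the reduction).
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset(p) in edges for p in combinations(s, 2))
               for s in combinations(sorted(vertices), k))

def has_independent_set(vertices, edges, k):
    # Brute-force verifier for the source problem.
    edges = {frozenset(e) for e in edges}
    return any(all(frozenset(p) not in edges for p in combinations(s, 2))
               for s in combinations(sorted(vertices), k))
```

Note that the reduction runs in polynomial time even though both problems are NP-complete; the hardness lives in $B$, not in $f$.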

There are other notions of reducibility in complexity theory, some stronger than this and some weaker. A weaker notion is polynomial time oracle reducibility: $A$ is polynomial time oracle-reducible to $B$ if $A$ can be solved in polynomial time given oracle access to $B$ (this can be defined formally). Stronger notions put more constraints on $f$, such as being computable in logarithmic space.

Weaker notions of reducibility could seem more natural: if $A$ is polynomial time oracle-reducible to $B$ and $B$ can be solved in polynomial time, then so can $A$. Why would we want to put more constraints on our reduction? For example, assuming $\mathsf{NP} \neq \mathsf{coNP}$ (which is a common assumption), an NP-complete language cannot be polynomially many-one reduced to its complement. For example, SAT, the language consisting of all satisfiable CNFs, isn't reducible to coSAT, the language consisting of all unsatisfiable CNFs. Intuitively, this seems ludicrous, since if you could find out whether a CNF is unsatisfiable, surely you could find out whether a CNF is satisfiable!
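The intuition in the last sentence is exactly an oracle reduction: coSAT is trivially polynomial time oracle-reducible to SAT, with a single query whose answer is negated. A minimal sketch, where the brute-force `sat_oracle` is just a stand-in for the black-box oracle and the clause encoding (sets of `(variable, polarity)` literals) is an assumption of this example:

```python
from itertools import product

def sat_oracle(clauses, variables):
    # Stand-in for the SAT oracle: brute-force over all assignments.
    # A clause is a set of literals; a literal is (variable, polarity).
    return any(
        all(any(assignment[v] == pol for v, pol in clause)
            for clause in clauses)
        for assignment in (dict(zip(variables, bits))
                           for bits in product([False, True],
                                               repeat=len(variables)))
    )

def is_unsatisfiable(clauses, variables):
    # Oracle reduction: coSAT is decided with ONE query to the SAT
    # oracle, negating its answer. Under an oracle reduction this is
    # free, which is why oracle reducibility cannot separate a class
    # from its complement; no many-one reduction from coSAT to SAT is
    # known, and none exists unless NP = coNP.
    return not sat_oracle(clauses, variables)
```

This is why oracle (Turing) reducibility blurs the distinction between $\mathsf{NP}$ and $\mathsf{coNP}$ that many-one reducibility preserves.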

Indeed, Cook first defined NP-hard problems using oracle reducibility rather than the modern many-one reducibility (described in the first paragraph of this answer). The problem with oracle reducibility is that it is hard to reason about, whereas many-one reducibility leads to a richer complexity theory. Also, the polynomial hierarchy, which mirrors the classical arithmetical hierarchy from recursion theory, only makes sense using many-one reducibility. Perhaps in the future you will be able to formulate your own opinion on this matter.

When studying the internal structure of $\mathsf P$, we must use stronger notions, since all non-trivial problems in $\mathsf P$ are polynomially reducible to each other (the only exceptions are $\emptyset$ and $\Sigma^*$). The most common notions are logspace reducibility, in which $f$ has to be computable in logarithmic space, and the even stronger AC0 reducibility (also known as first-order reducibility), in which $f$ has to be computable by uniform polynomial size circuits having constant depth.