
# [Solved]: "Compressing" rationals given error bounds

Problem Detail:

I'm working on implementing some exact real arithmetic operations for fun. I have a rough outline of how I want to do things and have worked out most of the important algorithms (though I have not properly implemented them yet).

Going against the "premature optimization is bad, m'kay" philosophy, I am wondering about a few things. Reducing fractions is one way to shrink the size of rationals, sure, but it isn't perfect. If you only need to be within a certain range of the true value, you can do better by changing the value itself. For instance, say I need to add $1$ and $\frac{1}{8}$ to within $\frac{1}{2}$. I don't need full precision; in fact I can just return $1$. More usefully, say I had something like $\frac{2^{16}+1}{2^{32}}$: if I only need, say, $\frac{1}{2^{15}}$ precision, I can approximate this rational as $\frac{2^{16}}{2^{32}}$ and then reduce it to $\frac{1}{2^{16}}$.
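To make the two examples concrete, here is a quick check using Python's `fractions.Fraction` (the `within` helper is my own, just for illustration):

```python
from fractions import Fraction

def within(approx, exact, eps):
    """Check that |approx - exact| <= eps."""
    return abs(approx - exact) <= eps

# First example: 1 + 1/8 = 9/8; with an allowed error of 1/2,
# the one-bit answer 1 is already close enough.
exact = Fraction(1) + Fraction(1, 8)
assert within(Fraction(1), exact, Fraction(1, 2))

# Second example: (2^16 + 1) / 2^32 with allowed error 1/2^15.
# Dropping the +1 gives 2^16 / 2^32 = 1/2^16 (Fraction reduces this
# automatically), and the error introduced is only 1/2^32.
exact = Fraction(2**16 + 1, 2**32)
approx = Fraction(1, 2**16)
assert within(approx, exact, Fraction(1, 2**15))
```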

These are contrived cases, but I think there is a generally useful optimization here. Are there algorithms which, given an error bound, can find the smallest (in bits) rational number within the specified bounds of another given rational number?

Most of my algorithms tend to produce rationals with power-of-two denominators, so I am particularly interested in that case.
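For the power-of-two case, one simple approach (not from the original post; the function name and round-half-up choice are my own) is to round the numerator of $\frac{p}{2^k}$ to the nearest multiple of $2^{k-m}$, which keeps the result within $\frac{1}{2^{m+1}}$ of the original while needing only about $m$ bits of denominator:

```python
from fractions import Fraction

def round_dyadic(p, k, m):
    """Round p / 2^k to the nearest multiple of 1 / 2^m.

    For m <= k and p >= 0, the result q / 2^m satisfies
    |p/2^k - q/2^m| <= 1/2^(m+1); for m > k the value is exact.
    """
    shift = k - m
    if shift > 0:
        # Round half up to the nearest multiple of 2^shift.
        q = (p + (1 << (shift - 1))) >> shift
    else:
        q = p << -shift  # m >= k: no precision is lost
    return Fraction(q, 1 << m)
```

On the example above, `round_dyadic(2**16 + 1, 32, 16)` drops the stray `+1` and yields `Fraction(1, 65536)`.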

A brute-force solution is to trace the line $ay = bx$ in integer coordinates from the origin to $(a, b)$. Any pair with a smaller number of bits will lie along that line segment. This method can find either the pair at minimum distance or the smallest $(x, y)$ within the error bounds. It requires $O(a+b)$ steps, and there is some redundancy, since e.g. both $(x, y)$ and $(kx, ky)$ might be found for $k > 1$.

Tracing the line refers to iteratively moving either up or right, choosing whichever move minimizes the distance to the line; it is a common technique in raster graphics (essentially the idea behind Bresenham's line algorithm).
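The brute-force trace above might be sketched as follows (a sketch under my own interpretation of the answer: the lattice point $(d, n)$ stands for the fraction $n/d$, the walk runs from the origin to $(b, a)$, and the first in-bounds point found has a small numerator and denominator because points are visited in order of increasing $d + n$; the function name is invented):

```python
from fractions import Fraction

def simplest_in_bounds(a, b, eps):
    """Walk the lattice from (0, 0) toward (b, a), each step moving
    right (denominator + 1) or up (numerator + 1), whichever lands
    closer to the line b*n = a*d. Return the first fraction n/d that
    is within eps of a/b. Takes O(a + b) steps.
    """
    target = Fraction(a, b)
    d, n = 0, 0
    while (d, n) != (b, a):
        # |b*n - a*d| is proportional to the distance from the line.
        err_right = abs(b * n - a * (d + 1))
        err_up = abs(b * (n + 1) - a * d)
        if err_right <= err_up:
            d += 1
        else:
            n += 1
        if d > 0 and abs(Fraction(n, d) - target) <= eps:
            return Fraction(n, d)
    return target  # endpoint reached; exact value (eps >= 0 always hits above)
```

For the example from the question, `simplest_in_bounds(9, 8, Fraction(1, 2))` finds $1$ after only two steps; note the walk is linear in $a + b$, so it is impractical for the $2^{32}$-denominator example.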