I am looking for algorithms to maximize a strictly monotonic (increasing) function $f$ subject to an upper bound $y$ on its value:
$f : [a,b] \longrightarrow [c,d] \qquad \text{where } [a,b] \subset \mathbb{N},\ [c,d] \subset \mathbb{N}$
That is, I want to find $x^* = \arg\max_x \{\, f(x) : f(x) < y \,\}$.
My first idea was to use a variant of binary search: pick a point $x$ in $[a,b]$; if $f(x) \geq y$ then we eliminate $[x, b]$, and if $f(x) < y$ we keep $x$ as the best candidate so far and eliminate $[a, x)$. We repeat this procedure until the interval is exhausted.
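The variant above can be sketched as follows (a minimal sketch; the function name and interface are my own, not from the question):

```python
def last_below(f, a, b, y):
    """Largest x in the integer interval [a, b] with f(x) < y,
    for a strictly increasing f. Returns None if no such x exists.
    (Illustrative sketch; name and interface are assumptions.)"""
    if f(a) >= y:
        return None          # even the smallest value violates the bound
    lo, hi = a, b            # invariant: f(lo) < y, answer lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2   # round up so lo always advances
        if f(mid) < y:
            lo = mid         # mid satisfies the constraint: keep it
        else:
            hi = mid - 1     # eliminate [mid, b]
    return lo
```

For example, `last_below(lambda x: 3 * x + 1, 0, 100, 50)` returns `16`, since $3\cdot16+1 = 49 < 50$ but $3\cdot17+1 = 52 \geq 50$.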
Do you have any other ideas for maximizing the function $f$?
Asked By : Ghassen Hamrouni
Answered By : Merbs
Given the discrete intervals $[a,b]\subset\mathbb{N}$ and $[c,d]\subset\mathbb{N}$, we might be able to do slightly better than midpoint binary search, without resorting to gradient descent.
Instead, we could run binary search with the probe $x=a+\lfloor\frac{y-c}{d-c}(b-a)\rfloor$ (essentially interpolation search), which corresponds to the intersection of the dotted purple line denoting $y$ with the line from $f(a)=c$ to $f(b)=d$ in the figure.
The red and blue lines represent the extremes of strictly monotonic functions. When $f$ is linear, this probe immediately finds the closest $x$ (a big improvement over binary search with the midpoint). Without additional information, I think linearity is the best assumption you can make; there are as many concave functions as convex functions, as well as functions that switch between the two.
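The interpolation probe above can be sketched like this (a sketch under the assumptions that $f$ is strictly increasing and $f(a) < y$; the function name is illustrative):

```python
def interpolation_probe_search(f, a, b, y):
    """Largest x in [a, b] with f(x) < y, probing at the linear
    interpolation point lo + floor((y - f(lo)) / (f(hi) - f(lo)) * (hi - lo))
    instead of the midpoint. Assumes f is strictly increasing and f(a) < y.
    (Illustrative sketch.)"""
    lo, hi = a, b            # invariant: f(lo) < y, answer lies in [lo, hi]
    while lo < hi:
        c, d = f(lo), f(hi)  # d > c because f is strictly increasing
        x = lo + (y - c) * (hi - lo) // (d - c)   # interpolation probe
        x = max(lo + 1, min(x, hi))               # clamp to guarantee progress
        if f(x) < y:
            lo = x           # x satisfies the constraint: keep it
        else:
            hi = x - 1       # eliminate [x, b]
    return lo
```

On a linear $f$ the first probe lands on or next to the answer, while on heavily skewed functions it can degrade toward a linear number of probes, just as classic interpolation search does on arrays.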
One scenario where gradient descent offers no advantage is when the array $[a,a+1,\dots,b]$ is simply mapped to another array of the same size whose values are drawn at random from $[c,d]$ and then sorted.
Otherwise, you could use Newton's method as suggested by Dave Clarke or gradient descent as suggested by Raphael, which I just learned are different.
Best Answer from CS Stack Exchange
Question Source : http://cs.stackexchange.com/questions/1353