Why is it that Iterative-deepening A* is optimal, even without monotonicity? How can I be sure that the first goal reached is the optimal one?
Asked By : Lieven Cardoen
Answered By : Carlos Linares López
In a nutshell: IDA* traverses all paths of iteratively larger cost and stops only when it expands (as opposed to generates) the goal node, at which point all paths of lower cost have already been examined. Monotonicity (or consistency) is not required, but admissibility of the heuristic function is a must if optimal solutions are required.
Now, how can we achieve that? It is simple. Drop the condition of monotonicity and focus instead on the condition of admissibility, i.e., $h(n)\leq h^*(n)$, where $h(n)$ is the heuristic estimate of the cost to reach the goal $t$ from node $n$ and $h^*(n)$ is the optimal cost of reaching it. In other words, all that is required is a function $h(n)$ that provides a lower bound on the cost-to-go.
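As a concrete illustration (my own example, not part of the original answer), the Manhattan distance on a 4-connected grid with unit step costs is admissible: no path between two cells can ever be shorter than it.

```python
def manhattan(node, goal):
    """Admissible heuristic for a 4-connected grid with unit step costs:
    the Manhattan distance never overestimates the true remaining cost,
    so h(n) <= h*(n) holds for every node n."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)
```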
In what follows, bear in mind that IDA$^*$ is nothing more and nothing less than a depth-first search algorithm that prunes children whose $f(n)$ value exceeds a given threshold. As in A$^*$, $f(n)=g(n)+h(n)$, where $g(n)$ is the cost of the path from the start state $s$ to $n$. All we have to do is decide how to handle $f(n)$ while pursuing our goal ...
Step 1: Since we have a lower bound on the cost of reaching the goal, start IDA$^*$ with a threshold equal to $h(s)$, the heuristic estimate of the start state. This simply amounts to traversing all paths stemming from $s$ until a node $n$ is found such that $f(n) > h(s)$. This operation is necessarily finite in locally finite graphs (i.e., those with a bounded branching factor). While generating paths, note that you stop exactly at those nodes $n$ that meet the aforementioned condition, so you can easily record the minimum excess over $h(s)$.
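To make Step 1 concrete, here is a minimal sketch of that bounded depth-first traversal in Python. The helpers `successors`, `cost`, `h` and `is_goal` are hypothetical and do not appear in the original answer; the point is only to show where the pruning happens and where the minimum excess over the threshold is recorded.

```python
import math

def bounded_dfs(node, g, threshold, successors, cost, h, is_goal, path):
    """One IDA* iteration: a depth-first search that prunes every node whose
    f(n) = g(n) + h(n) exceeds the threshold.  Returns (found, value), where
    value is the solution cost if found, and otherwise the minimum f-value
    that exceeded the threshold (the "minimum excess")."""
    f = g + h(node)
    if f > threshold:
        return False, f                       # prune and report the excess
    if is_goal(node):                         # goal test on *expansion*
        return True, g
    minimum_excess = math.inf
    for child in successors(node):
        if child in path:                     # skip cycles along the current path
            continue
        path.append(child)
        found, value = bounded_dfs(child, g + cost(node, child), threshold,
                                   successors, cost, h, is_goal, path)
        path.pop()
        if found:
            return True, value
        minimum_excess = min(minimum_excess, value)
    return False, minimum_excess
```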
Step 2: Once the first step is over, you know two things: first, no path of cost at most $h(s)$ reaches the target; second, the cheapest paths from $s$ that have not yet been traversed have a cost equal to the minimum excess computed in the previous iteration. This is all you need. Proceed now as in Step 1, but using that minimum excess as the threshold. If an iteration ends without success, repeat the process until a solution is found.
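Putting Steps 1 and 2 together, the outer loop might look like the following sketch, which reuses the hypothetical `bounded_dfs` helper from above: start with the threshold $h(s)$ and, after each unsuccessful iteration, restart with the minimum excess just computed.

```python
def ida_star(start, successors, cost, h, is_goal):
    """Iterative-deepening A*: repeat the bounded depth-first search,
    raising the threshold to the minimum excess of the previous iteration,
    until a goal node is expanded within the current threshold."""
    threshold = h(start)                      # Step 1: initial lower bound
    while True:
        found, value = bounded_dfs(start, 0, threshold,
                                   successors, cost, h, is_goal, [start])
        if found:
            return value                      # cost of an optimal solution
        if value == math.inf:                 # nothing exceeded the threshold:
            return None                       # the goal is unreachable
        threshold = value                     # Step 2: new threshold = minimum excess
```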
Notice the remark I made in the first paragraph: given the behaviour of this algorithm, optimality (the admissibility of the algorithm) is guaranteed only if you stop when expanding the goal node, because by then all paths of lower cost have already been traversed. If you ever stop when merely generating the goal, you might incur suboptimal solutions, since the path through which the goal was generated might have a cost that exceeds the current threshold of that iteration.
Nowhere is consistency required. From Steps 1 and 2, a proof of correctness can easily be derived by induction on the number of iterations.
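For completeness, here is one shape such an induction might take (my own sketch, not part of the original answer). Claim: every threshold $T$ used by the algorithm satisfies $T\leq C^*$, where $C^*$ is the optimal solution cost. Base case: $T=h(s)\leq h^*(s)=C^*$ by admissibility. Inductive step: if an iteration with threshold $T\leq C^*$ fails, then some node $n$ on an optimal path must have been pruned in it (otherwise the goal would have been expanded along that path), and by admissibility $f(n)\leq C^*$, so the next threshold, being the minimum excess, is again at most $C^*$. Finally, when a goal is expanded within some threshold $T$, its cost satisfies $g\leq g+h=f\leq T\leq C^*$ (using $h\geq 0$), so the solution returned is optimal.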
Hope this helps,
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/16477