
[Solved]: Why is least fixed point (lfp) important in program analysis

Problem Detail: 

I am trying to get the big picture of why the least fixed point (lfp) is important in program analysis. For instance, abstract interpretation seems to rely on the existence of a least fixed point, and many research papers on program analysis focus heavily on computing one.

More specifically, the Wikipedia article on the Knaster–Tarski theorem mentions that least fixed points are used to define program semantics.

Why is it important? Any simple example would help me get the big picture.

EDIT

I think my wording was imprecise. I do not question the importance of the lfp. My exact (beginner) question is: how does computing the lfp help in program analysis? For example, why and how does abstract interpretation use the lfp? What happens if there is no lfp in the abstract domain?

Hopefully my question is more concrete now.

Asked By : Ram

Answered By : Andrej Bauer

Any form of recursion or iteration in programming is actually a fixed point. For instance, a while loop is characterized by the equation

while b do c done  ≡  if b then (c ; while b do c done) 

which is to say that while b do c done is a solution W of the equation

W  ≡  Φ(W) 

where Φ(x) ≡ if b then (c ; x). But what if Φ has many fixed points? Which one corresponds to the while loop? One of the basic insights of programming semantics is that it is the least fixed point.
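This fixed-point reading of a while loop can be written down directly. The following is a minimal sketch in Haskell, with hypothetical names (`while`, `phi` are mine, not standard library functions): the loop over some state type `s` is literally defined as a solution of `w = phi w`.

```haskell
-- A sketch: a while loop over a state of type s, defined as the
-- solution of the fixed-point equation w = phi w.
while :: (s -> Bool) -> (s -> s) -> (s -> s)
while b c = w
  where
    -- The fixed-point equation itself: w is phi applied to w.
    w = phi w
    -- One unfolding of the loop: if the guard holds, run the body
    -- and then the loop again; otherwise stop.
    phi x s = if b s then x (c s) else s

main :: IO ()
main = print (while (< 10) (+ 1) 0)  -- prints 10
```

Haskell's lazy recursive bindings let us write the equation `w = phi w` verbatim; the meaning the language assigns to it is exactly the least solution.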

Let us take a simple example, this time recursion. I will use Haskell. The recursive function f defined by

f :: a -> a
f x = f x

is the everywhere undefined function, because it just runs forever. We can rewrite this definition in a more unusual way (but it still works in Haskell) as

f :: a -> a
f = f

So f is a fixed point of the identity function:

f ≡ id f 

But every function is a fixed point of id. Under the usual domain-theoretic ordering, "undefined" is the least element. And indeed, our function f is the everywhere undefined function.
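Haskell in fact exposes a least-fixed-point operator directly: `fix` from `Data.Function`, satisfying `fix f = f (fix f)`. `fix id` is the least fixed point of the identity, i.e. the undefined value, while a more interesting functional has an honest function as its least fixed point:

```haskell
import Data.Function (fix)

-- fix f = f (fix f): Haskell's least-fixed-point operator.
-- fix id would be the least fixed point of id, i.e. bottom:
-- evaluating it loops forever, just like f above.

-- The factorial function is the least fixed point of the functional
-- \rec n -> if n == 0 then 1 else n * rec (n - 1).
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- prints 120
</imports>
```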

Added on request: in the comments the OP asked about the partial order used in the semantics of while loops (it was presumed to be a lattice, but it need not be). A more general question is: what is the domain-theoretic interpretation of a procedural language that can manipulate variables and has the basic control structures (conditionals and loops)?

There are several ways of doing this, depending on what exactly you want to capture, but to keep things simple, let us assume that we have a fixed number $n$ of global variables $x_1, \ldots, x_n$ that the program can read and update, and nothing else (no I/O, no exceptions, no allocation of new variables). In that case a program can be seen as a transformation of the initial state of the variables into the final state, or the undefined value if the program diverges.

So, if each variable holds an element of a set $V$, a program corresponds to a mapping $V^n \to V^n \cup \{\bot\}$: for every initial configuration $(v_1, \ldots, v_n) \in V^n$ of the variables, the program either diverges and yields $\bot$, or it terminates and produces the final state, which is an element of $V^n$. The set of all maps $V^n \to V^n \cup \{\bot\}$ is a domain:

  • we use the flat ordering on $V^n \cup \{\bot\}$ which has $\bot$ at the bottom and all the elements of $V^n$ "flat" above it, and then $V^n \to V^n \cup \{\bot\}$ is ordered pointwise,
  • the least element is the function which always maps to $\bot$, corresponding to the program while true do skip done (and many others),
  • every increasing sequence has a supremum, which is what guarantees that continuous maps on this domain have least fixed points (Kleene's fixed-point theorem).
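The supremum property is what makes the least fixed point computable as a limit: the loop's semantics is the supremum of the chain $\bot, \Phi(\bot), \Phi(\Phi(\bot)), \ldots$. Here is a small Haskell sketch of this Kleene iteration, with hypothetical names (`Sem`, `phi`, `approx` are mine); `Nothing` plays the role of $\bot$:

```haskell
-- The meaning of a program: a state either maps to a final state
-- or to Nothing, which plays the role of the undefined value ⊥.
type Sem s = s -> Maybe s

-- The least element of the domain: diverges on every input.
bottom :: Sem s
bottom _ = Nothing

-- One unfolding of the loop: phi x = "if b then (c ; x) else skip".
phi :: (s -> Bool) -> (s -> s) -> Sem s -> Sem s
phi b c x s = if b s then x (c s) else Just s

-- The k-th approximation phi^k(bottom): correct on inputs that
-- terminate within k unfoldings, undefined (Nothing) otherwise.
approx :: Int -> (s -> Bool) -> (s -> s) -> Sem s
approx k b c = iterate (phi b c) bottom !! k

main :: IO ()
main = do
  print (approx 3 (< 10) (+ 1) 0)   -- Nothing: 3 unfoldings are not enough
  print (approx 20 (< 10) (+ 1) 0)  -- Just 10: agrees with the lfp here
```

Each `approx k` sits below the next in the pointwise order, and the least fixed point is the supremum of the whole chain. Abstract interpretation performs the same iteration, but in an abstract domain chosen so that the chain stabilizes after finitely many steps.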

Just to give you an idea of how this works, the semantics of the program

x_1 := e 

would be the function which takes as input $(v_1, \ldots, v_n) \in V^n$, calculates the value $v_e$ of the expression e in state $(v_1, \ldots, v_n)$, and returns $(v_e, v_2, \ldots, v_n)$.
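As a concrete sketch (with hypothetical names: I represent a state as a list of `Int` values for $v_1, \ldots, v_n$, and an expression as a function from states to values):

```haskell
-- A state (v1, ..., vn) as a list of values.
type State = [Int]

-- Semantics of "x_1 := e": evaluate e in the current state and
-- replace the first variable with the result.
assignX1 :: (State -> Int) -> State -> State
assignX1 e s = e s : tail s

main :: IO ()
main = print (assignX1 (\s -> s !! 0 + s !! 1) [1, 2, 3])  -- prints [3,2,3]
```

Here the assignment is a total function $V^n \to V^n$; it never produces $\bot$, since evaluating an expression always terminates in this simple language.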


Question Source : http://cs.stackexchange.com/questions/46881
