
[Solved]: What is a brief but complete explanation of a pure/dependent type system?

Problem Detail: 

If something is simple, then it should be completely explainable with a few words. This can be done for the λ-calculus:

The λ-calculus is a syntactical grammar (basically, a structure) with a reduction rule (which means a search/replace procedure is repeatedly applied to every occurrence of a specific pattern until no such pattern exists).

Grammar:

Term = (Term Term) | (λ Var . Term) | Var 

Reduction rule:

((λ var . body) term) -> SUBS(body, var, term)

where `SUBS` replaces all free occurrences of `var` by `term` in `body`, avoiding name capture.

Examples:

(λ a . a)                             -> (λ a . a)
((λ a . (λ b . (b a))) (λ x . x))     -> (λ b . (b (λ x . x)))
((λ a . (a a)) (λ x . x))             -> (λ x . x)
((λ a . (λ b . ((b a) a))) (λ x . x)) -> (λ b . ((b (λ x . x)) (λ x . x)))
((λ x . (x x)) (λ x . (x x)))         -> never halts
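To make the reduction rule concrete, here is a minimal Haskell sketch of the calculus above. Everything in it (the constructor names `V`/`A`/`L`, the fuel-based `normalize`) is my own illustrative choice, not something the question prescribes:

```haskell
data Tm = V String | A Tm Tm | L String Tm deriving Show

-- Free variables of a term.
fv :: Tm -> [String]
fv (V x)   = [x]
fv (A f a) = fv f ++ fv a
fv (L x b) = filter (/= x) (fv b)

-- Pick a name based on x that avoids everything in `used`.
fresh :: String -> [String] -> String
fresh x used =
  head [ x ++ show i | i <- [0 :: Int ..], (x ++ show i) `notElem` used ]

-- SUBS(body, var, term): capture-avoiding substitution.
subst :: String -> Tm -> Tm -> Tm
subst x s (V y)   | y == x    = s
                  | otherwise = V y
subst x s (A f a)             = A (subst x s f) (subst x s a)
subst x s (L y b) | y == x        = L y b          -- x is shadowed: stop
                  | y `elem` fv s = let y' = fresh y (fv s ++ fv b)
                                    in L y' (subst x s (subst y (V y') b))
                  | otherwise     = L y (subst x s b)

-- One leftmost-outermost reduction step, if a redex exists.
step :: Tm -> Maybe Tm
step (A (L x b) a) = Just (subst x a b)
step (A f a)       = case step f of
                       Just f' -> Just (A f' a)
                       Nothing -> A f <$> step a
step (L x b)       = L x <$> step b
step (V _)         = Nothing

-- Reduction may never halt (last example above), so normalize with fuel.
normalize :: Int -> Tm -> Tm
normalize n t | n <= 0    = t
              | otherwise = maybe t (normalize (n - 1)) (step t)
```

For instance, `normalize 100 (A (L "a" (A (V "a") (V "a"))) (L "x" (V "x")))` yields `L "x" (V "x")`, matching the third example.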

While somewhat informal, one could argue this is informative enough for a normal human to understand the λ-calculus as a whole - and it takes 22 lines of markdown. I'm trying to understand pure/dependent type systems as used by Idris/Agda and similar projects, but the briefest explanation I could find was Simply Easy - a great paper, but one that seems to assume a lot of previous knowledge (Haskell, inductive definitions) that I don't have. I think something briefer and less rich could eliminate some of those barriers. Thus,

Is it possible to give a brief, complete explanation of pure/dependent type systems, in the same format I presented the λ-calculus above?

Asked By : MaiaVictor

Answered By : pigworker

Let's have a go. I'll not bother about Girard's paradox, because it distracts from the central ideas. I will need to introduce some presentational machinery about judgments and derivations and such.

Grammar

Term   ::=   (Elim)   |   *   |   (Var:Term)→Term   |   λVar↦Term

Elim   ::=   Term:Term   |   Var   |   Elim Term

The grammar has two mutually defined forms. "Terms" are the general notion of thing (types are things, values are things): they include * (the type of types), dependent function types, and lambda-abstractions. Terms also embed "eliminations" (i.e., "usages" rather than "constructions"): nested applications in which the thing ultimately in the function position is either a variable or a term annotated with its type.
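As a sketch of how this two-level syntax might look in Haskell (the constructor names are mine; `Embed` marks an elimination used as a term):

```haskell
data Term = Embed Elim           -- (Elim): an elimination used as a term
          | Star                 -- *
          | Pi String Term Term  -- (x:S)→T
          | Lam String Term      -- λx↦t
          deriving Show

data Elim = Ann Term Term        -- t : T
          | Var String           -- x
          | App Elim Term        -- e s
          deriving Show
```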

Reduction Rules

(λy↦t : (x:S)→T) s ↝ t[s:S / y] : T[s:S / x]

(t : T) ↝ t

The substitution operation t[e / x] replaces every occurrence of the variable x with the elimination e, avoiding name capture. To form an application that can reduce, a lambda term must be annotated by its type to make an elimination. The type annotation gives the lambda-abstraction a kind of "reactivity", allowing application to proceed. Once we reach the point where no more application is happening and the active t : T is being embedded back into the term syntax, we can drop the type annotation.
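Continuing the sketch from the datatypes above, here are the substitution operation and the two reduction rules. For brevity this version is not capture-avoiding: it assumes bound variables are already distinct from the free variables of the substituted elimination (the λ-calculus sketch earlier shows the renaming discipline in full):

```haskell
-- t[e / x]: a variable is replaced by an elimination.
-- NOT capture-avoiding: assumes bound names avoid the free names of e.
substT :: String -> Elim -> Term -> Term
substT x e (Embed f)  = Embed (substE x e f)
substT _ _ Star       = Star
substT x e (Pi y s t) = Pi y (substT x e s)
                             (if y == x then t else substT x e t)
substT x e (Lam y t)  = Lam y (if y == x then t else substT x e t)

substE :: String -> Elim -> Elim -> Elim
substE x e (Ann t ty) = Ann (substT x e t) (substT x e ty)
substE x e (Var y)    = if y == x then e else Var y
substE x e (App f s)  = App (substE x e f) (substT x e s)

-- The β-rule, at the root only (the structural closure described next
-- extends it to apply anywhere inside terms and eliminations):
-- (λy↦t : (x:S)→T) s ↝ t[s:S / y] : T[s:S / x]
stepE :: Elim -> Maybe Elim
stepE (App (Ann (Lam y t) (Pi x sS tT)) s) =
  Just (Ann (substT y (Ann s sS) t) (substT x (Ann s sS) tT))
stepE _ = Nothing

-- The second rule: drop the annotation once an elimination is
-- embedded back into term syntax.
dropAnn :: Term -> Term
dropAnn (Embed (Ann t _)) = t
dropAnn t                 = t
```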

Let's extend the ↝ reduction relation by structural closure: the rules apply anywhere inside terms and eliminations that you can find something matching the left-hand side. Write ↝* for the reflexive-transitive (0-or-more-step) closure of ↝. The resulting reduction system is confluent in this sense:

If s ↝* p and s ↝* q, then there exists some r such that p ↝* r and q ↝* r.

Contexts

Context   ::=   ε   |   Context, Var : Term        (ε is the empty context)

Contexts are sequences which assign types to variables, growing on the right, which we think of as the "local" end, telling us about the most recently bound variables. An important property of contexts is that it's always possible to choose a variable not already used in the context. We maintain the invariant that the variables ascribed types in the context are distinct.
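In the running Haskell sketch, a context can be a backwards list with the head as the local end, and the freshness property becomes a small search (again, my own illustrative encoding):

```haskell
-- The context "grows on the right" in the notation; here the head of
-- the list is the local end.
type Ctx = [(String, Term)]

-- lookup finds the most local binding first.
lookupVar :: Ctx -> String -> Maybe Term
lookupVar g x = lookup x g

-- It is always possible to choose a variable not already in the context.
freshFor :: Ctx -> String -> String
freshFor g x =
  head [ x ++ show i | i <- [0 :: Int ..], (x ++ show i) `notElem` map fst g ]
```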

Judgments

Judgment   ::=   Context ⊢ Term has Term   |   Context ⊢ Elim is Term

That's the grammar of judgments, but how to read them? For a start, ⊢ is the traditional "turnstile" symbol, separating assumptions from conclusions: you can read it informally as "says".

G ⊢ T has t

means that given context G, type T admits term t;

G ⊢ e is S

means that given context G, elimination e is given type S.

Judgments have interesting structure: zero or more inputs, one or more subjects, zero or more outputs.

INPUTS                   SUBJECT        OUTPUTS
Context ⊢ Term   has     Term
Context ⊢                Elim     is   Term

That is, we must propose the types of terms in advance and just check them, but we synthesize the types of eliminations.
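This input/subject/output reading fixes the shape of a checker in the running sketch: the checking judgment takes the type as an argument, while the synthesizing judgment returns it. As a sketch of the signatures (definitions follow the typing rules below):

```haskell
-- "G ⊢ T has t": the context and the type are inputs, so checking
-- only answers yes or no.
check :: Ctx -> Term -> Term -> Bool

-- "G ⊢ e is S": the type is an output, synthesized from the elimination.
infer :: Ctx -> Elim -> Maybe Term
```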

Typing Rules

I present these in a vaguely Prolog style, writing J -: P1; ...; Pn to indicate that judgment J holds if premises P1 through Pn also hold. A premise will be another judgment, or a claim about reduction.

Terms

G ⊢ T has t   -:   T ↝ R;   G ⊢ R has t

G ⊢ * has *

G ⊢ * has (x:S)→T   -:   G ⊢ * has S;   G, z:S ⊢ * has T[z / x]

G ⊢ (x:S)→T has λy↦t   -:   G, z:S ⊢ T[z/x] has t[z/y]

G ⊢ T has (e)   -:   G ⊢ e is T

Eliminations

G ⊢ e is R   -:   G ⊢ e is S;   S ↝ R

G, x:S, G' ⊢ x is S

G ⊢ f s is T[s:S / x]   -:   G ⊢ f is (x:S)→T;   G ⊢ S has s

G ⊢ t:T is T   -:   G ⊢ * has T;   G ⊢ T has t

And that's it!

Only two rules are not syntax-directed: the rule which says "you can reduce a type before you use it to check a term" and the rule which says "you can reduce a type after you've synthesized it from an elimination". One viable strategy is to reduce types until you've exposed the topmost constructor.
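Here is how those rules and that strategy might come together in the running sketch: a weak head normalizer exposes topmost constructors, a conversion test compares the synthesized type with the demanded one, and `check`/`infer` transcribe the rules clause by clause (same non-capture-avoiding caveat as before; a real checker would be more careful throughout):

```haskell
-- Weak head reduction: compute just enough to expose the top constructor.
whnf :: Term -> Term
whnf (Embed e) = case whnfE e of
                   Ann t _ -> whnf t       -- (t : T) ↝ t
                   e'      -> Embed e'
whnf t         = t

whnfE :: Elim -> Elim
whnfE (App f s) =
  case whnfE f of
    Ann (Lam y t) fT -> case whnf fT of
      Pi x sS tT -> whnfE (Ann (substT y (Ann s sS) t)    -- β-rule
                               (substT x (Ann s sS) tT))
      _          -> App (Ann (Lam y t) fT) s              -- stuck (ill-typed)
    f' -> App f' s
whnfE e         = e

-- Conversion: do two types agree? Compare weak head normal forms,
-- going under binders with a shared fresh name.
conv :: Ctx -> Term -> Term -> Bool
conv g a b = case (whnf a, whnf b) of
  (Star, Star)            -> True
  (Pi x s t, Pi x' s' t') ->
    conv g s s' &&
    let z = freshFor g x
    in conv ((z, s) : g) (substT x (Var z) t) (substT x' (Var z) t')
  (Lam x t, Lam x' t')    ->
    let z = freshFor g x   -- the Star entry below only reserves the name z
    in conv ((z, Star) : g) (substT x (Var z) t) (substT x' (Var z) t')
  (Embed e, Embed e')     -> convE g e e'
  _                       -> False

convE :: Ctx -> Elim -> Elim -> Bool
convE _ (Var x)   (Var x')    = x == x'
convE g (App f s) (App f' s') = convE g f f' && conv g s s'
convE g (Ann t _) (Ann t' _)  = conv g t t'
convE _ _         _           = False

check :: Ctx -> Term -> Term -> Bool
check g tT t = case (whnf tT, t) of        -- T ↝ R; check against R
  (Star, Star)         -> True             -- G ⊢ * has *
  (Star, Pi x s t')    ->                  -- G ⊢ * has (x:S)→T
    check g Star s &&
    let z = freshFor g x
    in check ((z, s) : g) Star (substT x (Var z) t')
  (Pi x s t', Lam y b) ->                  -- G ⊢ (x:S)→T has λy↦t
    let z = freshFor g x
    in check ((z, s) : g) (substT x (Var z) t') (substT y (Var z) b)
  (r, Embed e)         ->                  -- G ⊢ T has (e)
    case infer g e of
      Just s  -> conv g r s
      Nothing -> False
  _                    -> False

infer :: Ctx -> Elim -> Maybe Term
infer g (Var x)   = lookupVar g x          -- G, x:S, G' ⊢ x is S
infer g (Ann t tT)                         -- G ⊢ t:T is T
  | check g Star tT && check g tT t = Just tT
  | otherwise                       = Nothing
infer g (App f s) =                        -- G ⊢ f s is T[s:S / x]
  case whnf <$> infer g f of
    Just (Pi x sS tT) | check g sS s -> Just (substT x (Ann s sS) tT)
    _                                -> Nothing
```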

This system is not strongly normalizing (because of Girard's Paradox, which is a liar-style paradox of self-reference), but it can be made strongly normalizing by splitting * into "universe levels" where any values which involve types at lower levels themselves have types at higher levels, preventing self-reference.

This system does, however, have the property of type preservation, in this sense.

If G ⊢ T has t and G ↝* D and T ↝* R and t ↝ r, then D ⊢ R has r.

If G ⊢ e is S and G ↝* D and e ↝ f, then there exists R such that S ↝* R and D ⊢ f is R.

Contexts can compute by allowing the terms they contain to compute. That is, if a judgment is valid now, you can compute its inputs as much as you like and its subject one step, and then it will be possible to compute its outputs somehow so that the resulting judgment remains valid. The proof is a simple induction on typing derivations, given the confluence of ↝* stated above.

Of course, I've presented only the functional core here, but extensions can be quite modular. Here are pairs.

Term   ::=   ...   |   (x:S)*T   |   s,t

Elim   ::=   ...   |   e.head   |   e.tail

(s,t : (x:S)*T).head ↝ s:S

(s,t : (x:S)*T).tail ↝ t:T[s:S / x]

G ⊢ * has (x:S)*T   -:   G ⊢ * has S;   G, z:S ⊢ * has T[z / x]

G ⊢ (x:S)*T has s,t   -:   G ⊢ S has s;   G ⊢ T[s:S / x] has t

G ⊢ e.head is S   -:   G ⊢ e is (x:S)*T

G ⊢ e.tail is T[e.head / x]   -:   G ⊢ e is (x:S)*T
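In the running sketch, the same modularity shows up as extra constructors and clauses. Haskell needs them merged into the earlier definitions, so this is a reading aid rather than a standalone module, and the names `Sigma`, `Pair`, `Head`, `Tail` are mine:

```haskell
-- Extra constructors, merged into the earlier datatypes:
--   data Term = ... | Sigma String Term Term    -- (x:S)*T
--                   | Pair Term Term            -- s,t
--   data Elim = ... | Head Elim | Tail Elim     -- e.head, e.tail
-- (substT/substE/conv gain the corresponding structural clauses.)

-- Extra weak head reduction clauses for whnfE:
whnfE (Head e) = case whnfE e of
  Ann (Pair s _) (Sigma _ sS _) -> whnfE (Ann s sS)      -- ↝ s:S
  e'                            -> Head e'
whnfE (Tail e) = case whnfE e of
  Ann (Pair s t) (Sigma x sS tT)
     -> whnfE (Ann t (substT x (Ann s sS) tT))           -- ↝ t:T[s:S / x]
  e' -> Tail e'

-- Extra checking cases for check (inside its case expression):
--   (Star, Sigma x s t')      -> check g Star s &&
--                                let z = freshFor g x
--                                in check ((z, s) : g) Star
--                                         (substT x (Var z) t')
--   (Sigma x sS tT, Pair s t) -> check g sS s &&
--                                check g (substT x (Ann s sS) tT) t

-- Extra inference clauses for infer:
infer g (Head e) = case whnf <$> infer g e of
  Just (Sigma _ sS _) -> Just sS                         -- e.head is S
  _                   -> Nothing
infer g (Tail e) = case whnf <$> infer g e of
  Just (Sigma x _ tT) -> Just (substT x (Head e) tT)     -- is T[e.head / x]
  _                   -> Nothing
```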


Question Source : http://cs.stackexchange.com/questions/49888
