
Intro to Martin-Löf type theory

Problem Detail: 

What would be the best introduction to Per Martin-Löf's ideas about type theory? I've looked at some lectures from the Oregon PL summer school, but I'm still sort of puzzled by the following question:

What is a type?

I know what a set is, since you can define them by the usual ZF axioms and they have a very intuitive concrete model; just think of a basket filled with stuff. However, I've yet to see a reasonable definition of a type, and I was wondering if there is some source that would distill this idea for dummies.

Asked By : dst

Answered By : Gilles

A type is a property of computations. It's what you write on the right-hand side of a colon.

Let me elaborate on that. Note that the terminology isn't completely standard: some articles or books may use different words for certain concepts.

A term is an element of an abstract syntax that is intended to represent computation. Intuitively, it's a parse tree. Formally, it's a finite tree where the nodes belong to some alphabet. An untyped calculus defines a syntax for terms. For example, the (untyped) lambda calculus contains terms (written $M$, $N$, etc.) built from three kinds of nodes (sketched in code after the list):

  • variables, of arity 0 (a denumerable collection thereof), written $x$, $y$, etc.;
  • abstraction over a variable, of arity 1 (a denumerable collection thereof, with a bijection to variables), written $\lambda x. M$, etc.;
  • application, of arity 2, written $M \, N$.
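
As a concrete illustration (my own sketch, not part of the original answer), this syntax is just a small algebraic data type. The Haskell constructor names below are arbitrary choices:

    -- A sketch of untyped lambda-calculus terms as a Haskell data type.
    type Name = String

    data Term
      = Var Name       -- variable, arity 0
      | Lam Name Term  -- abstraction over a variable, arity 1 (binds Name in the body)
      | App Term Term  -- application, arity 2
      deriving (Show, Eq)

    -- Example: the term  \x. \y. x y
    example :: Term
    example = Lam "x" (Lam "y" (App (Var "x") (Var "y")))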

A term is a syntactic construction. A semantics relates terms to computations. There are many kinds of semantics, the most common being operational semantics (describing how terms can be transformed into other terms) and denotational semantics (describing terms by a transformation into another space, usually built from set theory).
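
To make the operational reading concrete, here is a minimal call-by-value evaluator for closed terms, again only an illustrative sketch. It uses naive substitution, which is safe here only because the substituted argument is always closed, so no variable capture can occur:

    -- Minimal call-by-value evaluation of closed lambda terms (sketch).
    type Name = String
    data Term = Var Name | Lam Name Term | App Term Term
      deriving (Show, Eq)

    -- subst x s t replaces free occurrences of x in t by s.
    -- Naive (not capture-avoiding); only safe when s is closed.
    subst :: Name -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- Reduce a closed term to a value (a lambda), if evaluation terminates.
    eval :: Term -> Term
    eval (App f a) = case eval f of
      Lam x b -> eval (subst x (eval a) b)
      f'      -> App f' (eval a)  -- stuck term; cannot arise from a closed, well-behaved input
    eval t = t

For instance, eval (App (Lam "x" (Var "x")) (Lam "y" (Var "y"))) returns Lam "y" (Var "y").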

A type is a property of terms. A type system for an untyped calculus describes which terms have which types. Mathematically, at the core, a type system is a relation between terms and types. More accurately, a type system is a family of such relations, indexed by contexts — typically, a context provides at least types for variables (i.e. a context is a partial function from variables to types), such that a term may only have a type in contexts that provide a type for all its free variables. What kind of mathematical object a type is depends on the type system.
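
The last condition is easy to state in code. In the sketch below (my own, continuing the Haskell illustration), a context is an association list from variables to types, and a term is only a candidate for typing in a context that binds all of its free variables:

    import Data.List (delete, union)

    type Name = String
    data Term = Var Name | Lam Name Term | App Term Term  -- as in the earlier sketch

    -- A context: a finite partial map from variables to types,
    -- here represented as an association list ('ty' is the type of types).
    type Context ty = [(Name, ty)]

    -- Free variables of a term.
    freeVars :: Term -> [Name]
    freeVars (Var x)   = [x]
    freeVars (Lam x b) = delete x (freeVars b)
    freeVars (App f a) = freeVars f `union` freeVars a

    -- A term may only be assigned a type in a context covering its free variables.
    coversFreeVars :: Context ty -> Term -> Bool
    coversFreeVars ctx t = all (`elem` map fst ctx) (freeVars t)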

Some type systems are described with types as sets, using notions of set theory such as intersection, union and comprehension. This has the advantage of resting upon familiar mathematical foundations. A limitation of this approach is that it doesn't allow reasoning about equivalent types.

Many type systems describe types themselves as terms in a calculus of types. Depending on the type system, this calculus may share its terms with the calculus describing computations or be a separate syntax. I'll use the phrase base term to refer to a term of the calculus that describes computation. For example, the simply typed lambda calculus uses the following calculus of types (written $\tau$, etc.; a code sketch follows the list):

  • base types, of arity 0 (a finite or denumerable collection thereof), written $A$, $B$, etc.;
  • function types (arrows), of arity 2, written $\tau_0 \rightarrow \tau_1$.
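
Continuing the illustrative Haskell sketch, the calculus of types of the simply typed lambda calculus is another small algebraic data type:

    -- Simple types (sketch): base types named by strings, and arrow types.
    data Type
      = Base String      -- base types A, B, ...
      | Arrow Type Type  -- function types  tau0 -> tau1
      deriving (Show, Eq)

    -- Example: (A -> B) -> A -> B
    exampleType :: Type
    exampleType = Arrow (Arrow (Base "A") (Base "B")) (Arrow (Base "A") (Base "B"))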

The relation between terms and types that defines the simply typed lambda calculus is usually defined by typing rules. Typing rules are not the only way to define a type system, but they are common. They work well for compositional type systems, i.e. type systems where the types of a term are built from the types of its subterms. Typing rules define a type system inductively: each typing rule states that, for any instantiation of the judgements above the horizontal line, if they all hold then the judgement below the line also holds (a rule with nothing above the line is an axiom). See "How to read typing rules?" for more details; "Does there exist a Turing complete typed lambda calculus?" may also be of interest.

For the simply typed lambda calculus, the typing judgement $\Gamma \vdash M : \tau$ means that $M$ has the type $\tau$ in the context $\Gamma$. I've omitted the formal definition of contexts. $$ \dfrac{x:\tau \in \Gamma}{\Gamma \vdash x : \tau}(\Gamma) \qquad \dfrac{\Gamma, x:\tau_0 \vdash M : \tau_1}{\Gamma \vdash \lambda x.M : \tau_0 \rightarrow \tau_1}(\mathord{\rightarrow}I) \qquad \dfrac{\Gamma \vdash M : \tau_0 \rightarrow \tau_1 \quad \Gamma \vdash N : \tau_0}{\Gamma \vdash M\,N : \tau_1}(\mathord{\rightarrow}E) $$
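
The three rules translate almost line by line into a type checker. The sketch below (my own) infers a type rather than merely checking the relation; to keep it syntax-directed it annotates lambda binders with their domain type (a Church-style presentation), which is a common variation on the unannotated rules above:

    -- Type inference for the simply typed lambda calculus (sketch).
    type Name = String

    data Type = Base String | Arrow Type Type
      deriving (Show, Eq)

    data Term
      = Var Name
      | Lam Name Type Term  -- \x:tau. M  (binder annotated with its type)
      | App Term Term
      deriving (Show, Eq)

    type Context = [(Name, Type)]

    -- typeOf ctx m returns Just tau when ctx |- m : tau is derivable.
    typeOf :: Context -> Term -> Maybe Type
    -- rule (Gamma): a variable has whatever type the context assigns it
    typeOf ctx (Var x)     = lookup x ctx
    -- rule (->I): type the body in the context extended with x : tau0
    typeOf ctx (Lam x t m) = Arrow t <$> typeOf ((x, t) : ctx) m
    -- rule (->E): the function's domain must match the argument's type
    typeOf ctx (App m n)   = do
      Arrow t0 t1 <- typeOf ctx m
      t0'         <- typeOf ctx n
      if t0 == t0' then Just t1 else Nothing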

For example, if $A$ and $B$ are base types, then $\lambda x. \lambda y. x\,y$ has the type $(A \rightarrow B) \rightarrow A \rightarrow B$ in any context (from bottom to top, apply $(\mathord{\rightarrow}I)$ twice, then $(\mathord{\rightarrow}E)$, and finally $(\Gamma)$ on each branch).
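
With the sketch above, the same derivation can be reproduced mechanically: typeOf [] (Lam "x" (Arrow (Base "A") (Base "B")) (Lam "y" (Base "A") (App (Var "x") (Var "y")))) evaluates to Just (Arrow (Arrow (Base "A") (Base "B")) (Arrow (Base "A") (Base "B"))), i.e. $(A \rightarrow B) \rightarrow A \rightarrow B$.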

It is possible to interpret the types of the simply typed lambda calculus as sets. This amounts to giving a denotational semantics for the types. A good denotational semantics for the base terms would assign to each base term a member of the denotation of all of its types.
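
Concretely, one standard such interpretation (a sketch; the notation $[\![\cdot]\!]$ is mine, not the answer's) chooses a set $[\![A]\!]$ for each base type $A$ and interprets arrows as full function spaces: $$ [\![\tau_0 \rightarrow \tau_1]\!] = \{\, f \mid f \text{ is a function from } [\![\tau_0]\!] \text{ to } [\![\tau_1]\!] \,\}. $$ A denotational semantics of base terms then maps every term of type $\tau$ to an element of $[\![\tau]\!]$.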

Intuitionistic type theory (also known as Martin-Löf type theory) is more complex than the simply typed lambda calculus, as it has many more elements in the calculus of types (and also adds a few constants to the base terms). But the core principles are the same. An important feature of Martin-Löf type theory is that types can contain base terms (this is what makes them dependent types): the universe of base terms and the universe of types are the same, though they can be distinguished by simple syntactic rules (usually known as sorting, i.e. assigning sorts to terms, in rewriting theory).
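
A standard illustration of a type containing a base term (my example, not from the original answer) is the type of vectors of a given length: for a type $A$ and a natural number $n$, $\mathrm{Vec}\,A\,n$ is the type of lists of exactly $n$ elements of $A$. The dependent function type ($\Pi$-type) then lets a result type mention an argument; a function building a vector of $n$ copies of its argument could be typed $$ \mathrm{replicate} : \Pi n{:}\mathbb{N}.\; A \rightarrow \mathrm{Vec}\,A\,n, $$ where the name $\mathrm{replicate}$ is purely illustrative: the type of the result depends on the value of the first argument.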

There are type systems that go further and completely mix types and base terms, so that there is no distinction between the two. Such type systems are said to be higher-order. In such calculi, types have types — a type can appear on the left-hand side of the $:$. The calculus of constructions is the paradigmatic example of higher-order dependent types. The lambda cube (also known as the Barendregt cube) classifies type systems in terms of whether they allow terms to depend on types (polymorphism — some base terms contain types as subterms), types to depend on terms (dependent types), or types to depend on types (type operators — the calculus of types has a notion of computation).
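
As a quick illustration of the three axes (again my own examples): polymorphism lets a base term abstract over a type, as in $\Lambda\alpha.\,\lambda x{:}\alpha.\,x \;:\; \forall\alpha.\,\alpha \rightarrow \alpha$; dependent types let a type mention a base term, as in $\mathrm{Vec}\,A\,n$ above; and type operators let types be computed from types, as in $\mathrm{List} : \mathsf{Type} \rightarrow \mathsf{Type}$.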

Most type systems have been given set-theoretical semantics, to tie them with the usual foundations of mathematics. "How are programming languages and foundations of mathematics related?" and "What is the difference between the semantic and syntactic views of function types?" may be of interest here. There has also been work on using type theory as a foundation of mathematics — set theory is the historic foundation, but it is not the only possible choice. Homotopy type theory is an important milestone in this direction: it describes the semantics of intensional intuitionistic type theory in terms of homotopy theory and constructs set theory within this framework.

I recommend Benjamin Pierce's books Types and Programming Languages and Advanced Topics in Types and Programming Languages. They are accessible to any undergraduate with no prerequisites other than basic familiarity with formal mathematical reasoning. TAPL describes many type systems; dependent types are the subject of chapter 2 of ATTAPL.

Best Answer from Computer Science Stack Exchange

Question Source : http://cs.stackexchange.com/questions/14674
