
Why are functional languages Turing complete?

Problem Detail: 

Perhaps my limited understanding of the subject is incorrect, but this is what I understand so far:

  • Functional programming is based on the lambda calculus, formulated by Alonzo Church.

  • Imperative programming is based on the Turing machine model, created by Alan Turing, Church's student.

  • The lambda calculus is as powerful as the Turing machine, meaning they are equivalent in computational power.

If functional programming is based on the lambda calculus and not the Turing machine, then why are some (or all) functional languages described as Turing complete, and not "lambda complete" or something like that? Is the term "Turing completeness" special in any way to Turing machines, or is it just a word?

Lastly, if imperative languages are based on the Turing machine, and computers are basically Turing machines without infinite memory, does that mean they perform better than functional programming languages on our modern PCs?

If that's the case, then what would be the equivalent of a lambda calculus machine?

I know this seems to be 3 separate questions, but they're all closely related, and each is dependent on the previous question being a valid question to begin with.

Asked By : Abdul

Answered By : babou

In a nutshell:

What characterizes imperative programming languages as close to Turing machines, and to usual computers such as PCs (themselves closer to random access machines (RAMs) than to Turing machines), is the concept of an explicit memory that can be modified to store intermediate results. It is an automata view of computation, with a concept of state (comprising both the finite-state control and the memory content) that can change as the computation proceeds.
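To make this concrete, here is a small illustration of my own (not part of the original answer), assuming nothing beyond plain Python: the pair of variables below is the machine's explicit memory, and each loop iteration overwrites it, which is exactly the state-changing view described above.

    # A hedged sketch of the "automata" view: compute factorial by
    # repeatedly overwriting explicit memory cells.
    def factorial_imperative(n):
        i, acc = 1, 1            # initial state of the memory
        while i <= n:            # finite-state control chooses the next step
            acc = acc * i        # destructive update of a memory cell
            i = i + 1            # another destructive update
        return acc

    print(factorial_imperative(5))   # prints 120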

Most other models are more abstract. Though they may express the computation as a succession of transformation steps applied to an original structure, these transformations take place in a sort of timeless universe of mathematical meanings. This may preserve properties, such as referential transparency, that can make mathematical analysis simpler. But it is more remote from natural physical models that rely on the concept of memory.
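As a small illustration of my own (not from the original answer), the first Python function below is referentially transparent, so any call can be replaced by its value; the second depends on hidden mutable memory, so it cannot:

    # Referentially transparent: the result depends only on the argument,
    # so square(3) can always be replaced by 9.
    def square(x):
        return x * x

    # Not referentially transparent: the result also depends on hidden,
    # mutable state, so identical calls return different values.
    counter = 0
    def next_id():
        global counter
        counter += 1
        return counter

    print(square(3), square(3))   # 9 9  -- identical calls, identical results
    print(next_id(), next_id())   # 1 2  -- identical calls, different results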

Thus there are no natural functional machines, except in a larger sense as explained below, since software is not really separable from hardware.

The reference to Turing as the yardstick of computability probably comes from the fact that his model, the Turing machine, was the closest to this physical-realizability constraint, which made it a more intuitive model of computation.

Further considerations:

There are many models of computation that were designed to capture, in the most general possible way, the concept of a computation. They include Turing machines (in many different flavors), the lambda calculus (in several flavors too), semi-Thue rewriting systems, partial recursive functions, and combinatory logic.

They all capture some aspects of the various techniques used by mathematicians to express or conduct computations. Most have been used to some extent as the basis of programming language designs (e.g. Snobol for rewriting systems, APL for combinators, Lisp/Scheme for the lambda calculus), and they can often be combined in diverse ways in modern programming languages.

One major result is that all these computation models were proved equivalent, which led to the Church-Turing thesis that no physically realizable model of computation can do more than any of these models. A model of computation is said to be Turing complete if it can be proved equivalent to one of these models, and hence equivalent to all of them.

The name could have been different. The choice of the Turing machine (TM) as the reference is probably due to the fact that it is the simplest of these models, mimicking closely (though simplistically) the way a human computes, and fairly easy to implement (in a limited, finite form) as a physical device, to such an extent that Turing machines have been built with Lego sets. The basic idea requires no mathematical sophistication. It is probably the simplicity and realizability of the model that gave it this reference position.
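To illustrate how little machinery is needed (my own sketch, not part of the original answer, with a made-up transition table), a one-tape Turing machine is just a transition table driving a read/write head over a tape:

    # A hedged sketch of a one-tape Turing machine.  The transition table
    # maps (state, symbol) to (new state, symbol to write, head move).
    # For brevity the tape only grows to the right.
    def run_tm(delta, tape, state="start", blank="_"):
        tape, pos = list(tape), 0
        while state != "halt":
            symbol = tape[pos] if pos < len(tape) else blank
            state, write, move = delta[(state, symbol)]
            if pos == len(tape):
                tape.append(blank)
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(tape)

    # Example: flip every bit, halt at the first blank.
    flip_bits = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }

    print(run_tm(flip_bits, "1011"))   # prints "0100_"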

At the time Alan Turing created his computing device, other proposals were on the table to serve as a formal definition of computability, a crucial issue for the foundations of mathematics (see Entscheidungsproblem). Turing's proposal was considered by the experts of the time as the one most convincingly encompassing known work on what calculability should be (see Computability and Recursion, R.I. Soare, 1996, section 3.2). The various proposals were proved equivalent, but Turing's was more convincing. [from comments by Yuval Filmus]

It should be noted that, from a hardware point of view, our computers are not Turing machines, but rather what are called random access machines (RAMs), which are also Turing complete.

Purely imperative languages (whatever that might mean) are probably the formalisms used for the most basic models, such as Turing machines, or the assembly language (skipping its binary coding) of computers. Both are notoriously unreadable and very hard to write significant programs in. Actually, even assembly language has some higher-level features to ease programming a bit, compared to direct use of machine instructions. Basic imperative models are close to the physical world, but not very usable.

This led quickly to the development of higher-level models of computation, which mixed in a variety of computational techniques, such as subprogram and function calls, naming of memory locations, scoping of names, quantification and dummy variables, already used in some form in mathematics and logic, and even very abstract concepts such as reflection (Lisp, 1958).

The classification of programming languages into programming paradigms such as imperative, functional, logic, and object-oriented is based on the preeminence of some of these techniques in the design of the language, and on the presence or absence of some computing features that enforce certain properties for programs or program fragments written in the language.

Some models are convenient for physical machines. Others are more convenient for a high-level description of algorithms, and which one is best may depend on the type of algorithm to be described. Some theoreticians even use non-deterministic specifications of algorithms, and even those can be translated into more conventional programming terms. But there is no mismatch problem, because we have developed a sophisticated compiler/interpreter technology that can translate each model into another as needed (which is also the basis of the Church-Turing thesis).

Now, you should never look at your computer as raw hardware. It does contain boolean circuitry that does very elementary processing. But much of it is driven by micro-programs inside the computer that you never get to know about. Then you have the operating system, which makes your machine appear even different from what the hardware does. On top of that you may have a virtual machine that executes byte code, and then a high-level language such as Pyva and Jathon, or Haskell, or OCaml, that can be compiled into byte code.

At each level you see a different computation model. It is very hard to separate the hardware level from the software level, and thus to assign a specific computational model to a machine. And since they are all inter-translatable, the idea of an ultimate hardware computation model is pretty much an illusion.

The lambda calculus machine does exist: it is a computer that can reduce lambda calculus expressions. And that is easily done.
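For instance, a normal-order beta reducer fits in a few dozen lines. The Python sketch below is my own illustration (not from the original answer); to stay short it assumes all bound variable names are distinct, so no capture-avoiding renaming is needed.

    # A hedged sketch of a "lambda calculus machine": terms are nested tuples
    #   ("var", x) | ("lam", x, body) | ("app", fun, arg)
    # and computation is repeated beta reduction in normal order.

    def subst(term, name, value):
        kind = term[0]
        if kind == "var":
            return value if term[1] == name else term
        if kind == "lam":
            if term[1] == name:                      # variable is rebound here
                return term
            return ("lam", term[1], subst(term[2], name, value))
        return ("app", subst(term[1], name, value), subst(term[2], name, value))

    def reduce_normal(term):
        while True:
            if term[0] == "app" and term[1][0] == "lam":   # beta redex at the head
                term = subst(term[1][2], term[1][1], term[2])
            elif term[0] == "app":
                fun = reduce_normal(term[1])
                if fun != term[1]:
                    term = ("app", fun, term[2])           # head changed, try again
                else:
                    return ("app", fun, reduce_normal(term[2]))
            elif term[0] == "lam":
                return ("lam", term[1], reduce_normal(term[2]))
            else:
                return term                                # a variable

    # (\x. x) applied to (\y. y) reduces to \y. y
    identity = ("lam", "x", ("var", "x"))
    print(reduce_normal(("app", identity, ("lam", "y", ("var", "y")))))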

About specialized machine architectures

Actually, complementing Peter Taylor's answer, and following up on the intertwining of hardware and software, specialized machines have been produced to be better adapted to a specific paradigm, with their basic software written in a programming language based on that paradigm.

These include, for example, the Lisp machines built for symbolic/functional programming, and the Prolog machines of the Japanese fifth-generation project built for logic programming.

Fundamentally, these are also imperative hardware structures, but mitigated with special hardware features or microprogrammed interpreters to better adapt to the intended paradigm.

Actually, hardware specialized for specific paradigms does not seem to have ever been successful in the long run. The reason is that the compiling technology to implement any paradigm on vanilla hardware became more and more effective, so that specialized hardware was not so much needed. In addition, hardware performance was improving fast, but the cost of improvement (including the evolution of basic software) was more easily amortized on vanilla hardware than on specialized hardware. Specialized hardware could not compete in the long run.

Nevertheless, and though I have no precise data on this, I would suspect that these ventures left some ideas that did influence the evolution of machines, memories, and instruction set architectures.


Question Source : http://cs.stackexchange.com/questions/44305
