A YouTube video I was watching explained the differences between Imperative and Functional programming by demonstrating how the numbers from 1 to 10 are summed up in Java and in Haskell respectively.
In Java, you must explicitly state each step and assign the result of each step to a variable - something like the following:

    int total = 0;
    for (int i = 1; i <= 10; i++) {
        total = total + i;
    }
    return total;
In Haskell, you can simply say:
    sum [1..10]
My question is: there obviously is something going on in the background of a Functional language, and that something must be some sort of Imperative process. It seems like Functional Languages are really just some sort of Imperative-Language API. For example, I can create part of a functional language by defining a method sum(int start, int end) in Java. Did I really create a new type of language right there, or did I just define a set of method calls that hide the imperative instructions from you?
I hope it's clear what I am struggling to understand.
Asked By : CodyBugstein
Answered By : Guy Coder
If we peel off the syntactic sugar on the front and the code generation on the back, and compare what happens in between when converting source to running code for imperative languages such as C or Java and for functional languages such as ML or OCaml, we will generally find the following differences in what, why, and how.
Mutable vs. immutable
With functional programming one tends to use immutable values, which means we don't have to worry about a value being changed by something external to our current function. Used correctly, this removes a whole class of problems related to side effects.
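For example, here is a minimal Haskell sketch (the identifiers are mine, purely for illustration) showing that "updating" a value simply builds a new one:

    -- Values are immutable: mapping over the list builds a new list
    -- and leaves the original untouched.
    original :: [Int]
    original = [1, 2, 3]

    doubled :: [Int]
    doubled = map (* 2) original   -- [2,4,6]; original is still [1,2,3]

    main :: IO ()
    main = print (original, doubled)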
Focus: Data versus function.
When one thinks of coding in an imperative style, one first thinks of the data structures and then of the methods they need; when working with functional programming, one first thinks of what functions are needed and then designs the data types to support them. Most of the data types are either lazy lists (think streams or infinite lists) or discriminated unions. When a discriminated union is recursive, you have instantly created a tree or graph without having to write any of the walking code.
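For instance, a minimal Haskell sketch (the Tree type and its functions are mine, just for illustration): defining a recursive discriminated union gives you a tree, and pattern matching walks it with no separate traversal machinery.

    -- A recursive discriminated union: the type definition is the tree.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Sum every value stored in the tree; the recursion mirrors the type.
    total :: Num a => Tree a -> a
    total Leaf         = 0
    total (Node l x r) = total l + x + total r

    main :: IO ()
    main = print (total (Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)))   -- prints 6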
Generics/Parametric polymorphism
This one is interesting and, if I have my facts correct, was invented with functional programming and then transplanted to imperative programming. So if you like generics, thank the functional language designers.
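As a rough illustration in Haskell (the function name is my own), a single definition works for every element type, much as a generic method would in Java:

    -- One definition, usable at any pair of types.
    swap :: (a, b) -> (b, a)
    swap (x, y) = (y, x)

    main :: IO ()
    main = do
      print (swap (1 :: Int, "one"))   -- ("one",1)
      print (swap (True, 'c'))         -- ('c',True)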
Referential transparency/Parallelization
Because of referential transparency, a pure expression always yields the same value, so independent computations can be evaluated in any order, and functional code can be ported to parallel computing more easily.
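As a rough sketch of the idea, assuming the Haskell parallel package (Control.Parallel.Strategies) is available; the function names are mine:

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- expensive is pure, so evaluating the calls in parallel cannot change
    -- the result, only (hopefully) the running time.
    -- Build with -threaded and run with +RTS -N to use several cores.
    expensive :: Int -> Int
    expensive n = sum [1 .. n * 10000]

    main :: IO ()
    main = print (sum (parMap rdeepseq expensive [1 .. 200]))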
Higher-order functions/compositionality
Since functions can create and return new functions, building a new function on top of other functions is as easy as writing a new expression, rather than writing an entire new method. This leads to morphisms, which are very useful when the problem you are solving can be expressed with math. Doing set transformations (think SQL queries and updates) is so much easier with functional programming. As Wandering Logic noted, this is where functional programming languages excel.
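A small Haskell sketch (function names are mine) of building new functions by composing existing ones:

    -- Assembled from sum, map and (^) with composition; no new method body.
    sumOfSquares :: [Int] -> Int
    sumOfSquares = sum . map (^ 2)

    -- A higher-order function: takes a function, returns a new one.
    applyTwice :: (a -> a) -> a -> a
    applyTwice f = f . f

    main :: IO ()
    main = do
      print (sumOfSquares [1 .. 10])   -- 385
      print (applyTwice (+ 3) 10)      -- 16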
Typing: Static versus inference.
Since the types are inferred rather than declared by the programmer while writing, more checks can be made to ensure the correctness of the code, and functions will often come out generic rather than tied to one specific type.
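A tiny Haskell sketch (the name pairWith is mine): no type annotation is written, yet the compiler infers the most general, generic type.

    -- No signature given; GHC infers roughly
    --   pairWith :: (t -> b) -> t -> (t, b)
    -- so the function is generic over both types involved.
    pairWith f x = (x, f x)

    main :: IO ()
    main = do
      print (pairWith length "hello")   -- ("hello",5)
      print (pairWith negate 3)         -- (3,-3)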
Pattern Matching vs switch statement
When you combine pattern matching with discriminated unions, your matches are checked to ensure you have covered every outcome. How many times have you had a run-time error because you missed a case in a switch statement?
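A minimal Haskell sketch (the type and names are mine): with exhaustiveness warnings enabled, leaving out a constructor is flagged at compile time rather than discovered at run time.

    {-# OPTIONS_GHC -Wincomplete-patterns #-}

    data Colour = Red | Amber | Green

    describe :: Colour -> String
    describe Red   = "stop"
    describe Amber = "get ready"   -- delete this case and GHC warns the match is incomplete
    describe Green = "go"

    main :: IO ()
    main = putStrLn (describe Green)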
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/18570