As a still-learning programmer, can someone point me towards the ELI5 (Explain Like I'm Five) version of this? Some of the first questions that enter my mind after reading this intro --
Why is this relevant? Why do we as programmers want to know about pure functional programming? What are its advantages? What's the point? This seems to add complexity. Etc.
I know there are good answers to these, but this intro has done little more than whet my appetite a bit...
When you say "add complexity", I think you mean "add difficulty". For example, a language that has only one-argument functions and builds multi-argument functions on top of them is simpler than a language with multi-argument functions built in.
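A quick OCaml sketch of that idea (the names are mine, purely for illustration):

```ocaml
(* A "two-argument" function is really a one-argument function
   that returns another one-argument function. *)
let add = fun x -> fun y -> x + y

(* OCaml's usual multi-argument syntax is just sugar for the nested form. *)
let add' x y = x + y

(* Partial application falls out for free from the one-argument view. *)
let increment = add 1

let () =
  assert (add 2 3 = 5);
  assert (add' 2 3 = 5);
  assert (increment 41 = 42)
```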
Simplicity is hard -- and pure functional programming is a lot about simplicity, which does make it hard -- hard to learn, and hard to figure out how to tackle various problems.
But the simplicity has many rewards. Mathematical simplicity makes a lot of mathematical deductions tractable -- this is what is typically referred to as "ease of reasoning" about code.
Referential transparency adds a whole lot of ease to the reasoning process.
The pure functional approach also means that type safety goes much further. If you forget to hook up your functions together properly (forget an input, ignore an output) you're likely to get warnings/errors. In the imperative approach, if you forget a side-effecting statement that adds another input to some code, the compiler cannot help you.
Imagine testing -- imperative code requires tests to build up a mock world as input for the code, and then inspect the mock world that results as an output from the code.
Testing a referentially transparent function requires just calling it with differing inputs. It makes property-testing possible, too!
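For instance, here's a minimal hand-rolled property check in OCaml (a real project would use a library like QCheck; the generator and helper names here are invented for illustration). It only works because the function under test is pure: same input, same output, no world to mock.

```ocaml
(* Run a property over [trials] generated inputs; true if it always holds. *)
let check_property ~trials gen prop =
  let rec go i = i >= trials || (prop (gen i) && go (i + 1)) in
  go 0

(* Deterministically generate lists of varying length and contents from a seed. *)
let gen_list i = List.init (i mod 17) (fun j -> (i * 31 + j * 7) mod 100)

(* Property: reversing a list twice is the identity. *)
let rev_rev_id xs = List.rev (List.rev xs) = xs

let () = assert (check_property ~trials:1000 gen_list rev_rev_id)
```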
Imagine composition -- pure function composition of two functions composes the meanings of those functions. If those two functions work, so will the composed one.
Try to combine two pieces of imperative code -- that's not necessarily true at all, one might mutate data that's in use by the other. Aliasing issues may arise, etc.
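To make the composition point concrete, a small OCaml sketch (function names are mine):

```ocaml
(* Composing two pure functions yields a function whose meaning is
   exactly the composition of their meanings; nothing can be broken
   by hidden mutation or aliasing between the two. *)
let compose f g x = f (g x)

let double x = 2 * x
let succ_then_double = compose double succ

let () =
  assert (succ_then_double 20 = 42);
  (* The composite behaves identically no matter how often or in what
     order it is called; there is no shared state to corrupt. *)
  assert (succ_then_double 20 = double (succ 20))
```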
So functional code is easier to reason about, more composable, easier to test, and gets more safety from type checking. It is harder to learn to use, and harder to figure out how to use it to solve certain problems. With time, however, more and more problems yield to the pure functional programming approaches.
I would like to add that recursive calls can be nicer than loops, when multiple conditions or branching are possible in the loop body.
An equivalent implementation with while loops, if conditions, continues, and breaks is usually harder to design correctly than a good recursive function.
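A small OCaml example of the kind of branching loop I mean (the function and its rules are invented for illustration):

```ocaml
(* Sum the positive numbers in a list, stop early at a 0 sentinel,
   and skip negatives: three distinct branches that would need a
   while loop with a break and a continue in imperative style. *)
let rec sum_until_zero = function
  | [] -> 0
  | 0 :: _ -> 0                                   (* plays the role of "break" *)
  | x :: rest when x < 0 -> sum_until_zero rest   (* plays the role of "continue" *)
  | x :: rest -> x + sum_until_zero rest

let () =
  assert (sum_until_zero [1; -2; 3; 0; 99] = 4);
  assert (sum_until_zero [5; 5] = 10)
```

Each branch is a self-contained case; there is no loop state to get subtly wrong across iterations.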
Functional programming encourages use of small simple datatypes. Tuples instead of small structs. Lists instead of more complex container types. First-class functions instead of classes.
However, IMO 'purity' is a bit of a buzzword when applied to functional programming. You may disagree, but that's how I feel about it.
I think recursive solutions are often very nice indeed. But at worst, recursive solutions are sometimes worse than structured loops, because recursion can be used very much like a free-style goto, allowing your code to have even less restrictive structure than loops. That's why directly recursive code is usually discouraged and reusing recursive combinators is encouraged.
To resolve this, a nice suggestion is to use the term "denotative programming". That is, programming where types and values have mathematically simple denotations that predictably compose exactly according to the syntactic composition being used.
Tail-recursive calls are essentially gotos, but a little more well behaved (if you try to put state into explicit parameters instead of mutable variables).
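For example, in OCaml (a hypothetical sum, just to show the shape):

```ocaml
(* A tail call is a goto with arguments: all the loop state lives in
   explicit parameters rather than in mutable variables. *)
let sum_to n =
  let rec loop acc i =              (* acc and i play the role of mutable locals *)
    if i > n then acc
    else loop (acc + i) (i + 1)     (* tail call = jump back to the top *)
  in
  loop 0 1

let () = assert (sum_to 100 = 5050)
```

The compiler turns the tail call into a jump, so this runs in constant stack space, exactly like the equivalent while loop.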
> I would like to add that recursive calls can be nicer than loops
Having a good library of higher-order functions that encapsulate common recursion patterns is a lot nicer than doing recursion by hand, or having to read code where someone did recursion by hand and you have to not only figure out how the recursion works, but whether the author did anything wrong.
Higher-order functions are control structures, and we should encourage their use for all the same reasons we encourage the use of while loops in preference to people building their loop structures by hand out of gotos and conditional gotos.
As another nice effect, higher-order functions (or as we call them in their use as control structures: combinators) unify data structure and control structures. E.g. folding over a list is equivalent to various loops, the Monad instance of Maybe represents simple exception handling, trees and their traversal are identifiable in some sense, etc.
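A rough OCaml rendering of that idea, using fold_left and Option (OCaml's counterpart to Haskell's Maybe); the helper names are mine:

```ocaml
(* fold_left is the reusable combinator behind "loop over a list":
   the same traversal expresses a sum, a length, and a reverse. *)
let sum xs = List.fold_left ( + ) 0 xs
let length xs = List.fold_left (fun n _ -> n + 1) 0 xs
let rev xs = List.fold_left (fun acc x -> x :: acc) [] xs

(* Option.bind plays the role of simple exception handling:
   the first None short-circuits the rest of the computation. *)
let safe_div x y = if y = 0 then None else Some (x / y)
let pipeline x = Option.bind (safe_div 100 x) (fun q -> safe_div q 2)

let () =
  assert (sum [1; 2; 3] = 6);
  assert (length [1; 2; 3] = 3);
  assert (rev [1; 2; 3] = [3; 2; 1]);
  assert (pipeline 5 = Some 10);
  assert (pipeline 0 = None)
```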
At least in lazy, statically typed functional languages, like e.g. Haskell.
As someone who only discovered functional programming this year (and now do almost all my programming in OCaml), I would say that the biggest revelation that I came to (and didn't really appreciate until I'd studied the subject for a while) was the lack of mutability in (pure) functional programming. This allows you to maintain invariants and makes it much easier to reason about your code, which has huge advantages when you're working with concurrency or parallelism.
Here's one of my favorite examples. Say we have a set s and an element x, as in the following OCaml-style code:

    let s = Set.empty in
    let s = Set.add x s in
    f ();
    if Set.mem x s then ...

If our code is purely functional, then it's impossible that x is no longer in s after the call to f (). However, if we're using mutability, then maybe our function f removes x from s. If that's the case, we really can't be sure what will happen when we reach the if-expression.
This is certainly one of my favorite properties of functional programming: there is simply less to worry about. Everything is either local or explicit--distant parts of my code cannot implicitly affect each other.
However, for me, the real realization was this combined with the fact that these languages also manage to be more expressive than the imperative ones I was using before. This might seem a bit counter-intuitive--after all, it feels like they give you less power, not more--but it has been the main draw to FP for me.
So the code is simpler, there is less to worry about, and it's more expressive. What's not to like?
Without understanding functional programming, you can't invent MapReduce, the algorithm that makes Google so massively scalable. The terms Map and Reduce come from Lisp and functional programming. MapReduce is, in retrospect, obvious to anyone who remembers from their 6.001-equivalent programming class that purely functional programs have no side effects and are thus trivially parallelizable.
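A toy sketch of the idea in OCaml (this is nothing like Google's actual implementation, just the shape of it; the function names are mine):

```ocaml
(* A toy "MapReduce": because the map step is pure, each chunk could be
   processed on a different machine with no coordination needed. *)
let map_reduce ~mapper ~reducer ~init chunks =
  chunks
  |> List.map (List.map mapper)               (* map phase, chunk by chunk *)
  |> List.map (List.fold_left reducer init)   (* partial reductions per chunk *)
  |> List.fold_left reducer init              (* final combine *)

(* Word-count-style example: total characters across chunks of strings. *)
let () =
  let chunks = [["foo"; "ab"]; ["x"]] in
  let total = map_reduce ~mapper:String.length ~reducer:( + ) ~init:0 chunks in
  assert (total = 6)
```

The per-chunk work is independent precisely because mapper and reducer are side-effect-free.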
Different languages are good at different things. Some are good for pattern matching, others are great for describing algorithms, others fit well into a specific OS ecosystem, others are good at manipulating specific data structures (such as vectors, matrices, graphs, etc.), and so on.
The most powerful part about pure functional programming is that you get more powerful functions. Because your functions don't change the state of data, you can confidently combine them into new functions, producing a sort of chain effect. This can lead to very concise programs, since there is no need to hold intermediate variables (i, count, temp, ...) anywhere. What you sometimes end up with are one-liners that -- if the functions are named well enough -- explain the entire logic of the program.
There are certain problems that lend themselves well to the functional style, mainly those where you are piping a piece of data through various functions and applying different transformations to it. It's worth learning at least a couple of functional languages so you're not applying a hammer where you need a wrench.
> There are certain problems that lend themselves well to the functional style, mainly those where you are piping a piece of data through various functions and applying different transformations to it.
I don't quite understand this - how does this fit into the 'immutability' of fp? So functions can mutate data that goes into them, but they can't maintain internal state?
In FP, functions shouldn't mutate data structures that were passed by reference as input; instead, any input data should be copied before mutations are performed on it, to avoid changes to external state.
This is how e.g. Array.prototype.filter() works in JavaScript:
    var numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
    var evenNumbers = numbers.filter(function (number) {
      return number % 2 === 0;
    });
I would like to add that it is perfectly possible to write purely functional programs that do mutation. You just need to be explicit about it.
Many purely functional data structures also have advanced implementations that need much less copying than "immutable" would at first make you think, since two values can share unchanged structure.
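You can see the sharing directly in OCaml, where (==) is physical (pointer) equality:

```ocaml
(* "Immutable" does not mean "copy everything": prepending to a list
   allocates one new cell and shares the entire old list as its tail. *)
let xs = [2; 3; 4]
let ys = 1 :: xs        (* ys is a single new cell pointing at the old xs *)

let () =
  (* The tail of ys *is* xs, not a copy of it. *)
  assert (List.tl ys == xs);
  (* And xs itself is untouched. *)
  assert (xs = [2; 3; 4])
```

Persistent trees and maps take this further: an update copies only the path from the root to the changed node and shares everything else.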
In F#, for example, you can write:
    let result =
        list1
        |> List.filter (fun e -> ...)
        |> List.map (fun e -> ...)
        |> ....
no mutation, but you are piping the intermediate result to the next operation. In C# you could achieve a similar result by using the dot operator instead of the forward pipe |> one.