Why functional programming?

The canonical answer to that question is probably “Why Functional Programming Matters”, but here’s a specific example that makes the case nicely.

Neil Mitchell is working on Supero, an optimizing compiler for Haskell which includes some ideas from supercompilation. But that’s not important right now.

What is important is the technique Mitchell uses in the blog post at the second link above. Algebra. As in junior-high-Algebra-I-algebra. The old “if a = b then you can substitute either one for the other” kind of Algebra.

Functional programming is based on defining functions.

  def square(i: Int) = i * i

In pure functional programming, where side-effects are not allowed (or at least kept securely behind bars), you can treat that definition as an equation; wherever you see square(foo) you can rewrite it as (foo * foo) and vice versa. This applies even when foo is an arbitrarily complex expression (don’t worry about operator precedence or extra parentheses right now; that’s not the point).
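
Here’s the same idea as a quick, self-contained Haskell sketch (my restatement of the Scala definition above, not code from Mitchell’s post):

  -- The definition above, restated in Haskell; being pure, it is an
  -- equation we are free to rewrite with, in either direction.
  square :: Int -> Int
  square i = i * i

  -- square (2 + 3) can always be replaced by (2 + 3) * (2 + 3), and vice versa.
  main :: IO ()
  main = print (square (2 + 3) == (2 + 3) * (2 + 3))  -- prints True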

The point is that you can’t do that with confidence for impure (i.e., having side-effects) languages. Consider trying that replacement in something as simple as…

  n = square(++i);

…which clearly is not the same as…

  n = ++i * ++i;

…at least in most cases. Are there any special cases where it’s the same? Would the answer be different if the enclosed expression had been i++ instead?

If you even had to think about it for one millisecond (or, like me, thought “That’s ugly! I’ll think about it later!”) that’s exactly the point. In a pure language the question can’t even come up. You’re free to do the rewriting, either way, any time.

(Fair warning: You don’t need to know Haskell to read Mitchell’s post, but it helps. Even if you don’t, it’s worth the little bit of time it takes to work through his example. The cheat sheet at the bottom of this post is a quick introduction for non-Haskell-speaking readers, or non-Haskell-reading speakers.)

Mitchell takes a simple task, breaking a string into a list of words, and shows a simple and obvious function for the task. However, that simple and obvious function ends up with redundant “is it a space?” tests against some characters. Mitchell does a bit of refactoring, using simple substitute-equals-for-equals steps, to track down and eliminate the redundancy, resulting in an equivalent (still simple) function that runs faster.
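
The full derivation is in Mitchell’s post. Just to suggest the flavor of the destination (this is my own hedged sketch, not Mitchell’s code), notice that once dropWhile isSpace has done its work, the first remaining character is already known not to be a space, so there is no need for break to test it again:

  import Data.Char (isSpace)

  -- A sketch only, not Mitchell's actual result: c has already passed the
  -- "not a space" test performed by dropWhile, so it is consed onto the word
  -- directly instead of being re-examined by break.
  words' :: String -> [String]
  words' string = case dropWhile isSpace string of
      []     -> []
      c : cs -> (c : word) : words' rest
          where (word, rest) = break isSpace cs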

All of this is based on the facts that:

  • more powerful functions can be composed out of simpler ones, and
  • pure functions ensure that the obvious substitutions work.

For me this makes a powerful demonstration of the benefits of the functional approach, whether I’m using Haskell, Scala, or even a non-FP language such as Java. (Not only does functional programming matter; knowing why functional programming matters matters as well.)


Cheat sheet

For newcomers to Haskell notation, here’s a bit of explanation. Please don’t hold the length (or any shortcomings) of my explanation against Haskell. I’m assuming a reader who’s never seen Haskell before (or any similar language), and so I say something about almost every token. Feel free to skip the obvious, with my apologies.

Mitchell’s analysis works on this original function…

  words :: String -> [String]
  words string = case dropWhile isSpace string of
                     [] -> []
                     s -> word : words rest
                         where (word, rest) = break isSpace s

…which breaks a string into a list of words (strings).

The special-character soup isn’t really that bad, if you use your Captain Zowie decoder ring. It helps to know three facts:

  • indentation shows nested structure, instead of using curly braces,
  • function application doesn’t require parentheses, and
  • a Haskell String can be considered as a list of characters (see the quick check just below).
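
As a quick check of that last point, a String literal and the corresponding list of Char are the very same value:

  ghci> "abc" == ['a', 'b', 'c']
  True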

The first line…

  words :: String -> [String]

…declares the type of words as a function from string to list of strings. Specifically:

  • the double-colon can be read “has type”,
  • the arrow indicates a function taking the type on the left, yielding the type on the right, and
  • the square brackets on the right read “list of”.
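
You can confirm the declaration in GHCi (the interactive Haskell environment) with the :type command, which reports the type of any expression:

  ghci> :type words
  words :: String -> [String]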

The second line…

  words string = case dropWhile isSpace string of

…starts the definition of words, using a parameter named string (which we already know is of type String from the first line). The phrase…

  dropWhile isSpace string

…returns a left-trimmed version of string. It applies isSpace to successive elements of string, repeating as long as the result is true, and returns what’s left (either because there’s nothing left or because a character is found for which isSpace is false). It does so without modifying the original string, just as String#substring does in Java. That left-trimmed (sub-)string is then subjected to a case analysis, by way of pattern matching.
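
A couple of GHCi examples (isSpace comes from the Data.Char module):

  ghci> import Data.Char (isSpace)
  ghci> dropWhile isSpace "   now is the time"
  "now is the time"
  ghci> dropWhile isSpace "   "
  ""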

Remember that the result of left-trimming was produced either because there were no characters left, or because a non-space character was found. Therefore, there will be two cases to consider. The line reading…

  [] -> []

… reads “if you have an empty list (of characters) then return an empty list (of Strings)”. We (and the Haskell compiler!) can infer the parenthesized types because we know that trimming a string produces the String (list of characters) being inspected, and the type declaration on the first line says that the function produces a list of String.
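
Concretely, both an empty string and an all-space string land in this case:

  ghci> words ""
  []
  ghci> words "   "
  []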

The last two lines address the case in which there actually were some characters left after trimming.

  s -> word : words rest
      where (word, rest) = break isSpace s

For a non-empty string s… Wait! What’s s and how do we know it’s a non-empty string? Remember that we’re in a case (glance back at the original above to see the indentation) and we’ve already eliminated the possibility that the trimmed string we’re examining is empty. So the value in hand must be non-empty. And pattern matching gives us the ability to name things on-the-fly, more or less the equivalent of defining a local variable in e.g. Java. So, as I was saying before I so rudely interrupted myself…

For a non-empty string s, the result is a word followed by the words from the rest of the string, where those two things (word and rest) are constructed by breaking s at the first space (the first character for which the function isSpace returns true). The single-colon is often read as “cons”, in homage to Lisp. It constructs a list (of some type) by gluing a value (of the correct type) onto the beginning of a list (of the same type). Because word is a String and words rest is a list of String, the entire expression word : words rest is also a list of String.
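
One step of that, spelled out in GHCi:

  ghci> import Data.Char (isSpace)
  ghci> break isSpace "now is the time"
  ("now"," is the time")
  ghci> "now" : words " is the time"
  ["now","is","the","time"]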

Notice that both dropWhile and break have two arguments: another function (isSpace in both uses here) and a list. This works because dropWhile and break are higher-order functions, which take functions as arguments and use them whenever needed. If you cross your eyes and squint a little, this begins to look like design patterns in OO.
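
Nothing ties them to isSpace; any function from a character to a Boolean will do:

  ghci> dropWhile (== '#') "### a comment"
  " a comment"
  ghci> break (== ',') "alpha,beta,gamma"
  ("alpha",",beta,gamma")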



Comments

  • augustss  On 2008-03-27 at 09:27

    The “and vice versa” is only true in general for non-strict languages.

  • Pete  On 2008-03-27 at 09:55

    Thanks, this clears up much of the mystery around functional programming for me. I’ve read many such articles, but they rarely bother to go through the basics or describe the punctuation, instead presenting code that is virtually impossible to read unless you already know enough not to need the article — thanks for breaking the trend!

    I see two problems, though:

    1. There’s a huge reliance on cryptic punctuation in many of these functional languages. That really hurts the readability of the code, in my opinion.

    2. Allowing you to use functions so flexibly would seem to make it much harder to write well-named functions. For example, “dropWhile isSpace string” makes sense to me, but “break isSpace s” is somewhat confusing. I’d gladly sacrifice some of the flexibility (which is effectively how easy it is to write) to make my code easier to read.

    Are there any functional programming languages that might get around those issues to some extent? I’m obsessed with readability right now and don’t mind a little extra typing if it makes sense (think Python over Ruby, in terms of philosophy — explicit rather than concise).

  • joelneely  On 2008-03-27 at 20:07

    @Pete: Thanks for the kind words! I was amused that the “footnote” ended up longer than the main post, but if it helped clarify the point then it was worth it.

    With respect to your numbered points:

    1. Too verbose can be as bad as too symbolic (think COBOL ;-). I think that familiarity plays a big role as well. The first time I saw C (back in the days of Algol, Fortran, and Pascal) I found its notation strange, but these days everybody takes its cryptic symbols ({, }, ++, --, ?:, etc.) for granted. And let’s not even talk about some of the C++ idioms!

    2. Readability is certainly a crucial issue; code is read many more times than written. A name such as breakOnFirst might be more obvious at first glance than break, but functional language designers often seem to share with Unix users a dread of Carpal Tunnel Syndrome.

    Every functional language that I’ve seen (and I certainly haven’t seen them all!) has bits of notation that take a bit of getting used to. But that’s not surprising, given that they’re trying to express different ideas than conventional imperative or OO languages. As functional languages continue to get more visibility, I suspect we’ll all become more used to a variety of notations.

    Finally, hybrid languages such as Scala may address at least part of your concerns; beginning with more familiar notation and gradually adopting more of the “interesting” language features may be easier than jumping into the deep end with Haskell or OCaml.

  • joelneely  On 2008-03-27 at 20:14

    @augustss: Thanks for the feedback. I was trying to focus on the presence/absence of side effects, and side-stepped parameter-passing mechanisms in the interest of brevity.

  • Wayne  On 2008-03-28 at 23:48

    I was reading this blog, then the comments, finally noticed the last comment was from joelneely, and realized I know you. Haha. The probability of that occurring when you’re browsing dzone and reading random blogs is actually pretty high when you know a bunch of people and many of them blog. 😉

    Maybe you can incorporate some functional programming content into your presentation in May… if you don’t have enough already just talking about JPR, that is.

  • joelneely  On 2008-03-29 at 05:51

    @Wayne: Actually, FP was a big topic at JPR this year, so it would be hard to leave it out! 🙂
