Category Archives: Mathematics

On closed range boundaries

Programmers often need to talk about ranges of values (whether or not the language at hand supports the concept explicitly). For example, given the Java array

String[] streetLines = new String[3];

an index i must satisfy

0 ≤ i < 3 (in normal Mathematical notation)

or

0 <= i && i < 3 (in Java notation)

to be a valid index on streetLines. But how does one read those expressions aloud?

I learned to read <= as “at most” and >= as “at least” from Dijkstra. Those readings have at least two benefits:

  1. They are quite natural. For example, the assertion

    there are at least six eggs left in the carton

    is at least as obvious in normal speech as the equivalent phrase

    there are six or more eggs in the carton

    and is probably much more obvious than other equivalent phrases, such as

    the number of eggs left in the carton is greater than or equal to six

    or

    there are more than five eggs left in the carton

    even though we’ve seen the equivalent of those in code many times.
     

  2. They avoid introducing extraneous logical operators. The phrasing

    less than or equal to

    seems more complex than

    at most

    because it implies the use of disjunction (logical “or”) in its evaluation. The other equivalent phrase

    not greater than

    introduces negation into a simple comparison.

As a tiny “easter egg”, look for uses of “at least” en passant in the preceding few sentences! If they went by without being obvious, perhaps that serves as evidence of point 1 above. 😉

“Linguistics” vs. Mathematics?

I happened across an interesting post on Chris Okasaki’s blog, titled Less than vs Greater than. Let me suggest that you read it before continuing here.

I would paraphrase his point about errors he observed in students’ programs as follows:

A student who writes an expression such as

expL < expR

often appears to lock on the concept of “something being smaller” and then become mentally stuck on “dealing with smaller stuff”, even if that is not appropriate for the meaning of expL and expR at that point in the program.

He illustrates his point very nicely with a flawed binary search.

For quite some time I’ve tended to write expressions of the form

lowerBound <= someVar && someVar < upperBound

to express that someVar is within the half-open range

[lowerBound..upperBound)

e.g. array subscripts that must be at least zero, but less than the length of the array. The visual hint of placing the variable textually between the limiting values seemed nicely mnemonic to me. (From there, it has also seemed natural to experiment with preferring left-to-right, lesser-to-greater ordering consistently in comparison expressions, although I’m not suggesting that as a universal coding convention).

However, I was quite surprised by the first comment, which described as “linguistically wrong” the common C-language-based idiom

if (5 == i)

(intended to catch typos involving only a single equal sign). The commenter said

Linguistically that’s wrong. You’re not testing 5 to see if it’s equal to i, you’re testing i.

which implies an asymmetrical interpretation of equality as being a “test” on its left-hand value!

By ignoring the fact that == is simply an assertion that its operands are equal, that interpretation fails on many completely valid cases, such as

(a + 1 == b - 1)

and the first clause

lowerBound <= someVar

of the “within bounds” cliché above.

That misinterpretation of == seems consistent with a variety of incorrect or awkward uses of boolean expressions widely seen. Many of us have probably seen code (or read blog posts about code) resembling:

boolean fooIsEmpty;
...
if (foo == null) {
    fooIsEmpty = true;
} else {
    if (foo.length() == 0) {
        fooIsEmpty = true;
    } else {
        fooIsEmpty = false;
    }
}
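The whole construction is a single boolean expression in disguise. A minimal sketch (in Scala rather than Java; fooIsEmpty and foo are just the illustrative names from the fragment above):

```scala
// The nested if/else collapses to one boolean expression:
// foo is "empty" when it is null or has length zero.
def fooIsEmpty(foo: String): Boolean =
  foo == null || foo.length == 0
```

The short-circuiting || also preserves the original code’s guard against calling length on a null reference.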

I suspect two culprits behind this kind of abuse:

  1. Failure to include the right kind and amount of Mathematics in the education of a programmer, and
  2. The C/FORTRAN use of = for assignment.

With respect to point 1, I believe that Boolean Algebra is simply a fundamental skill for programming. The ability to manipulate and understand boolean expressions is just as important as the ability to deal with numeric expressions for most of the programming I see daily. It is understandable that a programmer whose training never addressed boolean expressions except in the context of if() or while() would be uncomfortable with other uses, but that’s a problem to be solved, not a permanent condition to be endured.

Regarding point 2, much of what I’ve read or experienced supports the idea that FORTRAN—and later, C—were more commonly used in the US than Algol or other alternatives due to political, cultural, and commercial issues rather than technical ones. (I recommend Dijkstra’s “A new science, from birth to maturity” and Gabriel’s “Worse is Better” as good starting points.)

Whether that conclusion is valid or flawed, the decision to use = for the (asymmetrical!) “assignment” operation immediately creates two new problems:

  1. The need to express the “equality test” in a way that won’t be confused with assignment; and
  2. The risk that users of the language become confused about the symmetry of equality, due to guilt by association with the “assignment” operator.

Algol—and its descendants, including Pascal—avoided those problems by using the asymmetrical := for assignment. The ! character is used as an operator or suffix in other languages to express destructive value-setting. But Java, C#, and others, have followed the FORTRAN/C convention, with the predictable effects.

Given such far-reaching consequences from a single character in the source code, I’m reminded again how important it is to make good choices in coding style, API design, and other naming contexts.


Postscript:

The left-to-right ordering of the number line has some interesting cultural and even individual baggage. I recall reading about a mathematician whose personal mental model was of negative values being behind him and positive values receding into the distance in front of him.

I believe it was in connection with the Dutch National Flag Problem that Edsger Dijkstra wrote about a student whose native language was written right-to-left. While other students had designed programs with indices increasing from zero, that student had produced an equally-valid program with an index that decreased from the maximal value.

Design by proof

The idea of proving programs correct has been around (and hotly debated) for roughly forty years. The irony is that some proof advocates anticipated by more than thirty years what the agile advocates are now saying:

Designing with verification in mind improves the quality of the resulting design.

To be even more specific, compare these statements:

Edsger Dijkstra (paraphrased):

    Design the proof, then write the code so that it meets the obligations of the proof.

Kent Beck (paraphrased):

    Write the test first, then write the code so that it passes the test.

I’ve experienced the benefits of following both pieces of advice. Let me apply this approach to a little bit of design which I’ll then use in the Project Euler series. There’s a small amount of algebra involved, but if you don’t want to bother with that, let me request that you at least look at the last section for the punchline.

Sum thing

Consider the following sums, where i in each case ranges from 1 to n:

Power   Sum    Expanded form            Closed form        Factored form
0       ∑ i⁰   1 + 1 + 1 + … + 1        n¹                 n
1       ∑ i¹   1 + 2 + 3 + … + n        1/2 n² + 1/2 n¹    n(n+1)/2
2       ∑ i²   1² + 2² + 3² + … + n²    ?                  ?
3       ∑ i³   1³ + 2³ + 3³ + … + n³    ?                  ?

Let’s assume that we need the solutions for the last two rows (and Google is down at the moment 😉 ). Using the “designing with verification in mind” approach, we can work them out for ourselves.

Sum of squares

Noticing that ∑ i⁰ is a polynomial of degree 1, and ∑ i¹ is a polynomial of degree 2, we might suspect that the sum of i² is a polynomial of degree 3. If so, the problem amounts to finding the coefficients for ∑ i² = an³ + bn² + cn + d. Let’s assume that as the structure of the solution, and apply test cases to that assumption.

Test case 0: The sum must be zero when n is zero; a·0³ + b·0² + c·0 + d = d, therefore d must be zero. That streamlines the polynomial to ∑ i² = an³ + bn² + cn.

Test case 1: Adding (n+1)² to the sum for n must be equal to the result of substituting (n+1) for n in the polynomial.

  an³ + bn² + cn + (n+1)²
= an³ + (b+1)n² + (c+2)n + 1

and

  a(n+1)³ + b(n+1)² + c(n+1)
= an³ + (3a+b)n² + (3a+2b+c)n + (a+b+c)

These two reduced expressions can only be equal if we can satisfy the equations implied by the coefficients for n², n¹, and n⁰:

n?   Coefficients     Satisfied if
n²   b+1 = 3a+b       a = 1/3
n¹   c+2 = 3a+2b+c    b = 1/2
n⁰   1 = a+b+c        c = 1/6

So we have a solution of 1/3 n³ + 1/2 n² + 1/6 n, which we can factor to n(n+1)(2n+1)/6.
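Although it is not part of the derivation, the result is easy to sanity-check by comparing against a brute-force sum (sumOfSquares and bruteSquares are hypothetical names introduced here for illustration):

```scala
// Closed form derived above: the sum of i² for i in 1..n.
def sumOfSquares(n: Long): Long = n * (n + 1) * (2 * n + 1) / 6

// Brute-force version, for comparison only.
def bruteSquares(n: Long): Long = (1L to n).map(i => i * i).sum
```

For example, both give 338350 for n = 100.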

Sum of cubes

Based on that success, we’ll pursue the coefficients for ∑ i³ = an⁴ + bn³ + cn² + dn + e.

Test case 0: The sum must be zero when n is zero; a·0⁴ + b·0³ + c·0² + d·0 + e = e, therefore e must be zero, therefore ∑ i³ = an⁴ + bn³ + cn² + dn.

Test case 1: Adding (n+1)³ to the sum for n must be equal to the result of substituting (n+1) for n in the polynomial.

  an⁴ + bn³ + cn² + dn + (n+1)³
= an⁴ + (b+1)n³ + (c+3)n² + (d+3)n + 1

and

  a(n+1)⁴ + b(n+1)³ + c(n+1)² + d(n+1)
= an⁴ + (4a+b)n³ + (6a+3b+c)n² + (4a+3b+2c+d)n + (a+b+c+d)

Again, we look at the coefficients for descending powers of n:

n?   Coefficients          Satisfied if
n³   b+1 = 4a+b            a = 1/4
n²   c+3 = 6a+3b+c         b = 1/2
n¹   d+3 = 4a+3b+2c+d      c = 1/4
n⁰   1 = a+b+c+d           d = 0

So we have a solution of 1/4 n⁴ + 1/2 n³ + 1/4 n², which we can factor to (n(n+1)/2)².
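That factored form says the sum of the first n cubes is the square of the n-th triangular number. The same kind of sanity check applies (again, hypothetical helper names, not part of the derivation):

```scala
// Closed form derived above: the sum of cubes is the square of n(n+1)/2.
def sumOfCubes(n: Long): Long = {
  val triangular = n * (n + 1) / 2
  triangular * triangular
}

// Brute-force version, for comparison only.
def bruteCubes(n: Long): Long = (1L to n).map(i => i * i * i).sum
```

For example, 1 + 8 + 27 + … + 1000 = 55² = 3025 for n = 10.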

The punchline

“That’s no exponential, that’s my polynomial!” But seriously, folks…

The mathematicians among us–those who haven’t died of boredom–will recognize what we’ve done as an informal (hand-waving) inductive proof.

But I hope that the programmers among us will recognize what we’ve done as test-driven development and (re)factoring. Allowing verification to drive design offers a number of benefits, including:

  • The result itself is verifiable.
  • Each step along the way is more obvious and involves less risk.
  • The result is usually cleaner and more understandable.

Incidentally, I first encountered the term “factoring” applied to programs in Leo Brodie’s book Thinking Forth (1984). Given my background in Mathematics, it immediately clicked with me. Brodie introduced the idea with a quotation which seems entirely contemporary, once we get past the dated terminology:

“If a module seems almost, but not quite, useful from a second place in the system, try to identify and isolate the useful subfunction. The remainder of the module might be incorporated in its original caller.”

(The quotation is from “Structured Design“, by W.P. Stevens, G.J. Myers, and L.L. Constantine, IBM Systems Journal, vol. 13, no. 2, 1974, © 1974 by International Business Machines Corporation)

As a working developer, I’m excited by the potential of these techniques and the underlying ideas. But as a card-carrying member of the Sid Dabster fan club, I’m constantly amazed at the number of ideas that were actively in play in our field thirty or forty years ago that are now regarded as new, innovative, or even radical.



A composite function

The previous post began working with problem 3 from Project Euler. The allFactors function produced a list of the prime factors of its argument (all occurrences of all prime factors), such as:

  scala> val nums = allFactors(96000)
  nums: List[Long] = List(5, 5, 5, 3, 2, 2, 2, 2, 2, 2, 2, 2)

It would be nice to have a representation that’s a bit closer to the conventional 5³ × 3¹ × 2⁸, which makes clear the exponent for each prime factor.

In thinking about solutions to that issue, we get a prime use case for Scala’s composite nature as a hybrid OO/FP language. In addition, we encounter another heuristic for avoiding errors (functional or otherwise).

A separate piece

One easy solution is to write a separate function that collapses a list of values into a list of value–count pairs, for which we can define a type alias as:

  type PrimePower = (Long, Int)

The partition method on List is perfect for our current need; given a predicate (represented as a boolean-valued function), it splits the list into the elements that satisfy the predicate and those that don’t.

  scala> nums.partition(_ == 5)
  res82: (List[Long], List[Long]) = (List(5, 5, 5),List(3, 2, 2, 2, 2, 2, 2, 2, 2))

Of course, the length of the matching group is the exponent, so the function almost writes itself:

  def groupCount(fs: List[Long]): List[PrimePower] = fs match {
    case Nil => Nil
    case p :: _ =>
      val (ps, rest) = fs.partition(_ == p)
      (p, ps.length) :: groupCount(rest)
  }

If the factor list is empty, the result is an empty list. On the other hand, if fs begins with some prime p, we split all occurrences of p from the rest of the list, and return a list containing the PrimePower for p consed onto the groupCounted rest of the original list.

  scala> groupCount(nums)              
  res86: List[(Long, Int)] = List((5,3), (3,1), (2,8))

The advantages of writing this as a separate function are ease of testing and availability for re-use. On the other hand, it would be interesting to see the consequences of directly producing the prime powers.

One list to bind them

For reference, here’s the function from last time, with the inner alternatives labeled so that we can refer to the parts we need to consider:

  def allFactors(t: Long) = {
    def loop(q: Long, fs: List[Long], d: Long): List[Long] =
      if (q == 1)
        fs                        // case 1
      else if (q < d * d)
        q :: fs                   // case 2
      else if (q % d == 0)
        loop(q / d, d :: fs, d)  // case 3
      else
        loop(q, fs, d + 1)        // case 4
    loop(t, Nil, 2)
  }

The result will be List[PrimePower], which is the new type of the second argument and result of loop.

Case 2 deals with a quotient which we know to be prime, because all possible divisors have been exhausted. Given that q occurs exactly once, we replace q :: fs with (q, 1) :: fs as the result.

Case 3 requires the most thought. It only accounts for a single occurrence of a divisor; q / d removes one factor of d from q and d :: fs saves that occurrence in the list being accumulated.

Help me, Rhonda!

We could use a helper function to handle d, based on whether it is another occurrence of the previous divisor or a new divisor not previously seen. Because that distinction is based on data in the list, it seems natural to drive the logic by matching on the list:

  def countFactor(f: Long, fs: List[PrimePower]) = fs match {
    case (d, c) :: rest if f == d => (d, c + 1) :: rest
    case _                        => (f, 1) :: fs
  }

To test countFactor, we need to iterate it over a sequence of values for f with fs initially Nil. This looks like a job for a fold!

  def testCountFactor(ls: List[Long]) =
    ls.foldRight(Nil: List[PrimePower])(countFactor(_, _))

In the interest of space, I won’t include all the test scenarios: Nil, a list of length 1, all values equal, all values different, etc. As the old saying goes, checking those is “left as an exercise for the reader”.
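For completeness, those scenarios might be written out as follows (the definitions are repeated so that the sketch stands alone):

```scala
type PrimePower = (Long, Int)

def countFactor(f: Long, fs: List[PrimePower]): List[PrimePower] = fs match {
  case (d, c) :: rest if f == d => (d, c + 1) :: rest
  case _                        => (f, 1) :: fs
}

def testCountFactor(ls: List[Long]): List[PrimePower] =
  ls.foldRight(Nil: List[PrimePower])(countFactor(_, _))

// The scenarios mentioned above:
assert(testCountFactor(Nil) == Nil)                        // empty list
assert(testCountFactor(List(7L)) == List((7L, 1)))         // single value
assert(testCountFactor(List(2L, 2L, 2L)) == List((2L, 3))) // all equal
assert(testCountFactor(List(5L, 3L, 2L)) ==
       List((5L, 1), (3L, 1), (2L, 1)))                    // all different
```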

True confessions

Substituting a call to countFactor into case 3, along with the other changes already discussed, gives us this…

  def allFactorsCounted(t: Long) = {
    def loop(q: Long, fs: List[PrimePower], d: Long): List[PrimePower] =
      if (q == 1)
        fs                                    // case 1
      else if (q < d * d)
        (q, 1) :: fs                          // case 2
      else if (q % d == 0)
        loop(q / d, countFactor(d, fs), d)    // case 3
      else
        loop(q, fs, d + 1)                    // case 4
    loop(t, Nil, 2)
  }

…which behaves as follows…

  scala> val nums2 = allFactorsCounted(96000)                                            
  nums2: List[(Long, Int)] = List((5,1), (5,2), (3,1), (2,8))

…or, I should say, “misbehaves as follows”! What went wrong?

The root cause of this error is the special-case code in case 2, which is doing two things at once: handling a prime factor (in its own way!) and terminating the recursion. Put another way, the design breaks the symmetry between cases 2 and 3, both of which identify a prime factor and do something with it. We can make that symmetry more obvious by rewriting case 2 as in the following:

  def allFactorsCounted(t: Long) = {
    def loop(q: Long, fs: List[PrimePower], d: Long): List[PrimePower] =
      if (q == 1)
        fs
      else if (q < d * d)
        loop(1,   countFactor(q, fs), d)
      else if (q % d == 0)
        loop(q / d, countFactor(d, fs), d)
      else
        loop(q, fs, d + 1)
    loop(t, Nil, 2)
  }

And now each case in the inner loop has exactly one responsibility:

  1. Terminates the computation because all factors have been found;
  2. Removes q as a known factor;
  3. Removes d as a known factor;
  4. Moves past an unsuccessful divisor.

Mixing concerns in a single part of the code creates these risks:

  • The code can become harder to understand;
  • Symmetries can be hidden or broken;
  • Change driven by (only) one concern can have unintended consequences.

For me, the moral of this error is this:

Each part of the code (especially alternatives within a single function) should “Do one thing, and do it well!”

An object lesson

In the OO world, primitive obsession is regarded as a code smell. Notice that q and fs together represent the current state of our factorization; we have the invariant that q times the product of fs equals the original number to be factored! Failing to maintain that invariant would likely break the code. The clue to watch for is that the (tail-)recursive calls of the inner loop function either alter both of those parameters or neither of them. This is telling us that they are related aspects of a single concept.
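That invariant can be stated directly in code; here is a sketch (invariantHolds and pow are names introduced for illustration, not part of the post’s API):

```scala
type PrimePower = (Long, Int)

// p raised to the power c, by repeated multiplication.
def pow(p: Long, c: Int): Long =
  (1 to c).foldLeft(1L)((acc, _) => acc * p)

// The invariant: the non-factored part q, times the product of the
// prime powers found so far, must equal the original number.
def invariantHolds(original: Long, q: Long, fs: List[PrimePower]): Boolean =
  q * fs.map { case (p, c) => pow(p, c) }.product == original
```

For example, the completed factorization of 96000 satisfies it: invariantHolds(96000, 1, List((5, 3), (3, 1), (2, 8))) is true, and so is the starting state invariantHolds(96000, 96000, Nil).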

OO gives us a technique for managing related data: put them in an object as private fields, expose the minimum methods required by the object’s client, and ensure that those methods maintain the invariants. We can refactor the quotient and factor list into the following class:

  class Factorization(val n: Long, val fs: List[PrimePower]) {
    def isComplete = n == 1
    def move(d: Long): Factorization = {
      def remove(i: Int, q: Long, d: Long): Factorization =
        if (q % d == 0)
          remove(i + 1, q / d, d)
        else
          new Factorization(q, (d, i) :: fs)
      if (n % d == 0)
        remove(0, n, d)
      else
        this
    }
  }

Each instance of Factorization represents some stage of the factorization of a number: n is the non-factored part and fs holds the factors previously found. An instance can tell whether it is a complete factorization (all factors have been extracted). Finally, an instance can return a Factorization resulting from move-ing all occurrences of a given divisor from the non-factored part to the factor list.

Notice that a Factorization is immutable; the result of a move will be the same instance if the divisor isn’t a factor, or a new instance resulting from the extraction of that divisor.

With this class defined, our factorization function is even more conceptually streamlined:

  def factorize(n: Long): List[PrimePower] = {
    def loop(f: Factorization, d: Long): Factorization = {
      if (f.isComplete)
        f
      else if (f.n < d * d)
        loop(f.move(f.n), d)
      else
        loop(f.move(d), d + 1)
    }
    loop(new Factorization(n, Nil), 2).fs
  }

If the factorization is complete, then its factor list is the desired result. Otherwise, ask the factorization to remove a candidate divisor and continue with the result of that request. The inner loop function is only concerned with managing the sequence of divisors until the factorization is complete.

So the second moral of this post is this:

Whenever part of a function’s parameter list represents some concept, consider making that explicit by using a class for the concept.

Doing so will allow that concept to be implemented and tested independently, and will improve the clarity of the remaining code. Although the example in this post is a very small programming task, I think it makes a nice demonstration of the value of Scala’s hybrid nature.

Prime time programming

“The purpose of computing is insight, not numbers.” – Richard Hamming

My interest in pursuing this series is not in the numbers, nor the Mathematics, but in the process of constructing reliable solutions. I’m also learning Scala, and want to understand how its hybrid OO/FP nature may offer fresh perspectives to my pursuit of programming.

The largest factor

The third problem from Project Euler asks for the largest prime factor of a number N = 600,851,475,143. One approach would be to produce the prime factors of N and return the max of that set. An obvious way to do that is to generate primes and try dividing each into N, but:

  1. Generating (or testing for) primes requires more code than generating divisors of N.

    We can find all factors of N by testing divisors 2 … n (where n = √N). The sieve of Eratosthenes yields all primes at most n with effort of O(n log log n), compared with O(n) for simply dividing N by all of the candidate divisors.

  2. Factoring just one single number doesn’t appear to provide a return on the investment of building the collection of primes.

    Had the problem called for factoring multiple numbers of comparable size, then the cost of constructing a set of primes could have been amortized over multiple uses. I’ll revisit this later.

  3. The smallest factor of N must be prime.

    The smallest factor of N can’t have any factors smaller than itself (else it wouldn’t be the smallest). If it doesn’t have any factors smaller than itself, it’s prime.

We can base a simple design on those ideas:

  • Removing all occurrences of the smallest factor of N leaves a quotient. Unless that quotient is 1, its largest prime factor is the largest prime factor of N. If the quotient is 1, the last factor removed is the largest prime factor of N.
  • We find factors by checking increasingly larger candidate divisors, beginning with 2. We don’t need to restart the divisors for a new quotient, because all smaller divisors have already been eliminated.
  • If the square of the current candidate divisor exceeds the current quotient then that quotient itself is prime (because all of its other possible divisors have already been eliminated).

First version

It’s more trouble to write out those bullet points in English than it is to write the code:

  def lastFactor(t: Long) = {
    def loop(q: Long, f: Long, d: Long): Long =
      if (q == 1)
        f
      else if (q < d * d)
        q
      else if (q % d == 0)
        loop(q / d, d, d)
      else
        loop(q, f, d + 1L)
    loop(t, 1L, 2L)
  }

As an aside, let me mention an issue of style. I’ve noticed that published functional programs tend to be written using short (some would say “cryptic”) variable names. Here’s the same definition, using words instead of initials:

  def lastFactor(target: Long) = {
    def loop(quotient: Long, factor: Long, divisor: Long): Long =
      if (quotient == 1)
        factor
      else if (quotient < divisor * divisor)
        quotient
      else if (quotient % divisor == 0)
        loop(quotient / divisor, divisor, divisor)
      else
        loop(quotient, factor, divisor + 1)
    loop(target, 1, 2)
  }

My speculation is that the functional style tends to emphasize the structure of the function itself; longer names seem to put the emphasis on the variables. A “meatier” style makes it harder to see the “bones”. As I experiment with various approaches to a problem, I find myself using shorter names in an attempt to highlight the structural differences. I’ll stay with that style here (with apologies to anyone who finds it harder to read).

Back on the main topic, notice that the inner function loop deals with two different issues in the two tail-recursive calls:

  1. loop(q / d, d, d) ensures that all occurrences of the current trial divisor are factored out (retaining the current divisor as the new largest factor), and
  2. loop(q, f, d + 1L) proceeds to the next trial divisor (with the current quotient and factor).

As I mentioned in an earlier post, Scala compiles direct tail recursion into tight, fast bytecode that doesn’t grow the activation stack. On my laptop, it finds the largest prime factor of 600,851,475,143 in about 66µs. (If you don’t want to waste a few microseconds running the code yourself, the answer is 6,857.)

By way of contrast, an equivalent function using a pre-computed list of primes requires only about 15µs. However, producing that list (all primes from 2 to 775,147) via a simple sieve takes about 61ms (yes, that’s 61,000µs). Clearly we’d need to use that list quite a few times to recover the investment.

A factor saved…

Even if I didn’t know that future Project Euler problems will revisit factoring (my experiments are ahead of my writing), I would prefer not to throw information away. How much would the solution change if we decided to keep all factors instead of just the largest? Not much; just a few adjustments to the inner loop function:

  • Parameter f: Long (the most recent factor) becomes fs: List[Long] (a list of all factors).
  • The initial value for fs is Nil (the empty list).
  • Everywhere f gets a new value, fs gets a value consed onto the front.

The new function below incorporates those changes:

  def allFactors(t: Long) = {
    def loop(q: Long, fs: List[Long], d: Long): List[Long] =
      if (q == 1)
        fs
      else if (q < d * d)
        q :: fs
      else if (q % d == 0)
        loop(q / d, d :: fs, d)
      else
        loop(q, fs, d + 1)
    loop(t, Nil, 2)
  }

Running that version with the specified value of N returns the complete list of factors…

scala> allFactors(600851475143L)
res19: List[Long] = List(6857, 1471, 839, 71)

…and does so almost for free, taking about 67µs instead of 66µs on my laptop. (I’ve only run a few million tests, so that small a difference is almost statistical noise.) That’s not really surprising, as the only extra overhead is a call to :: as each factor is saved. Even if our target had been a power of two (producing the most factors) there would have been only about 40 factors on the list.

…is a factor folded

“Cut-and-paste” is regarded as bad practice in OO. The DRY principle says that if we find ourselves repeating the same code, or pattern of code, we should extract a reusable form of the common logic. The same principle applies in FP.

Notice that lastFactor and allFactors have exactly the same higher-level structure; the differences are summarized in the following table, which also suggests a general case for some unknown type T.

                         lastFactor            allFactors                        general case
  factor type            Long                  Long                              Long
  accumulator type       Long                  List[Long]                        T
  "zero" value           1                     Nil                               ?
  composition type       (Long, Long) → Long   (Long, List[Long]) → List[Long]   (Long, T) → T
  composition function   replacement:          cons:                             ?
                         (a, b) → a            (f, fs) → f :: fs

So, just for fun, here is a generic Scala function that processes the prime factors of an argument, using a caller-supplied "zero" and composition function:

  def foldFactors[T](t: Long)(z: T)(f: (Long, T) => T) = {
    def loop(q: Long, acc: T, d: Long): T =
      if (q == 1)
        acc
      else if (q < d * d)
        f(q, acc)
      else if (q % d == 0)
        loop(q / d, f(d, acc), d)
      else
        loop(q, acc, d + 1)
    loop(t, z, 2)
  }

We can get the "last" factor and the complete list of factors by providing the appropriate zero and composition function for each.

  scala> foldFactors[Long](n)(1)((a,b) => a)       
  res32: Long = 6857
  scala> foldFactors[List[Long]](n)(Nil)(_ :: _)
  res33: List[Long] = List(6857, 1471, 839, 71)

To be realistic, allFactors is really our best solution. It returns the factors as List[Long], which means that we get foldLeft for free, along with everything else that List provides. But this was an irresistible opportunity to point out that higher-order functions and type parameterization address the DRY principle that we already know from OO.
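For instance, lastFactor’s answer falls out of allFactors’ result with a single fold (a sketch reusing the definition from earlier in the post):

```scala
// allFactors, as defined earlier in the post.
def allFactors(t: Long): List[Long] = {
  def loop(q: Long, fs: List[Long], d: Long): List[Long] =
    if (q == 1) fs
    else if (q < d * d) q :: fs
    else if (q % d == 0) loop(q / d, d :: fs, d)
    else loop(q, fs, d + 1)
  loop(t, Nil, 2)
}

// The largest prime factor is just a foldLeft over the list of factors.
val largest = allFactors(600851475143L).foldLeft(1L)((a, b) => math.max(a, b))  // 6857
```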

To be continued

I think that's enough to digest for now. There's another punch line that resulted from exploring this problem, but it deserves its own write-up. If you want a head-start on it, think about the fact that the result of allFactors is a bit inconvenient for some purposes, especially when the argument has many factors, as in:

  scala> allFactors(86400)
  res35: List[Long] = List(5, 5, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2)

Contrast that with the "normal" Mathematical representation of 5² × 3³ × 2⁷ for a hint. That's where I found myself reaching for the hybrid nature of Scala.

I noticed that Basildon Coder had a post on this same problem. I held off reading it until after working through my own approach, so that I could look at the similarities and differences in two independently-created solutions. Perhaps you'll find that interesting as well.

Go with the flow

The second problem from Project Euler asks for the sum of the even Fibonacci numbers which are at most 4,000,000.

An aside on the numbers

The problem’s description of the Fibonacci sequence is flawed (or at least non-standard); the problem statement begins the sequence with 1 and 2, showing the first 10 terms as:

1, 2, 3, 5, 8, 13, 21, 34, 55, 89

It is common to see the definition given as:

fib₁ = 1, fib₂ = 1, fibₙ = fibₙ₋₁ + fibₙ₋₂

I’m persuaded by Dijkstra’s line of reasoning, and prefer to use the definition…

fib₀ = 0, fib₁ = 1, fibₙ = fibₙ₋₁ + fibₙ₋₂

…not only for the benefit of a zero origin, but also because it invites the mind to consider completing the story. The equations…

fibₙ = fibₙ₋₁ + fibₙ₋₂

…and…

fibₙ − fibₙ₋₁ = fibₙ₋₂

…are clearly equivalent, but the second one may help us realize that we can extend the conventional list to be infinite in both directions…

n      -7   -6   -5   -4   -3   -2   -1    0    1    2    3    4    5    6    7
fibₙ   13   -8    5   -3    2   -1    1    0    1    1    2    3    5    8   13

…making fib a total function over the integers. I’ll put off discussing that interesting option until another time, and for the rest of this post use only natural numbers.

Basic solutions

The naive recursive function for the nth Fibonacci number…

  def fibR(n: Int): Int = {
    if (n == 0)
      0
    else if (n == 1)
      1
    else
      fibR(n - 1) + fibR(n - 2)
  }

…requires O(exp n) time to evaluate. We could go for a tail-recursive (iterative) version…

  def fibI(n: Int): Int = {
    def fib0(i: Int, a: Int, b: Int): Int =
      if (i == n) a else fib0(i + 1, b, a + b)
    fib0(0, 0, 1)
  }

…which executes in O(n) time. That’s an improvement, but still leaves us dealing with the Fibonacci sequence one number at a time.

Gently down the stream

The problem calls for a sum across multiple values from the Fibonacci sequence, and both of the above functions calculate “upwards” to reach the specified value. There’s no point in going through the sequence more than once (regardless of how efficiently we can do that), so let’s think of the sequence as a Stream[Int].

Element-wise addition of streams can be done as:

  def add(s1: Stream[Int], s2: Stream[Int]): Stream[Int] =
    Stream.cons(s1.head + s2.head, add(s1.tail, s2.tail))

A stream of the Fibonacci numbers is then:

  val fibS: Stream[Int] = Stream.cons(0, Stream.cons(1, add(fibS, fibS tail)))

And now we’re back to familiar territory from the first problem. We simply filter for the even values, apply the limit, and sum the results:

  scala> fibS.filter(_ % 2 == 0).takeWhile(_ <= 4000000).foldLeft(0)(_ + _)
  res17: Int = 4613732

We could stop with that solution, but there are a couple of additional techniques worth a mention.

“Go back, Jack, and do it again”

First, we could replace the stream with a simple Iterator[Int] that holds a pair of consecutive Fibonacci numbers:

  class Fibs extends Iterator[Int] {
    var state = (0, 1)
    def hasNext = true
    def next = {
      val (a, b) = state
      state = (b, a + b)
      a
    }
  }

Thanks to the Iterator trait, when we implement hasNext (is there another value?) and next (return the next value), we get everything needed to complete the solution:

  scala> new Fibs().filter(_ % 2 == 0).takeWhile(_ <= 4000000).foldLeft(0)(_ + _)
  res20: Int = 4613732

However, this still generates three times as many values as needed, because only every third Fibonacci number is even:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144

How could we generate only the even Fibonacci numbers? Notice this progression in the numbers above:

fib(0) = 0
fib(3) = 2
fib(6) = 8
fib(9) = 34
fib(12) = 144

So we’re interested in producing the sequence fib(3n) rather than fib(n) if we want to obtain the even values. In our Fibs class above, a and b are consecutive values, fib(n) and fib(n+1). From the original definition, we can apply just a little algebra:

fib(n+2) = fib(n+1) + fib(n) = b + a
fib(n+3) = fib(n+2) + fib(n+1) = (b + a) + b = 2 × b + a
fib(n+4) = fib(n+3) + fib(n+2) = (2 × b + a) + (b + a) = 3 × b + 2 × a

If n is a multiple of 3, then (n + 3) is the next multiple of 3. So the above formulas for fib(n+3) and fib(n+4) give us a way to write an iterator that produces only even Fibonacci numbers:

  class EvenFibs extends Iterator[Int] {
    var state = (0, 1)
    def hasNext = true
    def next = {
      val (a, b) = state
      state = (2 * b + a, 3 * b + 2 * a)
      a
    }
  }

This new iterator eliminates the need for the filter, reducing the solution to:

  scala> new EvenFibs().takeWhile(_ <= 4000000).foldLeft(0)(_ + _)              
  res30: Int = 4613732

This calculates exactly the values of interest, then uses standard methods to provide the sum. These last three versions retain the ability to re-use the parts that generate the Fibonacci numbers (or even Fibonacci numbers) for purposes other than the sum.
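For instance, the same iterator answers other questions unchanged; here's a sketch that lists (rather than sums) the small even Fibonacci numbers, with EvenFibs repeated so the block runs on its own:

```scala
class EvenFibs extends Iterator[Int] {
  var state = (0, 1)
  def hasNext = true
  def next = {
    val (a, b) = state
    state = (2 * b + a, 3 * b + 2 * a)
    a
  }
}

// List the even Fibonacci numbers below 100 instead of summing them:
val evens = new EvenFibs().takeWhile(_ < 100).toList
// evens == List(0, 2, 8, 34)
```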

For the sake of completeness, here’s the fully optimized version…

  def sumEvenFibs(n: Int) = {
    def iter(a: Int, b: Int, s: Int): Int =
      if (a > n) s else iter(2 * b + a, 3 * b + 2 * a, a + s)
    iter(0, 1, 0)
  }

…which buys performance by giving up that reusability.
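A quick check (sumEvenFibs repeated so the sketch runs on its own) reproduces the earlier result:

```scala
def sumEvenFibs(n: Int): Int = {
  def iter(a: Int, b: Int, s: Int): Int =
    if (a > n) s else iter(2 * b + a, 3 * b + 2 * a, a + s)
  iter(0, 1, 0)
}

// sumEvenFibs(4000000) yields 4613732, matching the REPL sessions above
```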

The burden of FP

A rude awakening…

When the fire alarm sounded at about 2:00 AM, I didn’t want to think about electrical engineering, hydrology, or locksmithing. I wanted a quick fix.

The tiniest of pinhole leaks in the attic water heater drain line had released a fine mist, some of which escaped the footprint of the catch pan. Condensing on the attic floor, it accumulated to the point of dripping through the crack between boards, trickling down to the upstairs hall ceiling and through the hole where the wiring ran from the smoke detector to the alarm system. Smoke detectors don’t like water.

(Of course, that little exercise in house debugging, with no advance warning and severe time pressure, is totally unlike anything experienced by a professional software developer. 😉 ) And that (believe it or not) brings me to Project Euler and the title of this post. I had heard of Project Euler before, but this post enticed me to visit the site. One look and I was hooked.

The site offers a series of little mathematical problems of gradually increasing difficulty. The result of each task is a number, which you submit for verification that you completed the task. The point, of course, is to think beyond the obvious brute force solution to produce one which continues to be feasible as the task scales.

All in all, the site is a nice collection of progressively larger lab rats. I know that some folks don’t care for such problems, but the very first one rewarded my attention.

Problem 1

My paraphrase: “Sum the positive integers below 1000 that are multiples of 3 or 5.” Simple (to the point of triviality) for any practicing programmer. So there must be more to it than the obvious (and very procedural) Scala implementation…

  def sumi(n: Int) = {
    var s = 0
    for (i <- 1 until n) {
      if (i % 3 == 0 || i % 5 == 0)
        s += i
    }
    s
  }

…and there is. (The ‘i‘ on the end of sumi stands for “imperative”.)

Mathematics and programming

Student programmers are often guided to look for general solutions, especially by their more mathematically-inclined teachers. But generalizing requires thinking about what kinds of generality may be needed, thinking about what is likely to change. OOP’s polymorphism and inheritance encourage us to think about kinds of generality that are harder to express in procedural programming. FP allows easy exploration of even more kinds of change. But having more options means having to make more choices. I found it interesting to take a hard look at the above code to see how many issues were hard-wired into its structure. FP makes it easier to break those issues apart, but at the cost of thinking more abstractly (math-speak for “more generally”).

Highlighting the issues

At the risk of consuming a fair bit of virtual ink, I want to identify some aspects of the task, highlighting the parts of the code that deal with each aspect.

  1. Sum the positive integers below “1000” that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    I’ve already cheated a little; the parameter n allows us to replace the 1000 with a different limit.

  2. Add “all the positive integers below 1000” that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    The generator 1 until n produces exactly the values one up to (but not including) n.

  3. Sum the positive integers below 1000 that are multiples of “3 or 5”.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    Confession time: obviously I made the subconscious assumption that the 1000 was more likely to change than the 3 or 5.

  4. Sum the positive integers below 1000 that are “multiples of” 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    I unconsciously represented the concept of “being a multiple of” with “has a remainder of zero when divided by”. It makes me ponder how often our programs completely replace a problem domain concept with an implementation choice without even noticing that we’ve done so.

  5. “Sum” the positive integers below 1000 that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    The (single!) idea of producing a sum has become multiple steps: initializing and updating a variable, and retrieving its value as the answer.

Functions and patterns

While I wholeheartedly appreciate the contributions of the pattern movement to OO, the approach is evolutionary, not revolutionary. A few decades ago, even the humble for-loop had to be recognized as expressing a standard solution to a recurring need. In the FP world, higher-order functions often provide the same sort of “glue” expressed in some patterns. For example, pipes-and-filters is function composition in OOP clothing.
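As a toy illustration of that last point (the names here are invented for the example), composing two functions has the same effect as chaining two filters in a pipeline:

```scala
// Two small "filters"...
val strip: String => String = _.trim
val shout: String => String = _.toUpperCase
// ...and the "pipe" connecting them is just function composition.
val pipeline: String => String = strip andThen shout

// pipeline("  hello ") yields "HELLO"
```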

The little sumi method above is certainly a function from its argument n to the corresponding sum. But the “function-like” quality stops at the outermost curly braces, because sumi is almost entirely conceived in terms of imperative commands acting on mutable state (i.e. side effects). That has some subtle consequences when compared with an approach that tries to “be functional” all the way down, in the same way that OO changed our perspective when we began to design in terms of objects all the way down.

I believe that most of us would recognize sumi as belonging to this general design:

  • generate a set of values,
  • test those values to select the desired subset, and
  • perform a specified action on that subset,

but those aspects of the design are not cleanly, independently represented in the code. Functional programming offers us a way to do that.
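For instance, the same computation can be written so that each aspect is a separate, visible step (a minimal sketch; sumF is a name invented here):

```scala
def sumF(n: Int): Int =
  (1 until n)                               // generate a set of values
    .filter(i => i % 3 == 0 || i % 5 == 0)  // select the desired subset
    .sum                                    // perform the action (summing)

// sumF agrees with the imperative sumi for any limit
```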

The next post will offer some examples as justification of that claim. Meanwhile, I need to catch up on the sleep interrupted by the fire alarm!

The life of a programmer

Before I end this post, let me close the fire-alarm connection. It’s easy to slip into fire-fighting (or fire-alarm-fighting) mode in day-to-day programming. And when doing so, it’s easy to become impatient with anything that doesn’t lead in a straight line to working code. I recall a colleague’s remark in a discussion about resolving a problem with a production system. Different participants offered a series of suggestions, all beginning with, “Well, you could…”, until he finally blurted out, “I don’t need more options; I need fewer options!”

Sometimes fixing the immediate problem quickly is the right thing to do, in the near term. I unplugged the shorted smoke detector, put a piece of plastic in place to collect the mist and deposit it into the catch pan, and went back to bed. But the second step is to go beyond the symptoms to the root cause and fix that. The plumber has replaced the failed pipe, and the technician with a replacement smoke detector is on the way.

But the third step is to prevent the problem from happening in the first place. Or, as the agile guys say, “Deliver tested working code quickly. But leave yourself set up properly to do the same thing on the next cycle!”

Maybe plastic pipe isn’t the best choice for difficult-to-reach applications, especially when elevated temperatures are involved.