Category Archives: Project Euler

Design by proof

The idea of proving programs correct has been around (and hotly debated) for roughly forty years. The irony is that some proof advocates anticipated by more than thirty years what the agile advocates are now saying:

Designing with verification in mind improves the quality of the resulting design.

To be even more specific, compare these statements:

Edsger Dijkstra (paraphrased):
  Design the proof, then write the code so that it meets the obligations of the proof.

Kent Beck (paraphrased):
  Write the test first, then write the code so that it passes the test.

I’ve experienced the benefits of following both pieces of advice. Let me apply this approach to a little bit of design which I’ll then use in the Project Euler series. There’s a small amount of algebra involved, but if you don’t want to bother with that, let me request that you at least look at the last section for the punchline.

Sum thing

Consider the following sums, where i in each case ranges from 1 to n:

Power   Sum     Expanded form            Closed form         Factored form
  0     ∑ i⁰    1 + 1 + 1 + … + 1        n¹                  n
  1     ∑ i¹    1 + 2 + 3 + … + n        (1/2)n² + (1/2)n¹   n(n+1)/2
  2     ∑ i²    1² + 2² + 3² + … + n²    ?                   ?
  3     ∑ i³    1³ + 2³ + 3³ + … + n³    ?                   ?

Let’s assume that we need the solutions for the last two rows (and Google is down at the moment 😉 ). Using the “designing with verification in mind” approach, we can work them out for ourselves.

Sum of squares

Noticing that ∑ i⁰ is a polynomial of degree 1, and ∑ i¹ is a polynomial of degree 2, we might suspect that ∑ i² is a polynomial of degree 3. If so, the problem amounts to finding the coefficients for ∑ i² = an³ + bn² + cn + d. Let’s assume that as the structure of the solution, and apply test cases to that assumption.

Test case 0: The sum must be zero when n is zero; a0³ + b0² + c0 + d = d, therefore d must be zero. That streamlines the polynomial to ∑ i² = an³ + bn² + cn.

Test case 1: Adding (n+1)² to the sum for n must be equal to the result of substituting (n+1) for n in the polynomial.

  an³ + bn² + cn + (n+1)²
= an³ + (b+1)n² + (c+2)n + 1

and

  a(n+1)³ + b(n+1)² + c(n+1)
= an³ + (3a+b)n² + (3a+2b+c)n + (a+b+c)

These two reduced expressions can only be equal if we can satisfy the equations implied by the coefficients for n², n¹, and n⁰:

Power of n   Coefficients        Satisfied if
    n²       b+1 = 3a+b          a = 1/3
    n¹       c+2 = 3a+2b+c       b = 1/2
    n⁰       1 = a+b+c           c = 1/6

So we have a solution of (1/3)n³ + (1/2)n² + (1/6)n, which we can factor to n(n+1)(2n+1)/6.

Sum of cubes

Based on that success, we’ll pursue the coefficients for ∑ i³ = an⁴ + bn³ + cn² + dn + e.

Test case 0: The sum must be zero when n is zero; a0⁴ + b0³ + c0² + d0 + e = e, therefore e must be zero, therefore ∑ i³ = an⁴ + bn³ + cn² + dn.

Test case 1: Adding (n+1)³ to the sum for n must be equal to the result of substituting (n+1) for n in the polynomial.

  an⁴ + bn³ + cn² + dn + (n+1)³
= an⁴ + (b+1)n³ + (c+3)n² + (d+3)n + 1

and

  a(n+1)⁴ + b(n+1)³ + c(n+1)² + d(n+1)
= an⁴ + (4a+b)n³ + (6a+3b+c)n² + (4a+3b+2c+d)n + (a+b+c+d)

Again, we look at the coefficients for descending powers of n:

Power of n   Coefficients           Satisfied if
    n³       b+1 = 4a+b             a = 1/4
    n²       c+3 = 6a+3b+c          b = 1/2
    n¹       d+3 = 4a+3b+2c+d       c = 1/4
    n⁰       1 = a+b+c+d            d = 0

So we have a solution of (1/4)n⁴ + (1/2)n³ + (1/4)n², which we can factor to (n(n+1)/2)².
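As a sanity check, both closed forms can be compared against brute-force sums over a range of small values of n (a quick sketch; the helper names are mine):

```scala
// Brute-force sums, for comparison against the closed forms derived above.
def sumSquares(n: Long): Long = (1L to n).map(i => i * i).sum
def sumCubes(n: Long): Long   = (1L to n).map(i => i * i * i).sum

// The closed forms: n(n+1)(2n+1)/6 and (n(n+1)/2) squared.
def closedSquares(n: Long): Long = n * (n + 1) * (2 * n + 1) / 6
def closedCubes(n: Long): Long   = { val t = n * (n + 1) / 2; t * t }

// The formulas agree with the direct sums for every n we try.
val ok = (0L to 100L).forall(n =>
  sumSquares(n) == closedSquares(n) && sumCubes(n) == closedCubes(n))
```

That's no substitute for the inductive argument, but it catches transcription slips cheaply.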

The punchline

“That’s no exponential, that’s my polynomial!” But seriously, folks…

The mathematicians among us (those who haven’t died of boredom) will recognize what we’ve done as an informal (hand-waving) inductive proof.

But I hope that the programmers among us will recognize what we’ve done as test-driven development and (re)factoring. Allowing verification to drive design offers a number of benefits, including:

  • The result itself is verifiable.
  • Each step along the way is more obvious and involves less risk.
  • The result is usually cleaner and more understandable.

Incidentally, I first encountered the term “factoring” applied to programs in Leo Brodie’s book Thinking Forth (1984). Given my background in Mathematics, it immediately clicked with me. Brodie introduced the idea with a quotation which seems entirely contemporary, once we get past the dated terminology:

“If a module seems almost, but not quite, useful from a second place in the system, try to identify and isolate the useful subfunction. The remainder of the module might be incorporated in its original caller.”

(The quotation is from “Structured Design“, by W.P. Stevens, G.J. Myers, and L.L. Constantine, IBM Systems Journal, vol. 13, no. 2, 1974, © 1974 by International Business Machines Corporation)

As a working developer, I’m excited by the potential of these techniques and the underlying ideas. But as a card-carrying member of the Sid Dabster fan club, I’m constantly amazed at the number of ideas that were actively in play in our field thirty or forty years ago that are now regarded as new, innovative, or even radical.



Project Euler 5: Multiplicity

The fifth problem from Project Euler asks for “the smallest number that is evenly divisible by all of the numbers from 1 to 20”. The key, for me, was to paraphrase that as “the least common multiple of 1 to 20”.

The least common multiple (lcm) of two natural numbers is their product divided by their greatest common divisor (gcd, a well-known lab rat). So, the stock definitions of lcm and gcd, along with one quick fold, are all we need.

  def divisibleByAll(lo: Int, hi: Int) = {
    def gcd(a: Int, b: Int) = gcd1(a max b, a min b)
    def gcd1(a: Int, b: Int): Int = if (b == 0) a else gcd1(b, a % b)
    def lcm(a: Int, b: Int) = a * (b / gcd(a, b))
    (lo to hi).foldLeft(1)(lcm(_, _))
  }

There is one subtlety in this code: gcd is only there to ensure that the (natural) arguments to gcd1 are correctly ordered (larger first). That reordering is only needed once; if a >= b, then a % b cannot be larger than b, so the recursion in gcd1 is guaranteed to maintain the larger-first argument ordering.

Finally, this design encounters integer overflow at 1..24; changing the inner functions to work with Long gets us up to 1..42 before Long overflow kicks in.
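The Long variant is mechanical; here is a sketch (the name divisibleByAllL is mine, chosen just to distinguish it):

```scala
// Same design as divisibleByAll, but with Long arithmetic
// to push the overflow boundary out to roughly 1..42.
def divisibleByAllL(lo: Int, hi: Int): Long = {
  def gcd(a: Long, b: Long): Long = gcd1(a max b, a min b)
  def gcd1(a: Long, b: Long): Long = if (b == 0) a else gcd1(b, a % b)
  def lcm(a: Long, b: Long): Long = a * (b / gcd(a, b))
  (lo to hi).foldLeft(1L)((acc, i) => lcm(acc, i))
}
```

For example, divisibleByAllL(1, 10) yields 2520, the smallest number evenly divisible by 1 through 10.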

It should be obvious that the answers for 1..21 and 1..22 are the same as for 1..20, but I had to do a double-take while scanning for the limits. Duh.

Three X eerhT

The fourth task from Project Euler is to find the largest palindrome that is a product of two three-digit numbers. (I’m assuming decimal. 😉 )

Name no one man

The first sub-task is to determine whether the decimal representation of an integer is palindromic. A direct approach yields this:

  def reverseInt(n: Int) = {
    def rev(a: Int, b: Int): Int =
      if (a == 0)
        b
      else
        rev(a / 10, b * 10 + a % 10)
    rev(n, 0)
  }
  def isPalindrome(n: Int) = {
    reverseInt(n) == n
  }

I’m not approaching this by way of string conversion, for three reasons:

  • Nothing in the problem statement involves strings;
  • Expressing the solution in terms of string manipulation doesn’t make the solution significantly easier to understand; and
  • Doing so would be slower.
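For contrast, here is a sketch of the string-based alternative being rejected; it is concise, but it allocates and scans two strings on every call:

```scala
// String-based palindrome test: shorter to write, but each call
// converts the number to a String and builds its reversal.
def isPalindromeStr(n: Int): Boolean = {
  val s = n.toString
  s == s.reverse
}
```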

Now on to the more interesting part.

Seek no monkees

The problem can be solved by searching a two-dimensional space, where each dimension ranges over the three-digit integers. A moment’s thought provides the following additional facts we can use:

  • Multiplication is symmetric, so there’s no need to examine both products of two different numbers (a * b and b * a).
  • The Scala expression 100 to 999 provides the three-digit integers.
  • We can ignore 100 as a candidate, because any multiple of 100 must end in a zero, which isn’t the first digit of any multi-digit decimal number in standard notation.
  • There’s a solution, because 101 × 101 = 10201, which is palindromic. (It’s nice to know that the search function doesn’t need to deal with an empty solution space.)

Going for the obvious, we can construct the products of non-redundant pairs of three-digit numbers, filter the products for palindromes, and capture the maximum of the palindromes:

  def maxOfPalindromes() = {
    (
      for {
        a <- 101 to 999
        b <- a to 999
        p = a * b
        if isPalindrome(p)
      } yield p
    ).foldLeft(0)((a, b) => a max b)
  }

It would be easy to parameterize this for the range to be searched (either by bounds or by number of digits), but I want to focus on the search performance.

Top spot

The version above takes about 152 ms on my laptop, which seems entirely too much time for this small a task. The following diagram shows the search space, as a context for the rest of the post.

[search2d.jpg: diagram of the two-dimensional search space]

The names in the diagram describe the state of the search at a given instant:

  • lo and hi — the lower and upper bounds (inclusive) of the range being searched;
  • a and b — the current position in the search space;
  • c — the value at that position (the product of a and b); and
  • x — the largest palindrome found thus far.

In his landmark book, A Discipline of Programming (published 32 years ago!), Edsger Dijkstra stated the “Linear Search Theorem”. I’ll paraphrase it informally for our purposes as:

When searching for the maximum value having some property, examine the values in decreasing order; the first satisfactory value found will be the maximum.

Assuming that we apply this to both a and b, the green area has been examined and the blue area has not. We can express the next state of the search in a decision table (each row’s condition assumes that conditions on all previous rows failed):

Condition         Action                                   next a   next b   next x
a < lo            search complete; x is the result         n/a      n/a      n/a
b < a             current row searched,                    a - 1    hi       x
                  move to next blue row
c <= x            c and rest of this row eliminated,       a - 1    hi       x
                  move to next blue row
isPalindrome(c)   c is new max palindrome                  a - 1    hi       c
                  (rest of this row is smaller values)
otherwise         c eliminated,                            a        b - 1    x
                  continue searching current row

The decision table translates directly to a tail-recursive function. I’ll use a bit of scope management to avoid computing c unless it will be used.

  def maxPalindromeIn(lo: Int, hi: Int) = {
    def loop(a: Int, b: Int, x: Int): Int = {
      if (a < lo)
        x
      else if (b < a)
        loop(a - 1, hi, x)
      else {
        val c = a * b
        if (c <= x)
          loop(a - 1, hi, x)
        else if (isPalindrome(c))
          loop(a - 1, hi, c)
        else
          loop(a, b - 1, x)
      }
    }
    loop(hi, hi, 0)
  }
  def maxPalindrome3x3() = maxPalindromeIn(101, 999)

This runs in about 445µs, a speedup factor of well over 300.
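For what it’s worth, a crude harness along these lines can reproduce such measurements (a hypothetical helper, not the one behind the published numbers; JVM warm-up matters, so treat any single run skeptically):

```scala
// Crude timing helper: run a thunk many times and report
// the average elapsed nanoseconds per run.
def avgNanos[A](reps: Int)(thunk: => A): Long = {
  val start = System.nanoTime()
  var i = 0
  while (i < reps) { thunk; i += 1 }
  (System.nanoTime() - start) / reps
}
```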

I prefer π

Let me round this post off with a few observations.

Performance is not the most important issue. I probably need to state that clearly, having quoted run time so often in this and recent Project Euler posts. I believe that speed, small memory footprint, and any other measure of economy, should be subordinate and subsequent to good design and correctness.

Those who forget history are condemned to repeat it. Over twenty years ago, when the dreaded “G” word was still occasionally heard in polite programming circles, a misguided letter to the editor of the Communications of the ACM attempted to claim that goto was necessary to effective programming and offered a small problem in defense of that claim. Dijkstra’s response addressed the errors and approach of the claim and its “solution”; his demonstration used the idea of the two-dimensional application of the Linear Search theorem.

There are important ideas that transcend programming model. The ideas in A Discipline of Programming are all presented in an imperative style, yet the clarity of design makes the concepts important to FP as well. That book is not an easy read (I worked rather than read through it when I got my copy in 1976) but it rewards the investment richly. David Gries gave a tutorial-style presentation of many of the same core ideas in The Science of Programming. I cannot possibly recommend them highly enough.

Compare and contrast. It’s useful to see how others approach a problem, after you’ve solved it yourself.

My RSS reader is subscribed to the fellow solvers above; many of them have other interesting things to say beyond Project Euler!



A composite function

The previous post began working with problem 3 from Project Euler. The allFactors function produced a list of the prime factors of its argument (all occurrences of all prime factors), such as:

  scala> val nums = allFactors(96000)
  nums: List[Long] = List(5, 5, 5, 3, 2, 2, 2, 2, 2, 2, 2, 2)

It would be nice to have a representation that’s a bit closer to the conventional 5³ × 3¹ × 2⁸, which makes clear the exponent for each prime factor.

In thinking about solutions to that issue, we get a prime use case for Scala’s composite nature as a hybrid OO/FP language. In addition, we encounter another heuristic for avoiding errors (functional or otherwise).

A separate piece

One easy solution is to write a separate function that collapses a list of values into a list of value–count pairs, for which we can define a type alias as:

  type PrimePower = (Long, Int)

The partition method on List is perfect for our current need; given a condition (represented as a boolean-valued function), it splits the list into the elements that satisfy the condition and those that don’t.

  scala> nums.partition(_ == 5)
  res82: (List[Long], List[Long]) = (List(5, 5, 5),List(3, 2, 2, 2, 2, 2, 2, 2, 2))

Of course, the length of the first list is the exponent, so the function almost writes itself:

  def groupCount(fs: List[Long]): List[PrimePower] = fs match {
    case Nil => Nil
    case p :: _ =>
      val (ps, rest) = fs.partition(_ == p)
      (p, ps.length) :: groupCount(rest)
  }

If the factor list is empty, the result is an empty list. On the other hand, if fs begins with some prime p, we split off all of the occurrences of p from the rest of the list, and return a list containing the PrimePower for p consed onto the groupCounted rest of the original list.

  scala> groupCount(nums)              
  res86: List[(Long, Int)] = List((5,3), (3,1), (2,8))

The advantages of writing this as a separate function are ease of testing and availability for re-use. On the other hand, it would be interesting to see the consequences of directly producing the prime powers.
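Because equal factors happen to be adjacent in the lists our factorizer produces, `span` (which splits at the first element failing the condition) would serve here as well as `partition`; a sketch of that variant, with the type alias repeated so it stands alone:

```scala
type PrimePower = (Long, Int)

// Variant of groupCount using span: since equal factors are adjacent,
// splitting at the first non-matching element collects the whole run.
def groupCountSpan(fs: List[Long]): List[PrimePower] = fs match {
  case Nil => Nil
  case p :: _ =>
    val (ps, rest) = fs.span(_ == p)
    (p, ps.length) :: groupCountSpan(rest)
}
```

The partition version is more robust (it would also work on an unsorted factor list), which is a point in its favor.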

One list to bind them

For reference, here’s the function from last time, with the inner alternatives labeled for reference and highlighting on the parts we need to consider:

  def allFactors(t: Long) = {
    def loop(q: Long, fs: List[Long], d: Long): List[Long] =
      if (q == 1)
        fs                        // case 1
      else if (q < d * d)
        q :: fs                   // case 2
      else if (q % d == 0)
        loop(q / d, d :: fs, d)  // case 3
      else
        loop(q, fs, d + 1)        // case 4
    loop(t, Nil, 2)
  }

The result will be List[PrimePower], which is the new type of the second argument and result of loop.

Case 2 deals with a quotient which we know to be prime, because all possible divisors have been exhausted. Given that q occurs exactly once, we replace q :: fs with (q, 1) :: fs as the result.

Case 3 requires the most thought. It only accounts for a single occurrence of a divisor; q / d removes one factor of d from q and d :: fs saves that occurrence in the list being accumulated.

Help me, Rhonda!

We could use a helper function to handle d, based on whether it is another occurrence of the previous divisor or a new divisor not previously seen. Because that distinction is based on data in the list, it seems natural to drive the logic by matching on the list:

  def countFactor(f: Long, fs: List[PrimePower]) = fs match {
    case (d, c) :: rest if f == d => (d, c + 1) :: rest
    case _                        => (f, 1) :: fs
  }

To test countFactor, we need to iterate it over a sequence of values for f with fs initially Nil. This looks like a job for a fold!

  def testCountFactor(ls: List[Long]) =
    ls.foldRight(Nil: List[PrimePower])(countFactor(_, _))

In the interest of space, I won’t include all the test scenarios: Nil, a list of length 1, all values equal, all values different, etc. As the old saying goes, checking those is “left as an exercise for the reader”.
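For the record, here are a few of those scenarios spelled out (the definitions are repeated so the sketch stands alone):

```scala
type PrimePower = (Long, Int)

def countFactor(f: Long, fs: List[PrimePower]): List[PrimePower] = fs match {
  case (d, c) :: rest if f == d => (d, c + 1) :: rest
  case _                        => (f, 1) :: fs
}

def testCountFactor(ls: List[Long]): List[PrimePower] =
  ls.foldRight(Nil: List[PrimePower])(countFactor(_, _))

// Empty list, length 1, all values equal, all values different.
val scenarios = Seq(
  testCountFactor(Nil)              == Nil,
  testCountFactor(List(7L))         == List((7L, 1)),
  testCountFactor(List(2L, 2L, 2L)) == List((2L, 3)),
  testCountFactor(List(2L, 3L, 5L)) == List((2L, 1), (3L, 1), (5L, 1))
)
```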

True confessions

Substituting a call to countFactor into case 3, along with the other changes already discussed, gives us this…

  def allFactorsCounted(t: Long) = {
    def loop(q: Long, fs: List[PrimePower], d: Long): List[PrimePower] =
      if (q == 1)
        fs                                    // case 1
      else if (q < d * d)
        (q, 1) :: fs                          // case 2
      else if (q % d == 0)
        loop(q / d, countFactor(d, fs), d)    // case 3
      else
        loop(q, fs, d + 1)                    // case 4
    loop(t, Nil, 2)
  }

…which behaves as follows…

  scala> val nums2 = allFactorsCounted(96000)                                            
  nums2: List[(Long, Int)] = List((5,1), (5,2), (3,1), (2,8))

…or, I should say, “misbehaves as follows”! What went wrong?

The root cause of this error is the special-case code in case 2, which is doing two things at once: handling a prime factor (in its own way!) and terminating the recursion. Put another way, the design breaks the symmetry between cases 2 and 3, both of which identify a prime factor and do something with it. We can make that symmetry more obvious by rewriting case 2 as in the following:

  def allFactorsCounted(t: Long) = {
    def loop(q: Long, fs: List[PrimePower], d: Long): List[PrimePower] =
      if (q == 1)
        fs
      else if (q < d * d)
        loop(1,   countFactor(q, fs), d)
      else if (q % d == 0)
        loop(q / d, countFactor(d, fs), d)
      else
        loop(q, fs, d + 1)
    loop(t, Nil, 2)
  }

And now each case in the inner loop has exactly one responsibility:

  1. Terminates the computation because all factors have been found;
  2. Removes q as a known factor;
  3. Removes d as a known factor;
  4. Moves past an unsuccessful divisor.

Mixing concerns in a single part of the code creates these risks:

  • The code can become harder to understand;
  • Symmetries can be hidden or broken;
  • Change driven by (only) one concern can have unintended consequences.

For me, the moral of this error is this:

Each part of the code (especially alternatives within a single function) should “Do one thing, and do it well!”

An object lesson

In the OO world, primitive obsession is regarded as a code smell. Notice that q and fs together represent the current state of our factorization; we have the invariant that q times the product of fs equals the original number to be factored! Failing to maintain that invariant would likely break the code. The clue to watch for is that the (tail-)recursive calls of the inner loop function either alter both of those parameters or neither of them. This is telling us that they are related aspects of a single concept.
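The invariant itself is easy to state in code; here is a hypothetical checker (not part of the design, purely for illustration):

```scala
type PrimePower = (Long, Int)

// The invariant: the non-factored part times the product of the
// extracted prime powers equals the original number.
def invariantHolds(original: Long, q: Long, fs: List[PrimePower]): Boolean = {
  val product = fs.foldLeft(1L) { case (acc, (p, c)) =>
    acc * (1 to c).foldLeft(1L)((m, _) => m * p)  // multiply in p, c times
  }
  q * product == original
}
```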

OO gives us a technique for managing related data: put them in an object as private fields, expose the minimum methods required by the object’s client, and ensure that those methods maintain the invariants. We can refactor the quotient and factor list into the following class:

  class Factorization(val n: Long, val fs: List[PrimePower]) {
    def isComplete = n == 1
    def move(d: Long): Factorization = {
      def remove(i: Int, q: Long, d: Long): Factorization =
        if (q % d == 0)
          remove(i + 1, q / d, d)
        else
          new Factorization(q, (d, i) :: fs)
      if (n % d == 0)
        remove(0, n, d)
      else
        this
    }
  }

Each instance of Factorization represents some stage of the factorization of a number: n is the non-factored part and fs holds the factors previously found. An instance can tell whether it is a complete factorization (all factors have been extracted). Finally, an instance can return a Factorization resulting from move-ing all occurrences of a given divisor from the non-factored part to the factor list.

Notice that a Factorization is immutable; the result of a move will be the same instance if the divisor isn’t a factor, or a new instance resulting from the extraction of that divisor.

With this class defined, our factorization function is even more conceptually streamlined:

  def factorize(n: Long): List[PrimePower] = {
    def loop(f: Factorization, d: Long): Factorization = {
      if (f.isComplete)
        f
      else if (f.n < d * d)
        loop(f.move(f.n), d)
      else
        loop(f.move(d), d + 1)
    }
    loop(new Factorization(n, Nil), 2).fs
  }

If the factorization is complete, then its factor list is the desired result. Otherwise, ask the factorization to remove a candidate divisor and continue with the result of that request. The inner loop function is only concerned with managing the sequence of divisors until the factorization is complete.
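Exercising the refactored version on the earlier example (class and function exactly as above, with the type alias repeated so the sketch stands alone):

```scala
type PrimePower = (Long, Int)

class Factorization(val n: Long, val fs: List[PrimePower]) {
  def isComplete = n == 1
  def move(d: Long): Factorization = {
    def remove(i: Int, q: Long, d: Long): Factorization =
      if (q % d == 0) remove(i + 1, q / d, d)
      else new Factorization(q, (d, i) :: fs)
    if (n % d == 0) remove(0, n, d) else this
  }
}

def factorize(n: Long): List[PrimePower] = {
  def loop(f: Factorization, d: Long): Factorization =
    if (f.isComplete) f
    else if (f.n < d * d) loop(f.move(f.n), d)
    else loop(f.move(d), d + 1)
  loop(new Factorization(n, Nil), 2).fs
}

val factors = factorize(96000)  // List((5,3), (3,1), (2,8))
```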

So the second moral of this post is this:

Whenever part of a function’s parameter list represents some concept, consider making that explicit by using a class for the concept.

Doing so will allow that concept to be implemented and tested independently, and will improve the clarity of the remaining code. Although the example in this post is a very small programming task, I think it makes a nice demonstration of the value of Scala’s hybrid nature.

Prime time programming

“The purpose of computing is insight, not numbers.” – Richard Hamming

My interest in pursuing this series is not in the numbers, nor the Mathematics, but in the process of constructing reliable solutions. I’m also learning Scala, and want to understand how its hybrid OO/FP nature may offer fresh perspectives to my pursuit of programming.

The largest factor

The third problem from Project Euler asks for the largest prime factor of a number N = 600,851,475,143. One approach would be to produce the prime factors of N and return the max of that set. An obvious way to do that is to generate primes and try dividing each into N, but:

  1. Generating (or testing for) primes requires more code than generating divisors of N.

    We can find all factors of N by testing divisors 2 … n (where n = √N). The sieve of Eratosthenes yields all primes at most n with effort of O(n log log n), compared with O(n) for simply dividing N by all of the candidate divisors.

  2. Factoring just one single number doesn’t appear to provide a return on the investment of building the collection of primes.

    Had the problem called for factoring multiple numbers of comparable size, then the cost of constructing a set of primes could have been amortized over multiple uses. I’ll revisit this later.

  3. The smallest factor of N must be prime.

    The smallest factor of N can’t have any factors smaller than itself (else it wouldn’t be the smallest). If it doesn’t have any factors smaller than itself, it’s prime.

We can base a simple design on those ideas:

  • Removing all occurrences of the smallest factor of N leaves a quotient. Unless that quotient is 1, its largest prime factor is the largest prime factor of N. If the quotient is 1, the last factor removed is the largest prime factor of N.
  • We find factors by checking increasingly larger candidate divisors, beginning with 2. We don’t need to restart the divisors for a new quotient, because all smaller divisors have already been eliminated.
  • If the square of the current candidate divisor exceeds the current quotient then that quotient itself is prime (because all of its other possible divisors have already been eliminated).

First version

It’s more trouble to write out those bullet points in English than it is to write the code:

  def lastFactor(t: Long) = {
    def loop(q: Long, f: Long, d: Long): Long =
      if (q == 1)
        f
      else if (q < d * d)
        q
      else if (q % d == 0)
        loop(q / d, d, d)
      else
        loop(q, f, d + 1L)
    loop(t, 1L, 2L)
  }

As an aside, let me mention an issue of style. I’ve noticed that published functional programs tend to be written using short (some would say “cryptic”) variable names. Here’s the same definition, using words instead of initials:

  def lastFactor(target: Long) = {
    def loop(quotient: Long, factor: Long, divisor: Long): Long =
      if (quotient == 1)
        factor
      else if (quotient < divisor * divisor)
        quotient
      else if (quotient % divisor == 0)
        loop(quotient / divisor, divisor, divisor)
      else
        loop(quotient, factor, divisor + 1)
    loop(target, 1, 2)
  }

My speculation is that the functional style tends to emphasize the structure of the function itself; longer names seem to put the emphasis on the variables. A “meatier” style makes it harder to see the “bones”. As I experiment with various approaches to a problem, I find myself using shorter names in an attempt to highlight the structural differences. I’ll stay with that style here (with apologies to anyone who finds it harder to read).

Back on the main topic, notice that the inner function loop deals with two different issues in the two tail-recursive calls:

  1. loop(q / d, d, d) ensures that all occurrences of the current trial divisor are factored out (retaining the current divisor as the new largest factor), and
  2. loop(q, f, d + 1L) proceeds to the next trial divisor (with the current quotient and factor).

As I mentioned in an earlier post, Scala compiles direct tail recursion into tight, fast bytecode that doesn’t grow the activation stack. On my laptop, it finds the largest prime factor of 600,851,475,143 in about 66µs. (If you don’t want to waste a few microseconds running the code yourself, the answer is 6,857.)

By way of contrast, an equivalent function using a pre-computed list of primes requires only about 15µs. However, producing that list (all primes from 2 to 775,147) via a simple sieve takes about 61ms (yes, that’s 61,000µs). Clearly we’d need to use that list quite a few times to recover the investment.
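For reference, the kind of simple sieve used in that comparison might look like this (a sketch, not the exact benchmark code):

```scala
// Simple sieve of Eratosthenes: all primes up to limit.
def primesUpTo(limit: Int): List[Int] = {
  val composite = new Array[Boolean](limit + 1)
  var p = 2
  while (p * p <= limit) {
    if (!composite(p)) {
      // Mark multiples of p, starting at p squared
      // (smaller multiples were marked by smaller primes).
      var m = p * p
      while (m <= limit) { composite(m) = true; m += p }
    }
    p += 1
  }
  (2 to limit).filter(i => !composite(i)).toList
}
```

The imperative while-loops are deliberate here; a sieve is one of the places where an array and mutation are the natural fit.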

A factor saved…

Even if I didn’t know that future Project Euler problems will revisit factoring (my experiments are ahead of my writing), I would prefer not to throw information away. How much would the solution change if we decided to keep all factors instead of just the largest? Not much; just a few adjustments to the inner loop function:

  • Parameter f: Long (the most recent factor) becomes fs: List[Long] (a list of all factors).
  • The initial value for fs is Nil (the empty list).
  • Everywhere f gets a new value, fs gets a value consed onto the front.

The changes are highlighted in the new function below:

  def allFactors(t: Long) = {
    def loop(q: Long, fs: List[Long], d: Long): List[Long] =
      if (q == 1)
        fs
      else if (q < d * d)
        q :: fs
      else if (q % d == 0)
        loop(q / d, d :: fs, d)
      else
        loop(q, fs, d + 1)
    loop(t, Nil, 2)
  }

Running that version with the specified value of N returns the complete list of factors…

scala> allFactors(600851475143L)
res19: List[Long] = List(6857, 1471, 839, 71)

…and does so almost for free, taking about 67µs instead of 66µs on my laptop. (I’ve only run a few million tests, so that small a difference is almost statistical noise.) That’s not really surprising, as the only extra overhead is a call to :: as each factor is saved. Even if our target had been a power of two (producing the most factors) there would have been only about 40 factors on the list.

…is a factor folded

“Cut-and-paste” is regarded as bad practice in OO. The DRY principle says that if we find ourselves repeating the same code, or pattern of code, we should extract a reusable form of the common logic. The same principle applies in FP.

Notice that lastFactor and allFactors have exactly the same higher-level structure; the differences are summarized in the following table, which also suggests a general case for some unknown type T.

                        lastFactor                  allFactors                          general case
factor type             Long                        Long                                Long
accumulator type        Long                        List[Long]                          T
"zero" value            1                           Nil                                 ?
composition type        (Long, Long) => Long        (Long, List[Long]) => List[Long]    (Long, T) => T
composition function    replacement: (a, b) => a    cons: (f, fs) => f :: fs            ?

So, just for fun, here is a generic Scala function that processes the prime factors of an argument, using a caller-supplied "zero" and composition function:

  def foldFactors[T](t: Long)(z: T)(f: (Long, T) => T) = {
    def loop(q: Long, acc: T, d: Long): T =
      if (q == 1)
        acc
      else if (q < d * d)
        f(q, acc)
      else if (q % d == 0)
        loop(q / d, f(d, acc), d)
      else
        loop(q, acc, d + 1)
    loop(t, z, 2)
  }

We can get the "last" factor and the complete list of factors by providing the appropriate zero and composition function for each.

  scala> foldFactors[Long](n)(1)((a,b) => a)       
  res32: Long = 6857
  scala> foldFactors[List[Long]](n)(Nil)(_ :: _)
  res33: List[Long] = List(6857, 1471, 839, 71)

To be realistic, allFactors is really our best solution. It returns the factors as List[Long], which means that we get foldLeft for free, along with everything else that List provides. But this was an irresistible opportunity to point out that higher-order functions and type parameterization address the DRY principle that we already know from OO.
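As a small bonus, other summaries fall out of foldFactors just by varying the zero and composition function; for example (the definition is repeated so the sketch stands alone, and the two derived names are mine):

```scala
def foldFactors[T](t: Long)(z: T)(f: (Long, T) => T): T = {
  def loop(q: Long, acc: T, d: Long): T =
    if (q == 1) acc
    else if (q < d * d) f(q, acc)
    else if (q % d == 0) loop(q / d, f(d, acc), d)
    else loop(q, acc, d + 1)
  loop(t, z, 2)
}

// Number of prime factors (with multiplicity), and their product
// (which should reconstruct the original argument).
def factorCount(n: Long): Int    = foldFactors[Int](n)(0)((_, c) => c + 1)
def factorProduct(n: Long): Long = foldFactors[Long](n)(1L)(_ * _)
```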

To be continued

I think that's enough to digest for now. There's another punch line that resulted from exploring this problem, but it deserves its own write-up. If you want a head-start on it, think about the fact that the result of allFactors is a bit inconvenient for some purposes, especially when the argument has many factors, as in:

  scala> allFactors(86400)
  res35: List[Long] = List(5, 5, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2)

Contrast that with the "normal" Mathematical representation of 5² × 3³ × 2⁷ for a hint. That's where I found myself reaching for the hybrid nature of Scala.

I noticed that Basildon Coder had a post on this same problem. I held off reading it until after working through my own approach, so that I could look at the similarities and differences in two independently-created solutions. Perhaps you'll find that interesting as well.

Go with the flow

The second problem from Project Euler asks for the sum of the even Fibonacci numbers which are at most 4,000,000.

An aside on the numbers

The problem’s description of the Fibonacci sequence is flawed (or at least non-standard); the problem statement begins the sequence with 1 and 2, showing the first 10 terms as:

1, 2, 3, 5, 8, 13, 21, 34, 55, 89

It is common to see the definition given as:

fib(1) = 1, fib(2) = 1, fib(n) = fib(n-1) + fib(n-2)

I’m persuaded by Dijkstra’s line of reasoning, and prefer to use the definition…

fib(0) = 0, fib(1) = 1, fib(n) = fib(n-1) + fib(n-2)

…not only for the benefit of a zero origin, but also because it invites the mind to consider completing the story. The equations…

fib(n) = fib(n-1) + fib(n-2)

…and…

fib(n) - fib(n-1) = fib(n-2)

…are clearly equivalent, but the second one may help us realize that we can extend the conventional list to be infinite in both directions…

n        -7   -6   -5   -4   -3   -2   -1    0    1    2    3    4    5    6    7
fib(n)   13   -8    5   -3    2   -1    1    0    1    1    2    3    5    8   13

…making fib a total function over the integers. I’ll put off discussing that interesting option until another time, and for the rest of this post use only natural numbers.

Basic solutions

The naive recursive function for the nth Fibonacci number…

  def fibR(n: Int): Int = {
    if (n == 0)
      0
    else if (n == 1)
      1
    else
      fibR(n - 1) + fibR(n - 2)
  }

…requires exponential time to evaluate. We could go for a tail-recursive (iterative) version…

  def fibI(n: Int): Int = {
    def fib0(i: Int, a: Int, b: Int): Int =
      if (i == n) a else fib0(i + 1, b, a + b)
    fib0(0, 0, 1)
  }

…which executes in O(n) time. That’s an improvement, but still leaves us dealing with the Fibonacci sequence one number at a time.

Gently down the stream

The problem calls for a sum across multiple values from the Fibonacci sequence, and both of the above functions calculate “upwards” to reach the specified value. There’s no point in going through the sequence more than once (regardless of how efficiently we can do that), so let’s think of the sequence as a Stream[Int].

Element-wise addition of streams can be done as:

  def add(s1: Stream[Int], s2: Stream[Int]): Stream[Int] =
    Stream.cons(s1.head + s2.head, add(s1.tail, s2.tail))

A stream of the Fibonacci numbers is then:

  val fibS: Stream[Int] = Stream.cons(0, Stream.cons(1, add(fibS, fibS tail)))

And now we’re back to familiar territory from the first problem. We simply filter for the even values, apply the limit, and sum the results:

  scala> fibS.filter(_ % 2 == 0).takeWhile(_ <= 4000000).foldLeft(0)(_ + _)
  res17: Int = 4613732

We could stop with that solution, but there are a couple of additional techniques worth a mention.

“Go back, Jack, and do it again”

First, we could replace the stream with a simple Iterator[Int] that holds a pair of consecutive Fibonacci numbers:

  class Fibs extends Iterator[Int] {
    var state = (0, 1)
    def hasNext = true
    def next = {
      val (a, b) = state
      state = (b, a + b)
      a
    }
  }

Thanks to the Iterator trait, when we implement hasNext (is there another value?) and next (return the next value), we get everything needed to complete the solution:

  scala> new Fibs().filter(_ % 2 == 0).takeWhile(_ <= 4000000).foldLeft(0)(_ + _)
  res20: Int = 4613732

However, this still generates three times as many values as needed, because only every third Fibonacci number is even:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144

How could we generate only the even Fibonacci numbers? Notice this progression in the numbers above:

fib0 = 0
fib3 = 2
fib6 = 8
fib9 = 34
fib12 = 144

So we’re interested in producing the sequence fib3n rather than fibn if we want to obtain the even values. In our Fibs class above, a and b are consecutive values, fibn and fibn+1. From the original definition, we can apply just a little algebra:

fibn+2 = b + a
fibn+3 = fibn+2 + fibn+1 = (b + a) + b = 2 × b + a
fibn+4 = fibn+3 + fibn+2 = (2 × b + a) + (b + a) = 3 × b + 2 × a
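That algebra is easy to check mechanically. In this standalone sketch (the helper fib is just the textbook definition, not from the post), stepping the pair (a, b) = (fibn, fibn+1) by (a, b) → (2b + a, 3b + 2a) should land on (fibn+3, fibn+4):

```scala
// Textbook Fibonacci, for checking against.
def fib(n: Int): Int =
  if (n < 2) n else fib(n - 1) + fib(n - 2)

// Step the pair three places at a time and compare.
var state = (0, 1) // (fib(0), fib(1))
for (n <- 0 to 12 by 3) {
  val (a, b) = state
  assert(a == fib(n) && b == fib(n + 1))
  state = (2 * b + a, 3 * b + 2 * a)
}
```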

If n is a multiple of 3, then (n + 3) is the next multiple of 3. So the above formulas for fibn+3 and fibn+4 give us a way to write an iterator that produces only even Fibonacci numbers:

  class EvenFibs extends Iterator[Int] {
    var state = (0, 1)
    def hasNext = true
    def next = {
      val (a, b) = state
      state = (2 * b + a, 3 * b + 2 * a)
      a
    }
  }

This new iterator eliminates the need for the filter, reducing the solution to:

  scala> new EvenFibs().takeWhile(_ <= 4000000).foldLeft(0)(_ + _)              
  res30: Int = 4613732

This calculates exactly the values of interest, then uses standard methods to provide the sum. These last three versions retain the ability to re-use the parts that generate the Fibonacci numbers (or even Fibonacci numbers) for purposes other than the sum.

For the sake of completeness, here’s the fully optimized version…

  def sumEvenFibs(n: Int) = {
    def iter(a: Int, b: Int, s: Int): Int =
      if (a > n) s else iter(2 * b + a, 3 * b + 2 * a, a + s)
    iter(0, 1, 0)
  }

…which buys performance by giving up that reusability.
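To confirm the optimized version still computes the same answer, here's the same function as a standalone, checkable sketch:

```scala
// Sum the even Fibonacci numbers that are at most n,
// stepping directly through the even values.
def sumEvenFibs(n: Int): Int = {
  def iter(a: Int, b: Int, s: Int): Int =
    if (a > n) s else iter(2 * b + a, 3 * b + 2 * a, a + s)
  iter(0, 1, 0)
}

assert(sumEvenFibs(100) == 44)           // 0 + 2 + 8 + 34
assert(sumEvenFibs(4000000) == 4613732)  // matches the stream/iterator versions
```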

The burden of FP, part 3

Inside out

This series has used the first problem from Project Euler to look at different programming styles; Part 1 asked questions about the obvious imperative solution, and Part 2 used higher-order functions in a functional approach. This part considers performance to take us in a different direction.

Despite their differences in style, all of the previous solutions had the following scheme in common:

[Figure ProjectEuler-1-a: generate the integers below the limit, select the multiples of 3 or 5, then sum the selected set]

Beginning with the positive integers below a given limit, select those that are multiples of two given values, and compute the sum of the selected set. The higher-order function approach allowed us to think about each of the components individually, composing them into a solution. However, this basic generate-select-process pipeline may hide some wasted effort.

There and back again

First, notice that the problem specified multiples of 3 and 5, but the filtering used division, in the guise of a remainder calculation. This works only because the multiple-of property has an easy-to-compute inverse.

Second, just under half of all positive integers (7/15 to be exact) actually fall into the selected set. The generate-and-test scheme becomes more wasteful as the yield becomes smaller. For example, if the problem had been to sum multiples of 97 and 113, then most of the generated numbers would be discarded.
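The 7/15 figure follows from inclusion-exclusion: of every 15 consecutive integers, 5 are multiples of 3 and 3 are multiples of 5, with one (the multiple of 15) counted twice, so 5 + 3 − 1 = 7. A one-line check over one full period:

```scala
// One full period of 15 contains exactly 7 multiples of 3 or 5:
// 3, 5, 6, 9, 10, 12, 15.
val hits = (1 to 15).count(i => i % 3 == 0 || i % 5 == 0)
assert(hits == 7)
```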

Thinking along these lines suggests another scheme, illustrated below:

[Figure ProjectEuler-1-b: generate the two streams of multiples, merge them (removing duplicates), apply the limit, then sum]

Instead of generating and discarding failed candidates, this approach directly generates the two sets of multiples (e.g., of 3 and 5). Merging those two sets will ensure that duplicates (e.g. 15) are only included once. The upper limit is applied to the merged values, and the results are summed.

I want to maintain separation of concerns here, so I’m going to pretend that we don’t know how to limit the sets of multiples (i.e., that we can’t easily compute the largest multiple of 3 less than 1000). In other words, I’m going to take this opportunity to model the multiples as unbounded sequences, using Scala’s Stream type, which provides a lazy list.

Given integer expressions init and incr, evaluating Stream.from(init, incr) returns the stream of values beginning with init and stepping by incr. For example:

  scala> val odds = Stream.from(1, 2)
  odds: Stream[Int] = Stream(1, ?)
  scala> odds take 20 print
  1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33, 35, 37, 39, Stream.empty

So the stream of multiples of a given value are easy to produce:

  scala> val threes = Stream.from(3, 3)
  threes: Stream[Int] = Stream(3, ?)
  scala> threes take 20 print
  3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57, 60, Stream.empty

So much for the first part of this design!

Don’t merge the streams!

Ignoring the above advice, here’s a simple recursive function to merge two streams of increasing integers:

  def merge(as: Stream[Int], bs: Stream[Int]): Stream[Int] = {
    val h = as.head min bs.head
    Stream.cons(h, merge(as dropWhile {_ == h}, bs dropWhile {_ == h}))
  }

At any point, the merged stream contains the minimum of the input streams followed by merging whatever follows that minimum in both of the original streams.

That kind of terse, recursive definition is fairly common in FP, and is sufficiently unfamiliar to some that it may be a bit off-putting. At the risk of using a tired analogy, it’s like bike-riding; once you can stay on long enough to get down the driveway, you’ve got it made. (And it’s worth the effort, because you can outrun the kids that are still on foot! 😉 )

As proof of that point, we now have everything we need for the solution to problem one:

  def sumStream(a: Int, b: Int)(n: Int) =
    merge(Stream.from(a, a), Stream.from(b, b)).takeWhile{_ < n}.foldLeft(0){_ + _}

As in part 2, the key is being able to string together the components with minimum fuss. We can easily test the construction of the streams, the merge function, and the limiting via takeWhile (we’ve exercised foldLeft in the previous post). Because there is no shared state, composing them does not alter their behavior.
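Here's what such component checks might look like as a standalone sketch. Note one assumption: this version is written against LazyList, the replacement for Stream in newer Scala versions, so the spelling differs slightly from the code above.

```scala
// merge, restated for LazyList (Stream's modern replacement).
def merge(as: LazyList[Int], bs: LazyList[Int]): LazyList[Int] = {
  val h = as.head min bs.head
  // Keep the minimum, then merge whatever follows it in both inputs.
  h #:: merge(as.dropWhile(_ == h), bs.dropWhile(_ == h))
}

def sumStream(a: Int, b: Int)(n: Int): Int =
  merge(LazyList.from(a, a), LazyList.from(b, b)).takeWhile(_ < n).foldLeft(0)(_ + _)

// The merged multiples of 3 and 5; duplicates (15, 30, ...) appear once.
assert(merge(LazyList.from(3, 3), LazyList.from(5, 5)).take(10).toList ==
       List(3, 5, 6, 9, 10, 12, 15, 18, 20, 21))
assert(sumStream(3, 5)(10) == 23) // 3 + 5 + 6 + 9
assert(sumStream(3, 5)(1000) == 233168)
```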

Twice-told tales

A persistent myth is that high-level languages require more resources than lower-level languages. Although the stream-based solution uses objects to manage future state, that was an explicit tradeoff to avoid producing unsuccessful candidate values. If we are really concerned with performance, functional programming will still support us as we manually optimize our solution.

A stream of multiples can be represented simply by the head of the virtual stream (the next multiple) and the increment. Iterating through the stream is accomplished by adding the increment to the head, so dropWhile amounts to deciding whether the next head of a stream should be the incremented head or the current head.

Applying those special-case representations to our merging strategy, we get this solution:

  def sumRec(a: Int, b: Int)(n: Int) = {
    def iter(ma: Int, mb: Int, t: Int): Int = {
      val m = ma min mb
      def next(mx: Int, x: Int) =
        if (mx == m) mx + x else mx
      if (m < n)
        iter(next(ma, a), next(mb, b), t + m)
      else
        t
    }
    iter(a, b, 0)
  }

The local variable m is the head of the merged streams, the inner next function accomplishes the equivalent of dropWhile, and the iter function simultaneously merges the streams (represented by ma and mb) and accumulates the total (in t).

The iter function is directly tail-recursive; it either terminates with the result or recursively invokes itself with adjusted arguments. Because of this, the Scala compiler actually compiles iter into a tight loop occupying only a handful of JVM bytecodes. It is by far the fastest of all the solutions we’ve examined!
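For the skeptical reader, here's sumRec as a standalone, checkable sketch, verified against a hand-computable small case and the full problem:

```scala
// Merge two virtual streams of multiples, represented only by their
// next values (ma, mb), accumulating the total in t.
def sumRec(a: Int, b: Int)(n: Int): Int = {
  def iter(ma: Int, mb: Int, t: Int): Int = {
    val m = ma min mb
    // Advance whichever stream(s) supplied the minimum.
    def next(mx: Int, x: Int) = if (mx == m) mx + x else mx
    if (m < n) iter(next(ma, a), next(mb, b), t + m) else t
  }
  iter(a, b, 0)
}

assert(sumRec(3, 5)(10) == 23)       // 3 + 5 + 6 + 9
assert(sumRec(3, 5)(1000) == 233168) // the Problem 1 answer
```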

Take my code, please!

For me, there’s a punch line to this series. As implied above, I thought of this last version only after writing sumStream; I created sumRec by rethinking the code to take advantage of the special cases in the problem. I find it interesting that the fastest function was created by first completely ignoring performance and thinking about making the design as clean and modular as possible.

As I’ve believed for years, it’s easier to make correct code faster than to make fast (but buggy) code correct.

The burden of FP, part 2

Do you have anything to declare?

Part 1 looked critically at an imperative-style solution to the first problem from Project Euler. This part of the discussion introduces a more functional style, using higher-order functions. The goal is to think more declaratively, familiar territory to any programmer who has used SQL.

In the solution from last time…

  def sumi(n: Int) = {
    var s = 0
    for (i <- 1 until n) {
      if (i % 3 == 0 || i % 5 == 0)
        s += i
    }
    s
  }

…it’s useful to think of the for expression as a query, instead of an imperative control structure. The subexpression 1 until n isn’t hardwired syntax of the for statement, but an expression for a set of values. Just as a program written against a database can be made simpler by enhancing the query, we can simplify the body of the for expression by moving the filter, as follows:

  def sumiq(n: Int) = {
    var s = 0
    for {
      i <- 1 until n
      if i % 3 == 0 || i % 5 == 0
    } s += i
    s
  }

That change is bigger than it might seem at first glance; it applies a filter to a set of values instead of applying a control structure to a single value. We can see this even more clearly by going to the Scala interpreter.

  scala> 1 until 10
  res0: Range = Range(1, 2, 3, 4, 5, 6, 7, 8, 9)

The input expression evaluates to an instance of Range; its values are subsequently pulled out and displayed by the interpreter. A sequence of values can be further filtered by a criterion, either written in-line (anonymously)…

  scala> (1 until 10).filter(i => i % 3 == 0 || i % 5 == 0)
  res1: Seq.Projection[Int] = RangeF(3, 5, 6, 9)

…or defined explicitly:

  scala> val divisibleBy3or5 = (i: Int) => i % 3 == 0 || i % 5 == 0
  divisibleBy3or5: (Int) => Boolean = <function>
  scala> (1 until 10).filter(divisibleBy3or5)
  res2: Seq.Projection[Int] = RangeF(3, 5, 6, 9)

Scala allows us to write single-argument methods as if they were operators, so the last input above can be simplified to:

  scala> 1 until 10 filter divisibleBy3or5
  res3: Seq.Projection[Int] = RangeF(3, 5, 6, 9)

So we now have a variety of ways to get the sequence of “interesting” values; for example, we could expose the “multiples of” concept as curried parameters to get something resembling this…

  def sumMultiples(a: Int, b: Int)(n: Int) = {
    val multiples = (i: Int) => i % a == 0 || i % b == 0
    val candidates = 1 until n filter multiples
    var s = 0
    for (i <- candidates) s += i
    s
  }

…but we still have in-line, imperative-looking code accumulating the sum.

Folding the result

We can extract the sum calculation by defining our own sum-of-sequence-of-integers function…

  def sumSeqInt(ns: Seq[Int]) = {
    var s = 0
    for (n <- ns) s += n
    s
  }

…but why reinvent the wheel? The function sumSeqInt is just a specific example of a more general pattern. Calculating a product, or building a List[Int], has exactly the same structure; we only need to provide the appropriate initial value and operation.

  Required result   Initial value   Operation
  Sum               0               +
  Product           1               *
  List              Nil             ::
That higher-order pattern is a fold, and comes in two flavors, depending on whether the accumulation begins at the left- or right-hand end of the sequence. As shown below, the symbolic notations (/: for foldLeft and :\ for foldRight) are visual mnemonics for the expression trees they represent, with the initial value of z (for “zero”) at the bottom left or the bottom right of the tree.

[Figure foldLfoldR: expression trees for foldLeft (/:) and foldRight (:\), with the initial value z at the bottom left or bottom right]
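To make the table concrete, here's each row expressed as a fold; this quick sketch uses the named methods foldLeft and foldRight, which are what the /: and :\ notations invoke:

```scala
val ns = List(1, 2, 3, 4, 5)

val sum     = ns.foldLeft(0)(_ + _)                 // initial value 0,   operation +
val product = ns.foldLeft(1)(_ * _)                 // initial value 1,   operation *
val copy    = ns.foldRight(List.empty[Int])(_ :: _) // initial value Nil, operation ::

assert(sum == 15)
assert(product == 120)
assert(copy == ns)
```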

Using a fold, we can now express a complete solution, including parameters for the multipliers, in the form below…

  def sumfl(a: Int, b: Int)(n: Int) =
    (0 /: (1 until n filter(i => i % a == 0 || i % b == 0))) {_ + _}

…which is fairly close to the specification: “the sum of positive integers below n which are divisible by a or b”.
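For readers on newer Scala versions, where the /: notation has been retired, here's the same function spelled with the named method foldLeft, plus a check against the problem's known answer:

```scala
// Identical to sumfl above, with foldLeft in place of /: .
def sumfl(a: Int, b: Int)(n: Int): Int =
  (1 until n).filter(i => i % a == 0 || i % b == 0).foldLeft(0)(_ + _)

assert(sumfl(3, 5)(10) == 23)       // 3 + 5 + 6 + 9
assert(sumfl(3, 5)(1000) == 233168) // the Problem 1 answer
```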

“The soul of wit”

That last version certainly looks very different from typical imperative code written in C or Java. The point is not to be cryptic, but to be concise. Improving the “signal-to-boilerplate ratio” brings two benefits:

  1. We lower the risk of errors (typos or otherwise), which are usually proportional to the quantity of code, and
  2. If (when!) the specification changes, we can be more efficient about finding and making the required code changes.

From one perspective, higher-order functions allow us to eliminate explicit control structures, simplifying the organization of the code. From a different angle, they allow us to think in terms of whole data structures, rather than individual data elements (avoiding the functional analogue of primitive obsession).

In the final post in this series, we’ll come at the problem from a different angle, which allows us to address performance.

The burden of FP

A rude awakening…

When the fire alarm sounded at about 2:00 AM, I didn’t want to think about electrical engineering, hydrology, or locksmithing. I wanted a quick fix.

The tiniest of pinhole leaks in the attic water heater drain line had released a fine mist, some of which escaped the footprint of the catch pan. Condensing on the attic floor, it accumulated to the point of dripping through the crack between boards, trickling down to the upstairs hall ceiling and through the hole where the wiring ran from the smoke detector to the alarm system. Smoke detectors don’t like water.

(Of course, that little exercise in house debugging, with no advance warning and severe time pressure, is totally unlike anything experienced by a professional software developer. 😉 ) And that (believe it or not) brings me to Project Euler and the title of this post. I had heard of Project Euler before, but this post enticed me to visit the site. One look and I was hooked.

The site offers a series of little mathematical problems of gradually increasing difficulty. The result of each task is a number, which you submit for verification that you completed the task. The point, of course, is to think beyond the obvious brute force solution to produce one which continues to be feasible as the task scales.

All in all, the site is a nice collection of progressively larger lab rats. I know that some folks don’t care for such problems, but the very first one rewarded my attention.

Problem 1

My paraphrase: “Sum the positive integers below 1000 that are multiples of 3 or 5.” Simple (to the point of triviality) for any practicing programmer. So there must be more to it than the obvious (and very procedural) Scala implementation…

  def sumi(n: Int) = {
    var s = 0
    for (i <- 1 until n) {
      if (i % 3 == 0 || i % 5 == 0)
        s += i
    }
    s
  }

…and there is. (The ‘i‘ on the end of sumi stands for “imperative”.)

Mathematics and programming

Student programmers are often guided to look for general solutions, especially by their more mathematically-inclined teachers. But generalizing requires thinking about what kinds of generality may be needed, thinking about what is likely to change. OOP’s polymorphism and inheritance encourage us to think about kinds of generality that are harder to express in procedural programming. FP allows easy exploration of even more kinds of change. But having more options means having to make more choices. I found it interesting to take a hard look at the above code to see how many issues were hard-wired into its structure. FP makes it easier to break those issues apart, but at the cost of thinking more abstractly (math-speak for “more generally”).

Highlighting the issues

At the risk of consuming a fair bit of virtual ink, I want to identify some aspects of the task, highlighting the parts of the code that deal with each aspect.

  1. Sum the positive integers below 1000 that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    I’ve already cheated a little; the parameter n allows us to replace the 1000 with a different limit.

  2. Add all the positive integers below 1000 that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    The generator 1 until n produces exactly the values one up to (but not including) n.

  3. Sum the positive integers below 1000 that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    Confession time: obviously I made the subconscious assumption that the 1000 was more likely to change than the 3 or 5.

  4. Sum the positive integers below 1000 that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    I unconsciously represented the concept of “being a multiple of” with “has a remainder of zero when divided by”. It makes me ponder how often our programs completely replace a problem domain concept with an implementation choice without even noticing that we’ve done so.

  5. Sum the positive integers below 1000 that are multiples of 3 or 5.
      def sumi(n: Int) = {
        var s = 0
        for (i <- 1 until n) {
          if (i % 3 == 0 || i % 5 == 0)
            s += i
        }
        s
      }
    

    The (single!) idea of producing a sum has become multiple steps: initializing and updating a variable, and retrieving its value as the answer.

Functions and patterns

While I wholeheartedly appreciate the contributions of the pattern movement to OO, the approach is evolutionary, not revolutionary. A few decades ago even the humble for-loop had to be recognized as expressing a standard solution to a recurring need. In the FP world, higher-order functions often provide the same sort of “glue” expressed in some patterns. For example, pipes-and-filters is function composition in OOP clothing.

The little sumi method above is certainly a function from its argument n to the corresponding sum. But the “function-like” quality stops at the outermost curly braces, because sumi is almost entirely conceived in terms of imperative commands acting on mutable state (i.e., side effects). That has some subtle consequences when compared with an approach that tries to “be functional” all the way down, in the same way that OO changed our perspective when we began to design in terms of objects all the way down.

I believe that most of us would recognize sumi as belonging to this general design:

  • generate a set of values,
  • test those values to select the desired subset, and
  • perform a specified action on that subset,

but those aspects of the design are not cleanly, independently represented in the code. Functional programming offers us a way to do that.

The next post will offer some examples as justification of that claim. Meanwhile, I need to catch up on the sleep interrupted by the fire alarm!

The life of a programmer

Before I end this post, let me close the fire-alarm connection. It’s easy to slip into fire-fighting (or fire-alarm-fighting) mode in day-to-day programming. And when doing so, it’s easy to become impatient with anything that doesn’t lead in a straight line to working code. I recall a colleague’s remark in a discussion about resolving a problem with a production system. Different participants offered a series of suggestions, all beginning with, “Well, you could…”, until he finally blurted out, “I don’t need more options; I need fewer options!”

Sometimes fixing the immediate problem quickly is the right thing to do, in the near term. I unplugged the shorted smoke detector, put a piece of plastic in place to collect the mist and deposit it into the catch pan, and went back to bed. But the second step is to go beyond the symptoms to the root cause and fix that. The plumber has replaced the failed pipe, and the technician with a replacement smoke detector is on the way.

But the third step is to prevent the problem from happening in the first place. Or, as the agile guys say, “Deliver tested working code quickly. But leave yourself set up properly to do the same thing on the next cycle!”

Maybe plastic pipe isn’t the best choice for difficult-to-reach applications, especially when elevated temperatures are involved.