The idea of proving programs correct has been around (and hotly debated) for roughly forty years. The irony is that some proof advocates anticipated by more than thirty years what the agile advocates are now saying:
Designing with verification in mind improves the quality of the resulting design.
To be even more specific, compare these statements:
"Design the proof, then write the code so that it meets the obligations of the proof." —Edsger Dijkstra (paraphrased)

"Write the test first, then write the code so that it passes the test." —Kent Beck (paraphrased)
I’ve experienced the benefits of following both pieces of advice. Let me apply this approach to a little bit of design which I’ll then use in the Project Euler series. There’s a small amount of algebra involved, but if you don’t want to bother with that, let me request that you at least look at the last section for the punchline.
Sum thing
Consider the following sums, where i in each case ranges from 1 to n:
| Power | Sum | Expanded form | Closed form | Factored form |
|-------|-----|---------------|-------------|---------------|
| 0 | ∑ i^{0} | 1 + 1 + 1 + … + 1 | n^{1} | n |
| 1 | ∑ i^{1} | 1 + 2 + 3 + … + n | ^{1}/_{2} n^{2} + ^{1}/_{2} n^{1} | n(n+1)/2 |
| 2 | ∑ i^{2} | 1^{2} + 2^{2} + 3^{2} + … + n^{2} | ? | ? |
| 3 | ∑ i^{3} | 1^{3} + 2^{3} + 3^{3} + … + n^{3} | ? | ? |
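Before deriving anything, we can check the two known rows by brute force. Here's a quick sketch (my own Python, not from the original derivation); `power_sum` is a hypothetical helper name:

```python
# Brute-force the sums from the table and verify the two known closed forms.
def power_sum(n, p):
    """Sum of i**p for i from 1 to n."""
    return sum(i**p for i in range(1, n + 1))

for n in range(0, 20):
    assert power_sum(n, 0) == n                  # row 0: factored form n
    assert power_sum(n, 1) == n * (n + 1) // 2   # row 1: n(n+1)/2
```

The same brute-force function gives us an oracle to test any closed forms we derive for rows 2 and 3.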
Let’s assume that we need the solutions for the last two rows (and Google is down at the moment 😉). Using the “designing with verification in mind” approach, we can work them out for ourselves.
Sum of squares
Noticing that ∑ i^{0} is a polynomial of degree 1, and ∑ i^{1} is a polynomial of degree 2, we might suspect that the sum of i^{2} is a polynomial of degree 3. If so, the problem amounts to finding the coefficients for ∑ i^{2} = an^{3} + bn^{2} + cn + d. Let’s assume that as the structure of the solution, and apply test cases to that assumption.
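We can sanity-check that degree-3 suspicion before committing to it. If ∑ i^{2} really is a cubic in n, its third finite differences must be constant — a quick check in Python (my own sketch, not part of the original argument):

```python
# If sum(i^2) is a cubic polynomial in n, taking finite differences
# three times must yield a constant sequence.
def power_sum(n, p):
    return sum(i**p for i in range(1, n + 1))

seq = [power_sum(n, 2) for n in range(10)]
for _ in range(3):
    seq = [b - a for a, b in zip(seq, seq[1:])]
print(seq)  # every remaining element is the same constant
```

The constant third differences support the degree-3 hypothesis, so finding the coefficients is a sensible next step.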
Test case 0: The sum must be zero when n is zero; a0^{3} + b0^{2} + c0 + d = d, therefore d must be zero. That streamlines the polynomial to ∑ i^{2} = an^{3} + bn^{2} + cn.
Test case 1: Adding (n+1)^{2} to the sum for n must be equal to the result of substituting (n+1) for n in the polynomial.
an^{3} + bn^{2} + cn + (n+1)^{2} = an^{3} + (b+1)n^{2} + (c+2)n + 1

and

a(n+1)^{3} + b(n+1)^{2} + c(n+1) = an^{3} + (3a+b)n^{2} + (3a+2b+c)n + (a+b+c)
These two reduced expressions can only be equal if we can satisfy the equations implied by the coefficients for n^{2}, n^{1}, and n^{0}:
| n^{?} | Coefficients | Satisfied if |
|-------|--------------|--------------|
| n^{2} | b+1 = 3a+b | a = ^{1}/_{3} |
| n^{1} | c+2 = 3a+2b+c | b = ^{1}/_{2} |
| n^{0} | 1 = a+b+c | c = ^{1}/_{6} |
So we have a solution of ^{1}/_{3} n^{3} + ^{1}/_{2} n^{2} + ^{1}/_{6} n, which we can factor to n(n+1)(2n+1)/6.
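In the spirit of verification driving design, we can test both forms of the result against a brute-force sum — exact rational arithmetic via Python's `fractions` avoids any floating-point fudging (this check is my own addition):

```python
from fractions import Fraction

# Brute-force oracle for the sum of squares.
def sum_sq(n):
    return sum(i * i for i in range(1, n + 1))

for n in range(0, 50):
    # Coefficient form: (1/3)n^3 + (1/2)n^2 + (1/6)n
    poly = Fraction(1, 3) * n**3 + Fraction(1, 2) * n**2 + Fraction(1, 6) * n
    # Factored form: n(n+1)(2n+1)/6
    factored = n * (n + 1) * (2 * n + 1) // 6
    assert poly == sum_sq(n) == factored
```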
Sum of cubes
Based on that success, we’ll pursue the coefficients for ∑ i^{3} = an^{4} + bn^{3} + cn^{2} + dn + e.
Test case 0: The sum must be zero when n is zero; a0^{4} + b0^{3} + c0^{2} + d0 + e = e, therefore e must be zero, therefore ∑ i^{3} = an^{4} + bn^{3} + cn^{2} + dn.
Test case 1: Adding (n+1)^{3} to the sum for n must be equal to the result of substituting (n+1) for n in the polynomial.
an^{4} + bn^{3} + cn^{2} + dn + (n+1)^{3} = an^{4} + (b+1)n^{3} + (c+3)n^{2} + (d+3)n + 1

and

a(n+1)^{4} + b(n+1)^{3} + c(n+1)^{2} + d(n+1) = an^{4} + (4a+b)n^{3} + (6a+3b+c)n^{2} + (4a+3b+2c+d)n + (a+b+c+d)
Again, we look at the coefficients for descending powers of n:
| n^{?} | Coefficients | Satisfied if |
|-------|--------------|--------------|
| n^{3} | b+1 = 4a+b | a = ^{1}/_{4} |
| n^{2} | c+3 = 6a+3b+c | b = ^{1}/_{2} |
| n^{1} | d+3 = 4a+3b+2c+d | c = ^{1}/_{4} |
| n^{0} | 1 = a+b+c+d | d = 0 |
So we have a solution of ^{1}/_{4} n^{4} + ^{1}/_{2} n^{3} + ^{1}/_{4} n^{2}, which we can factor to (n(n+1)/2)^{2}.
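Once again the brute-force sum serves as the verifier for the derived formula (my own check, added here for completeness):

```python
# Brute-force oracle for the sum of cubes.
def sum_cubes(n):
    return sum(i**3 for i in range(1, n + 1))

for n in range(0, 50):
    # Factored form: (n(n+1)/2)^2
    assert sum_cubes(n) == (n * (n + 1) // 2) ** 2
```

Notice that the factored form is the square of row 1's n(n+1)/2 — the sum of the first n cubes is the square of the sum of the first n integers.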
The punchline
“That’s no exponential, that’s my polynomial!” But seriously, folks…
The mathematicians among us (those who haven’t died of boredom) will recognize what we’ve done as an informal (hand-waving) inductive proof.
But I hope that the programmers among us will recognize what we’ve done as test-driven development and (re)factoring. Allowing verification to drive design offers a number of benefits, including:
- The result itself is verifiable.
- Each step along the way is more obvious and involves less risk.
- The result is usually cleaner and more understandable.
Incidentally, I first encountered the term “factoring” applied to programs in Leo Brodie’s book Thinking Forth (1984). Given my background in mathematics, it immediately clicked with me. Brodie introduced the idea with a quotation that seems entirely contemporary, once we get past the dated terminology:
“If a module seems almost, but not quite, useful from a second place in the system, try to identify and isolate the useful subfunction. The remainder of the module might be incorporated in its original caller.”
(The quotation is from “Structured Design”, by W.P. Stevens, G.J. Myers, and L.L. Constantine, IBM Systems Journal, vol. 13, no. 2, 1974, © 1974 by International Business Machines Corporation)
As a working developer, I’m excited by the potential of these techniques and the underlying ideas. But as a card-carrying member of the Sid Dabster fan club, I’m constantly amazed at the number of ideas that were actively in play in our field thirty or forty years ago that are now regarded as new, innovative, or even radical.
Recommended reading:
- Test Driven Development: By Example (The Addison-Wesley Signature Series)
- Selected Writings on Computing: A Personal Perspective (Texts and Monographs in Computer Science)
- On the Shape of Mathematical Arguments (Lecture Notes in Computer Science)
- Thinking Forth