Author Archives: joelneely

All ya gotta do…

My older son (also a programmer) called my attention to a bit of JavaScript trivia that illustrates nicely the consequences of subtle interactions among independent, well-intentioned decisions (console transcripts courtesy of node.js).

Everyone agrees that programming languages should give the programmer more control, convenience, and flexibility, right? So all of the following sound like good ideas, don’t they?

Everyone knows that programs should not blow up.

(This is important, among other reasons, because users frequently provide silly input.) Operators should do something reasonable in almost every conceivable situation, and just let the programmer write tests for silly things. Therefore, given

a = "my dog has fleas"

we want the following results:

> a
'my dog has fleas'
> a.substr(0, 2)
'my'
> a.substr(3, 3)
'dog'
> a.substr(42, 7)
''

because the programmer can always check for out-of-bounds results:

> a.length
16
> a.substr(42, 7).length
0

and given

b = [2, 4, 6, 8]

we want these results:

> b
[ 2, 4, 6, 8 ]
> b[0]
2
> b[3]
8
> b[5]
undefined

because the programmer can always check:

> b.length
4
> (typeof b[5]) === "undefined"
true
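Those per-call checks can be collected into small guard helpers; here is a minimal sketch (the names `safeSubstr` and `safeIndex` are mine, invented for illustration, not from the original transcript):

```javascript
// Illustrative guards for the "programmer can always check" pattern.
function safeSubstr(s, start, len) {
  // substr past the end quietly yields '', so test the bound first
  return start < s.length ? s.substr(start, len) : undefined;
}

function safeIndex(arr, i) {
  // out-of-range indexing quietly yields undefined, so test explicitly
  return i >= 0 && i < arr.length ? arr[i] : undefined;
}

const a = "my dog has fleas";
const b = [2, 4, 6, 8];
console.log(safeSubstr(a, 3, 3));  // 'dog'
console.log(safeSubstr(a, 42, 7)); // undefined
console.log(safeIndex(b, 5));      // undefined
```

Of course, the post's point stands: the language never forces anyone to call such guards.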

Everyone knows that floating point arithmetic can overflow.

(Dividing a positive number by zero is a special case of that general situation.) Instead of blowing up the program, let’s just return a unique value that we can check for:

> isFinite(1 / 2)
true
> isFinite(1 / 0)
false

and, of course, an overflow must absorb all subsequent computations:

> 1 / 0
Infinity
> (1 / 0) + 4
Infinity
> ((1 / 0) + 4) * 10
Infinity
> (((1 / 0) + 4) * 10) - 5
Infinity
> ((((1 / 0) + 4) * 10) - 5) / 2
Infinity

Everyone knows that overloaded operators promote familiarity and economy of notation.

Common mathematical and programming practice lead us to expect the following:

> 2 + 3
5
> 2.1 + 3.1
5.2
> "2" + "3"
'23'

Everyone knows that we need automatic type conversions.

It is convenient to have “narrower” values converted automatically to “wider” types when evaluating expressions. So clearly we need this kind of “widening”:

> 2 + 3
5
> 2.1 + 3.1
5.2
> 2 + 3.1
5.1
> 2.1 + 3
5.1

Now that we’ve established that, the obvious next step is to support this kind of “widening” as well:

> 2 + 3
5
> "2" + "3"
'23'
> 2 + "3"
'23'
> "2" + 3
'23'
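When that mixed-type concatenation is not what you want, the usual defense is an explicit conversion before applying `+`; a quick sketch:

```javascript
// '+' concatenates as soon as either operand is a string,
// so force numeric addition with an explicit conversion.
const raw = "3"; // e.g. input that arrives as a string
console.log(2 + raw);               // '23' (concatenation)
console.log(2 + Number(raw));       // 5   (addition)
console.log(2 + parseInt(raw, 10)); // 5   (addition, explicit base)
```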

Everyone knows that we need controllable conversion of strings to numbers.

In addition to:

> parseInt("123")
123

programmers need to deal with binary:

> parseInt("1101", 2)
13

and other computer-oriented bases:

> parseInt("123", 8)
83
> parseInt("123", 16)
291
> parseInt("123", 4)
27
> parseInt("ff", 16)
255

Since we can specify the base explicitly, we might as well support this:

> parseInt("123", 7)
66

or (for those of us who fondly remember the PDP-10):

> parseInt("ff", 36)
555

After all, programmers always know what they’re doing, and should always get exactly what they ask for.

> parseInt( 1 / 0, 19 )
18
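That last result rewards a slow-motion replay; the sketch below unwinds what node does, step by step, using only the rules already shown:

```javascript
// parseInt(1 / 0, 19), one decision at a time:
const x = 1 / 0;     // Infinity -- overflow yields a value, not an error
const s = String(x); // 'Infinity' -- parseInt converts its argument to a string
// Base 19 uses digits 0-9 and a-i, so 'I' is a valid digit (value 18),
// but 'n' is not; parseInt silently stops at the first invalid character.
console.log(parseInt(s, 19)); // 18
```

Each step is a reasonable decision in isolation; together they hand the programmer exactly what nobody asked for.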

No, I’m not hating on JavaScript. I’m observing that complex artifacts, such as programs, programming languages, political systems, and economies, may not respond to “simple” decisions in the ways that we expect. And a pithy statement that encourages reflection and self-restraint

Don’t repeat yourself.

is very different from simplistic responses to non-simple situations.

You don’t need static typing; just write tests.

or

If you want to eliminate run-time problems, just use a statically-typed language.

or even

These problems are the fault of the other party; just vote for my candidate.

The title comes from a co-worker on a large project a few years ago. The effort offered several design challenges, and it seemed all too easy for well-meaning participants (and non-participants) to propose simplistic solutions. After a while, his standard response was to say with a grin, “All ya gotta do…”

Server-side use aside, node.js includes a local JavaScript interpreter and console that offers a convenient tool for learning and experimentation. The e-book What Is Node? (free as of this writing) provides a fast-paced introduction, while Node: Up and Running: Scalable Server-Side Code with JavaScript and Node.js in Action give more detailed coverage.

There’s an Easter-egg link in the article to Dijkstra’s famous Turing Award lecture entitled The Humble Programmer. Despite its length (slightly longer than a typical blog post) and serious tone, it’s worth the time. Care in the construction of our programs is still appropriate, whether we call it “software craftsmanship”, “joint application development”, or “pragmatic programming”.

Even though the title phrase became an abused cliche, Structured Programming (A.P.I.C. studies in data processing, no. 8)
is a classic in our field. Although we have (some) nicer tools at our disposal, a careful reading shows how little the fundamental challenges of programming have changed in 40 years. A collection of Dijkstra’s essays, Selected Writings on Computing: A Personal Perspective paints a broader picture of the issues that have challenged us in roughly that same time frame. It is a mix of technical articles, notes on the computing culture of the day, and amusing (but pointed) parables.


Thinking about code

Among others, Light Table, Code Bubbles, Squeak, and Bret Victor‘s Inventing on Principle talk make me ponder where we’re going as programmers. Yes, I’m not ashamed of the term, even though developer is the currently-fashionable word and programmer has been demeaned by popular mis-use (though not as severely as hacker). But I digress.

On one hand, our infrastructure seems to have developed to the point that we may finally escape the limiting code-as-text-files model. On the other hand, Squeak is in the list to remind myself that Smalltalk got there a long time ago (as did Lisp).


I regard the core challenge of programming as organizing actionable abstractions in a way that I can (re)establish the appropriate context for thinking about them. I regard code complexity as a matter of how much I must keep in my head to understand each single line I examine. Combining those ideas with an infrastructure that can build or retain connections across the code base, between code and helpful narratives, we can envision an environment that can present small, meaningful units of code and keep all of the relevant relationships within immediate reach.

There’s a curmudgeonly voice in the back of my head that tries to connect this possible trend to the warning signs that we live in an increasingly post-literate society that appears to value immediate experience over the patient accumulation of understanding (dare I say wisdom?).

But that is balanced by a hopeful voice that looks to the potential of our tools as a useful response to the increasing complexity and volume of information (and stimuli) with which we must deal.

Sunglasses can filter out harmful UV wavelengths while passing usable light. Spam filters (when they work) filter out contaminants while passing on more useful voices. But these carry the implicit assumption that the hide/view distinction is in principle a static one (even if the learning is dynamic).

But in programming, I need the ability to redefine that distinction constantly, based on the needs of the current task. Better yet, I need a way for the distinction to emerge implicitly from what I do moment by moment. The environments mentioned above seem aimed at affording us that dynamic focusing and filtering ability.

I want to highlight one other property shared by Light Table, Code Bubbles, and the demos of Inventing on Principle, because I believe it crucial to success. Those tools were quite willing to meet us where we are today, with support for programming languages already in widespread use.

The idea of a game-changing way to deal with code is not new. But some of the previous attempts have led to isolationist viewpoints that seemed to ask us to throw away all of our existing code assets and re-think everything in the new language to which the new platform was tied. (FORTH, I’m looking at you.) That’s fine for students and researchers who focus on learning or producing new ideas, but industry and business demand new tools to connect with what they already have and do. A measure of rework may be necessary, tolerable, or even good, but total displacement is seldom survivable.

Programming is an incredibly conservative industry, especially compared with the exponential advances in hardware capabilities. The history of modular, structured, object-oriented, and functional programming adoption suggests roughly a 30-year adoption cycle into mainstream practice. Coincidentally (or not), 30 years also approximates a career generation.

Perhaps the increasing pace of career re-invention and turnover will help speed the adoption of the next Way to think about programming. But I respectfully suggest that it can’t require me to discard my existing code base and retrain or replace all my programmers. That not only raises the cost of entry, it risks ignoring the incredible value of the domain expertise and connections that they also bring to the daily task of crafting code.

I am encouraged to see brilliant approaches to thinking about (or in) code that don’t require us to throw away everything, but allow us to grow into a new level of performance.

Lab Rat Code

My older son has gone back to school to study IT, and we occasionally discuss his courses or internship (though not his homework). As a graphic artist, musician, gamer, and box-builder, he is an experienced user, but thinking as a programmer is new to him. Therefore I find his perspective on programming an interesting counterpoint to my own:

  • I have been programming long enough to have forgotten what seemed clear or opaque when I was a beginner.
  • Our backgrounds (artistic versus mathematical) provided different sets of expectations and metaphors.
  • He can take advantage of, and for granted, an enormous variety of resources that did not exist when I started, from a wealth of alternative programming languages and open-source code to pervasive consumer-level use of the Internet.


He has also encountered a phenomenon that has frustrated me throughout my time as a student, teacher, and practitioner: lab rat code.

Illustrative code in blogs, articles, and textbooks is often unrealistic for the same reason that physics homework refers to friction-free pool tables and the stereotypical psychology lab is filled with rats and mazes. A writer who wants to illustrate a technique needs to apply it to a task that is simple enough not to create distraction. Stated from the other perspective, a reader who doesn’t understand the goal will likely not appreciate the path.

So the canonical first program in a new language prints “Hello, world!” to standard output, often soon followed by YAFG (Yet Another Fibonacci Generator).

The challenge remains for the reader to ignore distraction. I’d like to offer some strategies that I find helpful in that role.

  1. If the sample task seems too trivial, then the author succeeded in picking a goal that doesn’t require much of your attention. Stop reading long enough to sketch out what you regard as the obvious solution, then resume reading to see if the author’s solution offers you any new insights.
  2. If the task seems unfamiliar, then the author may have used more detail than necessary. Skim the problem statement, then examine the solution to see how many of those details actually matter for the illustration.
  3. If the solution seems too heavy, the author may be illustrating it on a problem that doesn’t require its full power. Check to see whether the author follows up with a harder problem that exploits more of the solution’s capabilities. Or create your own example of a harder problem with similar characteristics. Delaying the natural tendency to think, “I can solve this more easily another way!” may provide an opportunity to understand the reasons for the apparent complexity.
  4. Develop a tolerance for uncertainty. I have developed a better appreciation for some concepts only through repeated exposure. (It really doesn’t matter whether that statement is about the concept or about me. The result was worth the process.) Similarly, I have sometimes learned something interesting from a solution to a problem outside my experience or interest. Over time you may develop a sense for what you can safely ignore.

The author who wants to reduce the risk of distraction might consider some of these strategies:

  1. Avoid cliches like the plague. Instead of offering YAFG, come up with a fresher example (unless you know your readers really LIKE Fibonacci numbers.) Which brings me to my next cliche…
  2. Know your audience. I have heard busy practitioners dismiss a technique because (IMHO) they had never been shown its application to a problem about which they cared. A bill-of-materials example might illustrate recursion to an industrial programmer far better than parsing or binary tree traversal.
  3. When using a lab-rat example, be explicit about that fact. At least then your reader will be forewarned.
  4. If realism, or an agenda beyond the single example, prompts you to include more detail than necessary for the current illustration, consider the structure of your presentation. Can those details be delayed? If not, then being explicit about which ones are important to the solution may help your reader avoid bogging down on the less relevant ones.

I’ll try to apply those practices myself, both as reader and writer.

Protect Innocence against Persecuting Assault

Convenience stores sometimes get robbed. That’s wrong.

But suppose lobbyists for the convenience store industry got congress to pass legislation that would authorize the stores to:

  1. Keep assault weapons under the counters;
  2. Use them at will under a “shoot first, ask questions later” policy; and
  3. Exempt them from responsibility for using deadly force if they could say, “he/she looked suspicious to me!”

That is the equivalent of what the backers of SOPA / PIPA are attempting to do.

Don’t just take my word for it. Please watch the video from Khan Academy that explains the reach and risks of these proposals.

Stop PIPA.

But don’t stop there.

(And now, back to our regularly-scheduled programming…)


Stop Oligopolies and Paranoia, America

There was a time when saying, “This is a nation governed by law”, brought honor and pride.

That statement was associated with many others, such as, “All persons are equal before the law”, that emphasized that the same rules applied to everyone, regardless of economics, education, race, or any of the other attributes that in various times and places have been used to diminish the access of some to justice, opportunity, and fair treatment.

Many in this country still hold to those principles, for which I am grateful. As one who grew up during the Civil Rights Movement, I honor the courage of those who used the rule and power of law to move a society toward justice. That journey is not complete, but equality before the law has been a powerful aid along the path to freedom.

But freedom and equality, as with physical health, clean dishes, a mowed lawn, and a bug-free code base, are not persistent states. They require constant scrutiny and action, both preventative and corrective.

Legislative corruption threatens to replace the bright vision of rule by law with the dark specter of power for sale to the highest bidder. Uncontrolled lobbying and so-called “campaign contributions”, unlimited terms of office, and the apparent willful technological ignorance of many in congress, threaten to bring the grim scenarios of the cyberpunk genre to reality.

I do not advocate violating the law. I respect the need for a society to have structures in place by which creativity–artistic or technical–can be encouraged. And that includes paying fairly for its benefits.

But I regard as misguided any attempt to pervert our legal process to allow preemptive or punitive action without due process. I regard as toxic to health and progress the attempts to provide artificial protection for obsolete business or technological structures that resist change simply because people with money want to keep making more money in the same way.

And I regard as fatal to any virtue in the concept of rule by law the risk that our legislative, regulatory, and enforcement institutions are for sale.

Stop SOPA.

But don’t stop there.

(And now, back to our regularly-scheduled programming…)

Technical deficit spending


Hungry and in a hurry, I dropped into the diner and ordered eggs and toast. The server returned shortly with an electric skillet, a toaster, two whole eggs, and two slices of bread. As I cracked the eggs into the skillet, the server came back along the counter with a bag of coffee beans and a grinder, asking, “Want coffee with that?”

At this point, you’re probably questioning my choice of restaurant. Especially if I make a habit of eating there. And what does this have to do with software development?

Deficit Spending

The concept of technical debt is well established by now. In the best case it means buying a near-term benefit by deliberately taking on a future responsibility, as in, “I’ll take out a loan to replace my monster truck with a hybrid, and will make the payments out of what I’m saving in gas!” Sometimes we take a short-cut in the interest of advancing a project, but also know that we’ll pay it back later.

It’s more troublesome when the obligation is created by someone who will not have to deal with the consequences. To put it bluntly, separating a decision from its consequences is a recipe for bad design. It is both easier for the decision-maker to add costs and harder for the ones who will bear those costs to see the debt growing.

I propose to borrow the term “deficit spending” to refer to technical debt imposed by someone else. And to keep this from being an abstract rant, I’ll follow with a few additional posts to identify kinds and causes of technical deficit spending.

Why Data Structures Matter

Our experience on Day 0 of JPR11 yielded a nice example of the need to choose an appropriate implementation of an abstract concept. As I mentioned in the previous post, we experimented with Michael Barker’s Scala implementation of Guy Steele’s parallelizable word-splitting algorithm (slides 51-67). Here’s the core of the issue.

Given a type-compatible associative operator and sequence of values, we can fold the operator over the sequence to obtain a single accumulated value. For example, because addition of integers is associative, addition can be folded over the sequence:

1, 2, 3, 4, 5, 6, 7, 8

from the left:

((((((1 + 2) + 3) + 4) + 5) + 6) + 7) + 8

or the right:

1 + (2 + (3 + (4 + (5 + (6 + (7 + 8))))))

or from the middle outward, by recursive/parallel splitting:

((1 + 2) + (3 + 4)) + ((5 + 6) + (7 + 8))
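Because addition is associative, all three groupings produce the same value, which a quick node check confirms:

```javascript
// The same fold, associated three different ways:
const left  = ((((((1 + 2) + 3) + 4) + 5) + 6) + 7) + 8;
const right = 1 + (2 + (3 + (4 + (5 + (6 + (7 + 8))))));
const tree  = ((1 + 2) + (3 + 4)) + ((5 + 6) + (7 + 8));
console.log(left, right, tree); // 36 36 36

// The built-in left fold over the same sequence:
console.log([1, 2, 3, 4, 5, 6, 7, 8].reduce((a, b) => a + b)); // 36
```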

A 2-D view shows even more clearly the opportunity to evaluate sub-expressions in parallel. Assuming that addition is a constant-time operation, the left fold:


and the right fold:


require linear time, but the balanced tree:


can be done in logarithmic time.

But the associative operation for the word-splitting task involves accumulating lists of words. With a naive implementation of linked lists, appending is not a constant-time operation; it is linear on the length of the left operand. So for this operation the right fold is linear on the size of the task:


the left fold is quadratic:


and the recursive/parallel version is linear:


Comparing just the “parallel-activity-versus-time” parts of those diagrams makes it clear that right fold is as fast as the parallel version, and also does less total work:


Of course, there are other ways to implement the sequence-of-words concept, and that is the whole point. This little example provides a nice illustration of how parallel execution of the wrong implementation is not a win.
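To make the cost asymmetry concrete, here is a sketch in JavaScript (standing in for the Scala original) that instruments an append whose cost, like the naive linked list's, is linear in its left operand; the fold helpers and the cost counter are my own illustrative names:

```javascript
// Count elements traversed by each append; for naive linked lists the
// cost of append is linear in the LEFT operand only.
let cost = 0;
const append = (a, b) => { cost += a.length; return a.concat(b); };

// 1024 one-word lists: the shape produced by word splitting.
const words = Array.from({ length: 1024 }, (_, i) => ["w" + i]);

const measure = (fold) => { cost = 0; fold(); return cost; };

const leftFold  = () => words.reduce((acc, w) => append(acc, w), []);
const rightFold = () => words.reduceRight((acc, w) => append(w, acc), []);
const treeFold  = (xs) => {
  if (xs.length === 1) return xs[0];
  const mid = xs.length >> 1;
  return append(treeFold(xs.slice(0, mid)), treeFold(xs.slice(mid)));
};

console.log(measure(leftFold));              // 523776: quadratic in n
console.log(measure(rightFold));             // 1024: linear in n
console.log(measure(() => treeFold(words))); // 5120: (n/2) * log2(n) total work
```

Note that parallelism shrinks the tree version's depth (wall-clock time), but its total work stays above the right fold's, which is exactly the comparison the diagrams make.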

The title of this post is a not-very-subtle (but respectful) reference to “Why Functional Programming Matters”, an excellent summary by John Hughes that deserves to be more widely read.

Purely Functional Data Structures, by Chris Okasaki, covers a nice collection of algorithms that avoid the trap mentioned above.


Also, video from this year’s Northeast Scala Symposium is now on-line for the session on functional data structures in Scala, presented by Daniel Spiewak.

JPR 2011 Day 0

Despite the winter storm and avalanche warnings in the Crested Butte area, the flight into Gunnison went off without a hitch. The last few miles of descent were as bumpy as an old dirt road, but the sky was mostly clear, as were the roads leaving the airport.


A few miles uphill, the story was a bit different, but our Alpine Express driver got us to our door smoothly and safely.


Day Zero is traditionally reserved for explorations of new technologies, alternate languages on the JVM, and collaborative skills-building. I joined a group that focused on Scala. We talked about Guy Steele’s parallel string-splitting algorithm as presented at StrangeLoop 2010, got a demo of parallel collections in Scala 2.9, and worked on Scala koans.


After I pointed the folks at my table to Steele’s published slides and summarized the algorithm, Michael Barker immediately re-coded it in Scala and we started looking at performance. More on that later.

Inspired by the excellent Ruby koans work led by Jim Weirich, Dick Wall had begun working on Scala koans at a previous year’s CodeMash. Dianne Marsh took up the reins; the informal team continues to refine and add to the material, and welcomes additional participants. The Scala koan collection is currently a work in progress; despite a rough edge or two, I really like this approach to ease into the thought processes of a language and very much appreciate the work of the team.

The end-of-day semi-lightning-talk presentations demonstrated a breadth of interests and subjects that will likely help shape this year’s Roundup:

  • Michael and I discussed our observations on the performance of the parallel string splitter;
  • Dianne and Dick described the Scala koans and their current status;
  • Joe Sundow summarized the jQuery and JavaScript MVC explorations that he had led through the day;
  • James (“I’m just a Flex guy!”) Ward showed how he’s using Spring and Hibernate on a current demo project;
  • Fred Simon gave a quick demo of Fantom, including its ability to compile to JavaScript.

Handerson Gomes and Jim Hurne get my “Wow!” vote for a joint presentation which really made us all sit up and take notice. In the course of one day, they downloaded the Android development kit, got themselves up to speed, and built and tested a small app which included audio and touch-screen gestures.

It was a great start to what promises to be an excellent week. In other words, it was typical for a Roundup!

Review of Best iPad Apps: The Guide for Discriminating Downloaders by Peter Meyers

“Horseless carriage.” Hold that thought.


I got my iPad as a tool to accomplish things with more mobility and efficiency, not to spend time wandering virtual supermarket aisles looking for the shiniest variation on a theme. Given that, I was immediately attracted by this title from O’Reilly. Both the author, Peter Meyers, and the publisher have the right credentials and reputation to address my need.

The book

The content is well organized, with chapters (and sections) that support both leisurely browsing and focused navigation: At Work, At Leisure, Creative Corner, At Play, At Home, Out and About, For Your Health.

The reviews typically provide an app’s icon (great for quick visual reference), price, reviewed version (important in the fast-moving world of the App Store), publisher, overview, well-organized comments and usage tips, and screen shots for key points.

The rankings Meyers gives were highly consistent with my experience on key apps I regularly use. More important, he is clear about his point of view and why he evaluates as he does—a crucial feature for this type of reference. On my first reading, he introduced me to new and useful possibilities. I will be keeping this book within easy access for ongoing use.

Finally, I must confess a slightly wistful thought that turned out to be premature. I still remember the early days of the World Wide Web, when a variety of printed “yellow-pages to the Web” books appeared. Most of them had a fairly short shelf-life, as the explosive growth of the web left them quickly out of date. I immediately wondered whether this book would have such a future. But…

Horseless carriage?

In its early days, the automobile was often referred to as a “horseless carriage”; most people only thought of it in terms of what they already knew, and hadn’t realized the implications of that new technology. (How many people—and companies—are still trying to think of the web as a magazine, newspaper, radio, television, mailbox, etc. minus some physical attribute, not recognizing it as a new thing that is all and none of the previous media?)

That’s why I regard Meyers’ preface as one of the most enduring parts of this book.

He gets it.

Meyers explicitly focuses on what makes the iPad a new thing, not just a mobile phone or netbook, and uses that understanding to guide his selection and evaluation of apps that are important, note-worthy, or simply enjoyable to use. And that makes this book useful not only to a “discriminating downloader” like me, it makes it a great reference to an aspiring app developer who needs to understand what makes iPad apps different, and to any technophile (iPad owner or not) who wants to understand better the potential of this new thing.

Fear and Testing

I’m encouraged and excited by what I see coming from user-led conferences. I’ve had great first-hand experiences at the Java Posse Roundup (rumor is February 21-25 this coming year), No Fluff Just Stuff, and StrangeLoop (2010); and I’ve really benefitted from materials published on-line by other conferences, such as Devoxx and—a recent discovery—JRubyConf.

I was particularly impressed and inspired by Must. Try. Harder. (also linked from here and here), a presentation by Keavy McMinn, who described her experiences training for Ironman. Part of Keavy’s first blog post on Ironman Cozumel resonated strongly with something I once saw in some code, hence this article.

By way of background (with vagueness required by confidentiality considerations), I remembered looking at test code wrapped around a fairly subtle bit of business logic that had passed through multiple hands. I believed that I understood the intentions of the original designer, and wanted to confirm or correct that belief. All of which sounds like a perfect fit for test cases, right?


The majority of the test cases dealt with various ways that something could go wrong in configuring the targeted business component. Additional tests dealt with incorrect inputs to a correctly-configured instance, and only a small fraction actually dealt with the behavior of a correctly-configured instance. In fact, there were no tests that answered the particular questions that I had in mind.

How can this be? (And what does this have to do with Keavy’s Ironman experience?)

About half-way down in her Ironman Cozumel blog entry, there’s a section entitled “Fear”. I thought about quoting a key sentence or two, but didn’t want to interfere with her well-crafted flow. So just take a minute and read it (at least—for now—the section under her “Fear” heading, between the two photos).

(Wow! You read quickly! 😉 )

So, as I looked at the test cases, a pattern began to emerge. The largest portion of the tests seemed to orbit around a particular aspect of the configuration that I took for granted, giving that configuration concept far more than its share of attention, IMHO. Which got me to thinking about similar patterns in other test cases. Which prepared me to have an “Aha!” moment when I encountered Keavy’s description of fear of the ocean.


Unbalanced tests that over-emphasize some issues and under-emphasize others may be a subtle hint that fear is influencing the work. I’ve observed two common responses to fear in myself and others: obsession and avoidance. On reflection, I believe that those attitudes show up in tests that I’ve looked at over the years.



We may joke about it, but I’ll admit to going back a few times to make sure the coffee pot is off, jiggling the knob one more time to verify that I really did lock the door, or repeatedly checking my pocket to reassure myself that my airline tickets didn’t jump out and hide under the couch. (OK, forget the “hide under the couch” part.)

In this light, I see the set of test cases mentioned above as being obsessed with a particular configuration issue. Those tests were me, patting my pocket again, trying the door again, pushing the switch again (even though I can clearly see that the light is not glowing…) If I find that my tests appear to be obsessing over a particular aspect of the task at hand, that might be a symptom of fear—a need for comfort and reassurance in the face of an unfamiliar domain concept, a language feature or API with which I’m uncomfortable, or a requirement that I don’t really understand. Or of moving onto the next use case, which brings me to…



Inadequate attention to testing an aspect of the design may indicate a reluctance to engage fully with the issue. In that connection, Keavy’s description of initial hesitation at the water’s edge brought back a vivid memory for me.

My family loves camping in the Great Smoky Mountains National Park. A popular spot along the Little River Road called “The Sinks” features a waterfall almost directly under a bridge, followed by a long, deep swimming hole with steep rocky banks on both sides. Visitors can climb to a ledge which offers a clear jump into a deep part of the pool. Although that ledge is only about 12 feet above the water level, it looks much higher to a first-time jumper. I don’t laugh at young (or not-so-young) first-timers who hesitate on that ledge, because I still remember my first leap.

Even though my head knew it must be safe—I had seen many people take the plunge before me that day—my stomach and legs hadn’t yet gotten the message. I “paused” for several long moments before stepping into the air.

(I’m not talking about truly high-risk activity here, I should point out. A friend of mine has a truly terrifying story about jumping from height into unfamiliar—and unsafe—water and impaling his foot. So let’s keep it in the pool and well-known swimming holes, kids. And don’t forget your buddy.)

Of course, we developers never procrastinate in the face of a fear-inspiring task! Then again, there’s that bit of legacy code that nobody wants to maintain, much less rewrite. And the lingering suspicion that “starting with the low-hanging fruit” can turn into an excuse to put off the parts we’re secretly fearing.

So, how do I fight back, when an imbalance in my tests shows evidence of fear, either positively (obsession) or negatively (avoidance)?

Suggested antidotes…

First, I should consider whether the tests really are over- or under-emphasizing something. Different aspects of the code have different levels of risk and/or consequences, so I don’t expect absolutely even distribution of attention. If there is a true imbalance—even better, if I’m about to create one—I need to recognize that and apply an appropriate antidote.

…to obsession

When I start bogging down, these questions can be useful:

Is this test about the product or about me?
Let me tip my hat to Ruby Koans, which gives beautiful evidence of the power of testing as a way to explore a language or framework. If a test can confirm (or correct) my understanding of something I’m using in this project, I certainly should write it! But I’m equally certain that I should keep it outside my project’s code base.
What does/will this test teach me?
If it conveys useful new information, or affects the code in a material way, fine. But if it’s just a trivial variation on a theme, perhaps I should move on. If I’m wrong, I can always come back and add a test later.
How likely is this?
If this test is exercising a scenario that would almost certainly never occur in normal use, maybe I should take care of the more-likely cases first.
How bad can it be?
Do I really need this last little bit of yak hair?

…to avoidance

Keavy wrote about overcoming by focusing on technique, process, and details under our control. My high-school marching band director kept taking us back to the basics, with scales on our instruments and eight-to-five marching drills until our feet could hit the chalk line precisely, with no last-minute stretches or stutters. In fact, without even looking. For this side of my fears, here are some questions…

What do I not know?
If I can express it as a test, I can get moving.
How would I explain this to ___?
I have gotten past mental blocks by explaining my impasse to a friendly non-programmer. Being clear and non-technical often has amazing benefits! 😉 Fill in the blank with a name of your choice. Sometimes only imagining the conversation is enough. It also helps to imagine it out loud (but not where I could frighten or distract my colleagues!)
Am I trying to swallow an elephant?
The last time I got stalled out, I let circumstances outside the task at hand stampede me. I blurted out a train of thought into code and quickly ended up, as I once heard someone say, “crocheting in logic space”. Admitting that fact to myself allowed me to back out of my self-imposed impasse.

…and a couple of challenges:

Don’t just sit there, DO SOMETHING!
Sometimes even a stumbling start is better than stalling, because it gets me moving. As with explaining my problem to a non-programmer, thinking out loud in code may help me get a better perspective (even if I end up throwing away that first bit).
Don’t just do something, SIT THERE!
In my best ersatz-mystico-philosophical style, sometimes I need to do just the opposite. Stop fidgeting and pacing back and forth. Sit down. Close my eyes. Be still. Take a breath. Think a moment. In that jam there’s probably one log that I can shift a little, and it’s probably not the one I’ve been tugging on unsuccessfully for way too long. Now open my eyes. Look around, not in the same place. Oh. Duh. There it is.

Let me get back to you. I’ve got a test to write.

The value of persistence

After beginning this post, I learned that Keavy finished in Cozumel! Congratulations!

Code koans:

I’m sure there are many more; please let me know.