
All ya gotta do…

My older son (also a programmer) called my attention to a bit of JavaScript trivia that illustrates nicely the consequences of subtle interactions among independent, well-intentioned decisions (console transcripts courtesy of node.js).

Everyone agrees that programming languages should give the programmer more control, convenience, and flexibility, right? So all of the following sound like good ideas, don’t they?

Everyone knows that programs should not blow up.

(This is important, among other reasons, because users frequently provide silly input.) Operators should do something reasonable in almost every conceivable situation, and just let the programmer write tests for silly things. Therefore, given

a = "my dog has fleas"

we want the following results:

> a
'my dog has fleas'
> a.substr(0, 2)
'my'
> a.substr(3, 3)
'dog'
> a.substr(42, 7)
''

because the programmer can always check for out-of-bounds results:

> a.length
16
> a.substr(42, 7).length
0

and given

b = [2, 4, 6, 8]

we want these results:

> b
[ 2, 4, 6, 8 ]
> b[0]
2
> b[3]
8
> b[5]
undefined

because the programmer can always check:

> b.length
4
> (typeof b[5]) === "undefined"
true

Everyone knows that floating point arithmetic can overflow.

(Dividing a positive number by zero is a special case of that general situation.) Instead of blowing up the program, let’s just return a unique value that we can check for:

> isFinite(1 / 2)
true
> isFinite(1 / 0)
false

and, of course, an overflow must absorb all subsequent computations:

> 1 / 0
Infinity
> (1 / 0) + 4
Infinity
> ((1 / 0) + 4) * 10
Infinity
> (((1 / 0) + 4) * 10) - 5
Infinity
> ((((1 / 0) + 4) * 10) - 5) / 2
Infinity
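A sketch of the consequence: one unchecked overflow poisons everything downstream, so the isFinite test has to happen before the value is consumed, not after the chain. The safeDivide wrapper is my own hypothetical example, not a library function:

```javascript
// A single division by zero silently absorbs the rest of the chain.
let x = 1 / 0;                    // Infinity
x = ((x + 4) * 10 - 5) / 2;       // still Infinity
console.log(x, isFinite(x));      // Infinity false

// Hypothetical wrapper: fail fast instead of propagating Infinity.
function safeDivide(n, d) {
  const q = n / d;
  if (!isFinite(q)) {
    throw new RangeError(n + " / " + d + " is not finite");
  }
  return q;
}

console.log(safeDivide(1, 2));    // 0.5
```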

Everyone knows that overloaded operators promote familiarity and economy of notation.

Common mathematical and programming practice leads us to expect the following:

> 2 + 3
5
> 2.1 + 3.1
5.2
> "2" + "3"
'23'

Everyone knows that we need automatic type conversions.

It is convenient to have “narrower” values converted automatically to “wider” types when evaluating expressions. So clearly we need this kind of “widening”:

> 2 + 3
5
> 2.1 + 3.1
5.2
> 2 + 3.1
5.1
> 2.1 + 3
5.1

Now that we’ve established that, the obvious next step is to support this kind of “widening” as well:

> 2 + 3
5
> "2" + "3"
'23'
> 2 + "3"
'23'
> "2" + 3
'23'
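A small sketch of the trap this sets: a single string operand silently turns addition into concatenation, so explicit conversion (here with Number) is the defensive idiom:

```javascript
// Numbers add; a string on either side concatenates.
console.log(2 + 3);                      // 5
console.log(2 + "3");                    // '23'
console.log("2" + 3);                    // '23'

// Defensive idiom: convert explicitly before adding.
const total = Number("2") + Number("3");
console.log(total);                      // 5
```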

Everyone knows that we need controllable conversion of strings to numbers.

In addition to:

> parseInt("123")
123

programmers need to deal with binary:

> parseInt("1101", 2)
13

and other computer-oriented bases:

> parseInt("123", 8)
83
> parseInt("123", 16)
291
> parseInt("123", 4)
27
> parseInt("ff", 16)
255
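Each of those results is just positional notation evaluated in the given radix; a quick sketch of the arithmetic:

```javascript
// parseInt("123", r) computes 1*r*r + 2*r + 3.
console.log(parseInt("123", 8));    // 83  = 64 + 16 + 3
console.log(parseInt("123", 16));   // 291 = 256 + 32 + 3
console.log(parseInt("ff", 16));    // 255 = 15*16 + 15
```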

Since we can specify the base explicitly, we might as well support this:

> parseInt("123", 7)
66

or (for those of us who fondly remember the PDP-10):

> parseInt("ff", 36)
555

After all, programmers always know what they’re doing, and should always get exactly what they ask for.

> parseInt( 1 / 0, 19 )
18
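Why 18? A sketch of what actually happens: parseInt first coerces its argument to a string, then consumes leading characters that are valid digits in the requested radix and silently stops at the first one that isn't:

```javascript
const v = 1 / 0;
console.log(String(v));          // 'Infinity' -- this is what parseInt sees
// Base-19 digits are 0-9 and a-i (case-insensitive), so 'i' is 18
// and 'n' is not a digit; parsing stops after the leading 'I'.
console.log(parseInt("i", 19));  // 18
console.log(parseInt(v, 19));    // 18
```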

No, I’m not hating on JavaScript. I’m observing that complex artifacts, such as programs, programming languages, political systems, and economies, may not respond to “simple” decisions in the ways that we expect. And a pithy statement that encourages reflection and self-restraint

Don’t repeat yourself.

is very different from simplistic responses to non-simple situations.

You don’t need static typing; just write tests.

or

If you want to eliminate run-time problems, just use a statically-typed language.

or even

These problems are the fault of the other party; just vote for my candidate.

The title comes from a co-worker on a large project a few years ago. The effort offered several design challenges, and it seemed all too easy for well-meaning participants (and non-participants) to propose simplistic solutions. After a while, his standard response was to say with a grin, “All ya gotta do…”

Server-side use aside, node.js includes a local JavaScript interpreter and console that offers a convenient tool for learning and experimentation. The e-book What Is Node? (free as of this writing) provides a fast-paced introduction, while Node: Up and Running: Scalable Server-Side Code with JavaScript and Node.js in Action give more detailed coverage.

There’s an Easter-egg link in the article to Dijkstra’s famous Turing Award lecture entitled The Humble Programmer. Despite its length (slightly longer than a typical blog post) and serious tone, it’s worth the time. Care in the construction of our programs is still appropriate, whether we call it “software craftsmanship”, “joint application development”, or “pragmatic programming”.

Even though the title phrase became an abused cliché, Structured Programming (A.P.I.C. studies in data processing, no. 8) is a classic in our field. Although we have (some) nicer tools at our disposal, a careful reading shows how little the fundamental challenges of programming have changed in 40 years. A collection of Dijkstra’s essays, Selected Writings on Computing: A Personal Perspective, paints a broader picture of the issues that have challenged us in roughly that same time frame. It is a mix of technical articles, notes on the computing culture of the day, and amusing (but pointed) parables.


Thinking about code

Among others, Light Table, Code Bubbles, Squeak, and Bret Victor’s Inventing on Principle talk make me ponder where we’re going as programmers. Yes, I’m not ashamed of the term, even though developer is the currently-fashionable word and programmer has been demeaned by popular mis-use (though not as severely as hacker). But I digress.

On one hand, our infrastructure seems to have developed to the point that we may finally escape the limiting code-as-text-files model. On the other hand, Squeak is in the list to remind myself that Smalltalk got there a long time ago (as did Lisp).


I regard the core challenge of programming as organizing actionable abstractions in a way that I can (re)establish the appropriate context for thinking about them. I regard code complexity as a matter of how much I must keep in my head to understand each single line I examine. If we combine those ideas with an infrastructure that can build or retain connections across the code base, and between code and helpful narratives, we can envision an environment that presents small, meaningful units of code and keeps all of the relevant relationships within immediate reach.

There’s a curmudgeonly voice in the back of my head that tries to connect this possible trend to the warning signs that we live in an increasingly post-literate society that appears to value immediate experience over the patient accumulation of understanding (dare I say wisdom?).

But that is balanced by a hopeful voice that looks to the potential of our tools as a useful response to the increasing complexity and volume of information (and stimuli) with which we must deal.

Sunglasses can filter out harmful UV wavelengths while passing usable light. Spam filters (when they work) filter out contaminants while passing on more useful voices. But these carry the implicit assumption that the hide/view distinction is in principle a static one (even if the learning is dynamic).

But in programming, I need the ability to redefine that distinction constantly, based on the needs of the current task. Better yet, I need a way for the distinction to emerge implicitly from what I do moment by moment. The environments mentioned above seem aimed at affording us that dynamic focusing and filtering ability.

I want to highlight one other property shared by Light Table, Code Bubbles, and the demos of Inventing on Principle, because I believe it crucial to success. Those tools were quite willing to meet us where we are today, with support for programming languages already in widespread use.

The idea of a game-changing way to deal with code is not new. But some of the previous attempts have led to isolationist viewpoints that seemed to ask us to throw away all of our existing code assets and re-think everything in the new language to which the new platform was tied. (FORTH, I’m looking at you.) That’s fine for students and researchers who focus on learning or producing new ideas, but industry and business demand new tools to connect with what they already have and do. A measure of rework may be necessary, tolerable, or even good, but total displacement is seldom survivable.

Programming is an incredibly conservative industry, especially compared with the exponential advances in hardware capabilities. The history of modular, structured, object-oriented, and functional programming adoption suggests roughly a 30-year adoption cycle into mainstream practice. Coincidentally (or not), 30 years also approximates a career generation.

Perhaps the increasing pace of career re-invention and turnover will help speed the adoption of the next Way to think about programming. But I respectfully suggest that it can’t require me to discard my existing code base and retrain or replace all my programmers. That not only raises the cost of entry, it risks ignoring the incredible value of the domain expertise and connections that they also bring to the daily task of crafting code.

I am encouraged to see brilliant approaches to thinking about (or in) code that don’t require us to throw away everything, but allow us to grow into a new level of performance.

Protect Innocence against Persecuting Assault

Convenience stores sometimes get robbed. That’s wrong.

But suppose lobbyists for the convenience store industry got congress to pass legislation that would authorize the stores to:

  1. Keep assault weapons under the counters;
  2. Use them at will under a “shoot first, ask questions later” policy; and
  3. Exempt them from responsibility for using deadly force if they could say, “he/she looked suspicious to me!”

That is the equivalent of what the backers of SOPA / PIPA are attempting to do.

Don’t just take my word for it. Please watch the video from Khan Academy that explains the reach and risks of these proposals.

Stop PIPA.

But don’t stop there.

(And now, back to our regularly-scheduled programming…)


Stop Oligopolies and Paranoia, America

There was a time when saying, “This is a nation governed by law”, brought honor and pride.

That statement was associated with many others, such as, “All persons are equal before the law”, that emphasized that the same rules applied to everyone, regardless of economics, education, race, or any of the other attributes that in various times and places have been used to diminish the access of some to justice, opportunity, and fair treatment.

Many in this country still hold to those principles, for which I am grateful. As one who grew up during the Civil Rights Movement, I honor the courage of those who used the rule and power of law to move a society toward justice. That journey is not complete, but equality before the law has been a powerful aid along the path to freedom.

But freedom and equality, as with physical health, clean dishes, a mowed lawn, and a bug-free code base, are not persistent states. They require constant scrutiny and action, both preventative and corrective.

Legislative corruption threatens to replace the bright vision of rule by law with the dark specter of power for sale to the highest bidder. Uncontrolled lobbying and so-called “campaign contributions”, unlimited terms of office, and the apparent willful technological ignorance of many in congress, threaten to bring the grim scenarios of the cyberpunk genre to reality.

I do not advocate violating the law. I respect the need for a society to have structures in place by which creativity–artistic or technical–can be encouraged. And that includes paying fairly for its benefits.

But I regard as misguided any attempt to pervert our legal process to allow preemptive or punitive action without due process. I regard as toxic to health and progress the attempts to provide artificial protection for obsolete business or technological structures that resist change simply because people with money want to keep making more money in the same way.

And I regard as fatal to any virtue in the concept of rule by law the risk that our legislative, regulatory, and enforcement institutions are for sale.

Stop SOPA.

But don’t stop there.

(And now, back to our regularly-scheduled programming…)

Artist, graph thyself!

Visual representations of graph structures have been important to programming from the beginning of the craft – earlier than that, if you count circuit diagrams. Most practicing programmers of my acquaintance have struggled with visual complexity as the size or amount of detail of such a diagram increases. So I was interested in an article titled “What is the Best Way to Represent Directionality in Network Visualizations?” that summarized research done at Eindhoven University of Technology.

The researchers compared a variety of techniques for representing directionality of arcs: standard arrows; gradients (light-to-dark, dark-to-light, and green-to-red); and curved and tapered connections. Subjects were shown graphs rendered using different techniques, and asked questions about connections between nodes, with speed and accuracy being analyzed.

According to the summary:

  • one should avoid standard arrows and curves,
  • tapered arcs (from wide to narrow) produced the best results,
  • dark-to-light value gradients were better than light-to-dark (no surprise there, as this is consistent with the “more intense to less intense” result of the tapering), and
  • there appears to be no advantage to combining techniques (I confess a bit of surprise over that conclusion).

But most fascinating to me is the fact that when presenting the results visually, the paper and article both violated the very conclusions just reached! Superior performance among techniques was presented in a directed graph drawn with curved arcs and standard arrowheads! (See the diagram here, or in the article and linked paper.)

It was but a few moments’ work in OmniGraffle to recreate that diagram using the recommended technique (tapered arcs). In addition, I rearranged the nodes to reduce the arc crossings, while keeping the preference flow downward from higher to lower. Here’s the result:


I’m amazed that the researchers and author(s) of the summary ignored the result when presenting the result! Does this tell us something about the force of habit, or perhaps the limitations of existing tools, or something else entirely? I wonder.

Words to avoid: “intuitive”

I’ve been using Stack Overflow quite a bit recently, and can recommend it as an informative source for programmers with specific technical questions. However, nothing in this world is perfect (including me! 😉 ), and some answers to this question on perl “gotcha”s hit one of my Big Red Buttons.

Warning: curmudgeonly meta-rant ahead

Some of the responses (such as “Use of uninitialized value in concatenation…”) are legitimate complaints about limitations of the current implementation. No matter the amount of education or experience, it is reasonable for a programmer to be frustrated when the run-time fails to provide useful information about an error.

But as someone with several years’ experience using a variety of languages (and no particular favoritism for perl), many of the answers strike me as being complaints that perl isn’t c or c++ (or some other language which the writer prefers). I don’t talk back to my screen, but after complaints that asked (or implied the equivalent of):

“Who would have ever expected ____?”

I really wanted to say

“Anyone who read the manual!”

or

“Anyone who knew that perl wasn’t ___!”

(On a positive note, my favorite answer is this one that refers readers to the perltrap manpage. That page is a well-organized list of things that might surprise or trip programmers with specific backgrounds or expectations.)

And that brings me to today’s word to avoid: “intuitive”.

It is quite common for descriptions of computer software to describe some feature as being (or not being) “intuitive”. In my experience, that word refers to unconscious conclusions/assumptions based on the experience of the speaker/writer. I believe it can usually be replaced by the phrases “unexpected by me” or “unfamiliar to me”. A key part of those phrases, “to me”, is almost always omitted when a negative opinion is being expressed. Instead of hearing, “I didn’t understand X” (especially from a beginner), many times one hears “X is stupid” or “X is not intuitive” — rewriting an admission of the speaker’s own limitation into a criticism of the subject matter.

On a related note, during my teaching days I often encountered undergraduates — from the US — who would lament that “The metric system is too complicated.” I suggest that it is also important to be able to distinguish unfamiliarity from inherent complexity.

So what am I grumbling about? Simply this: I expect more from programmers than from users!

Suppose I am writing a piece of software to support librarians, plumbers, auto mechanics, or accountants (all worthy, important, and noble callings, chosen at random except for being very different from programming). I believe that it is my job to learn my customer’s world: the terminology, common practices, and problems that make up their daily life. Only by doing so can I serve their needs. I should enter their world, not expect them to enter mine.

In short, programmers should be life-long learners.

It’s perfectly reasonable to evaluate whether a piece of software intended for user consumption meets the needs and experience and assumptions of its intended users. But programmers should understand that different programming languages usually embody different concepts, should expect to learn something new when learning a new language (or other development tool), and should know what learning “feels like”.


The cryptic title refers to a blog post asking for Java 6 on Leopard (Mac OS X 10.5). That’s old news, and to some it’s a closed issue, but not to the rest of us (who still have Core Duo CPUs).

As SoyLatte has proven, there’s no technical barrier to Java 6 on 10.4 (or on 32-bit CPUs), so I can still hear this issue faintly whispering, “I’m not dead yet!”

Neely’s Laws

I first wrote down these observations ten or fifteen years ago, as a result of several tongue-in-cheek conversations. I recently had occasion to apply one of them, so I thought I might as well post them here. Please don’t take them (or me!) any more seriously than I do.

However, I might observe that the first post of “The burden of FP” series can be read as an extended statement of Neely’s Third Law.

Neely’s First Law

(Monotonicity of Complexity)

Complexity is like entropy;
you can’t decrease it and doing almost anything increases it.
You can hide it, cover it up, pretend it’s not there (until later),
or make it somebody else’s problem,
but it won’t go away.

Neely’s Second Law

(The Means/Ends Dictum)

Solve the problem;
don’t solve the solution.

Neely’s Third Law

(The Blind-Spot Mandate)

In any systems design effort,
begin by carefully documenting all your unconscious assumptions.

Neely’s Fourth Law

(The Focus Figment)

A tool is an instrument for limiting the types of work you are able to perform.

Neely’s Fifth Law

(The Skills Lifecycle)

Necessities become options become obsolete become artforms.

Neely’s Sixth Law

(The Intuition Impasse)

“Common sense”
is an oxymoron.

The way to take notes

OK, I enjoyed taking notes at JPR08 with the EeePc, but this guy’s notes are the best I’ve ever seen! I just had to point you to them.