December 9, 2012
In Grade 4 I really started getting into the programming groove. The high-res graphics capabilities[1] of the Apple II Plus begged for tools that took advantage of them. One of my first serious[2] programming projects was a bitmapped image editor I named SuperDraw. Lovingly handcrafted in Applesoft Basic, it was my crowning achievement at the time. But it couldn't draw circles.
If you think I could just call Ellipse(0,0,50,50), I'll remind you that fancy graphics libraries that abstracted away shape primitives didn't come until years later. If I wanted a circle I would have to figure out where to put each individual pixel. I consulted a family friend who happened to be a math tutor. He started by showing me the equation for a circle:
x² + y² = r²

The square of the x-coordinate plus the square of the y-coordinate is equal to the square of the radius for any point on the circle. Simple, right?
You might imagine a 9-year-old Gene puzzling over how to turn this mathematical identity into a recipe for iterating over the x- and y-coordinates of a circle of radius r in order to light them up on the monitor. This is actually non-trivial, and if you have some spare time I recommend this exercise as a good way to review your trigonometry.
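One way the trigonometry works out (a sketch in Python rather than the original Applesoft Basic; the function name and step count are my own, not from SuperDraw): sweep an angle from 0 to 2π, and at each step the point (r·cos t, r·sin t) lies on the circle, so rounding gives a pixel to light up.

```python
import math

def circle_points(cx, cy, r, steps=360):
    """Return the set of pixels on a circle of radius r centered at (cx, cy).

    For each angle t, the point (cx + r*cos(t), cy + r*sin(t)) satisfies
    x^2 + y^2 = r^2 (relative to the center); rounding snaps it to a pixel.
    """
    pts = set()
    for i in range(steps):
        t = 2 * math.pi * i / steps
        x = cx + round(r * math.cos(t))
        y = cy + round(r * math.sin(t))
        pts.add((x, y))
    return pts
```

Every pixel this produces satisfies the circle equation to within rounding error, which is exactly the gap between the identity and the recipe: the equation tells you whether a point is on the circle, not which point to plot next.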
What I'm getting at though is that a description of a circle, even if it's perfectly accurate and complete, doesn't tell you very much about how to draw one. This is the difference between declarative programming and imperative programming.
The programming with which I grew up was pretty much purely imperative. You gave the computer instructions and it followed them. These instructions might have been in a procedural language (like Applesoft Basic or C), an object-oriented language (like Java or C++) or even a functional language (like Lisp). It all boils down to the same thing: a predictable flow of control through a set of instructions that eventually leads to the desired outcome, or if not, can be traced through step-by-step to isolate the problems.
As the complexity of your problem grows, purely imperative programming can become a bit of a drag. Its effectiveness depends on your ability to abstract away that complexity in layer upon layer of logic. Near the bottom layer you have methods that calculate the pixels in a circle, somewhere in the middle you have methods that draw complete circles and squares, and near the top you have methods that, for example, draw an organizational chart of your company given a database of the employees and their job titles.
For decent-sized programs that can be a lot of layers and a lot of code to manage. All this complexity: isn't that what computers are good at? Can't we just tell the computer what we'd like achieved and have it figure out how to get there?
Declarative programming is just that. "I'd like to live in a world where there is a circle at (0,0) with radius 25" is the declarative equivalent of "please draw a circle at (0,0) with radius 25." But declarative scales better. You could declare a complete screen layout, with associated constraints ("this circle should always occupy 25% of the window width") and trust the computer to resize the circle as the window is resized. The imperative approach, on the other hand, would require an event handler to catch a resize when it occurred, get a handle to the circle, recalculate the new size, and so on.
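The contrast can be made concrete in a toy sketch (the class and function names here are invented for illustration, not from any real framework): imperatively you subscribe to the resize event and redo the math yourself; declaratively you state the relationship once and something else keeps it true.

```python
class Window:
    def __init__(self, width):
        self.width = width
        self.handlers = []          # imperative style: someone must subscribe

    def resize(self, width):
        self.width = width
        for handler in self.handlers:
            handler(self)

class Circle:
    def __init__(self):
        self.diameter = 0

# Imperative: catch the resize, get a handle to the circle, recalculate.
def wire_up(window, circle):
    def on_resize(win):
        circle.diameter = win.width * 0.25
    window.handlers.append(on_resize)
    on_resize(window)               # establish the initial size by hand

# Declarative (simulated with a property): declare "diameter is 25% of
# the window width" once; it is recomputed whenever anyone looks.
class ConstrainedCircle:
    def __init__(self, window):
        self.window = window

    @property
    def diameter(self):
        return self.window.width * 0.25
```

The declarative version has no bookkeeping to forget: there is no handler to register, no stale size to chase after a missed event, because the constraint is the code.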
At their best, declarative frameworks and approaches can dramatically simplify the specification of complex systems and interactions. And next week I'll illustrate why things don't really work out that way.
[1] 280x160, 6 colors, 4 lines of text at the bottom of the screen. Sweeet.
[2] I was an immature hack up until then, but Grade 4 was really the turning point.