baffling (how i learned to stop worrying and love the gui and high-level languages)
Now, don’t get me wrong. I’m not a developer. The extent of my hacking history lies in the good old 8-bit days when I was hand-coding machine language programs into BASIC DATA statements. I learned, of all things, Pascal (which happened to be the language used on the AP Computer Science exam) and tried to muck around with C and C++, but eventually gave up on both and ended up learning Perl instead.
I once wrote an extraordinarily kludgy medical record-keeping application for my dad in Turbo Pascal 5.0 (he still kept hard copies of everything anyway, thankfully), and when I worked as a medical biller, I wrote a little script in Visual Basic for Microsoft Office(!) to help me keep track of things, which rapidly turned into a nightmarish, molasses-like application experience.
So I’ve never really written a program that anyone has (exclusively) depended on; I did very little bug-fixing and even less optimization.
I can only imagine what it means to work at a massive corporation on some little subroutine or object class that happens to be crucial to the project, hoping that it’s implemented and documented well enough that it won’t bring the whole thing crashing down.
On the flip side, I gave up on Windows in 1999, and most of the apps I use are Open Source, or at least based on Open Source. I ran Red Hat Linux as my main OS from 1999 until 2002, when I decided to get a laptop. (At the time, Linux support for laptops kind of sucked. It would’ve been a major pain in the ass, I think.) I couldn’t stomach the idea of switching back to Windows, so I bought a Mac.
The irony is that as an undergrad, I was a proponent of x86-based hardware, and therefore staunchly against the proprietary nature of Macintosh hardware. Now, I don’t think I ever thought Windows was “cool”, but my roommate and I would have Mac vs. PC arguments all the time.
The other issue I had with the Mac was its GUI-only nature. I had grown up on the command-line, and I couldn’t (and actually still can’t) fathom the idea of interacting with your computer only through a mouse. (Even as I type this, I have a few terminal windows open in Mac OS X. It’s still easier for me to hit Cmd-Tab, type cd ~, and then open some-file—using command completion in zsh—than it is for me to click on the Finder icon, navigate to my home folder, scan the thousands of unsorted files sitting there, and finally double-click on it.)
But bizarrely, it was Linux (along with X and GNOME) that taught me to love the GUI. You could do all sorts of weird stuff with the various window managers available, like having windows resize to set dimensions (useful for previewing HTML at small screen sizes) or automatically rolling all your windows up and stacking them in a line on the right side of the screen. It was a matter of a few mouse clicks to get a window to stay at the top of the pile, no matter how many other windows you opened up. I also grew dependent on multiple desktops (the fact that this is missing in Mac OS X is only made bearable by Exposé). And at the time (in the summer of 2000), GNOME had the only open-source web browser with tabs (Galeon), a feature that all modern browsers now have, and which I can’t use a browser without. (IE 6?! Bleh. Bleh. Bleh.)
The only thing that made switching to Mac OS X palatable was the fact that it had an entire UNIX subsystem running underneath the pretty GUI. Meaning, unlike Mac OS Classic, you could drop out into a terminal, and navigate and manipulate your filesystem with little cryptic commands like cd, ls, find, xargs, and grep. And meaning that you could (1) run an X server and (2) compile all the apps that I was using on Linux and use them on the Mac. For a while, Galeon still remained my browser of choice (until I found Camino, née Chimera).
But X is a hungry beast, and after a while, I got tired of the hard drive thrashing, and I couldn’t afford to buy more RAM. So eventually I settled into Mac OS X proper. Right now I have seven applications up, not counting the Finder or the Dashboard: (1) Mail.app, (2) Vienna, (3) Safari, (4) iTerm, (5) Chicken of the VNC, (6) VLC, and (7) Preview.app. Five of these seven are open source projects. All of them have equivalents on Linux and Windows (with Cygwin installed). As you can see, I’m not a big fan of proprietary software. (Hell, I grew up in a time when people would type in huge programs from the back of a magazine, sometimes entirely in raw machine language—just strings of unadorned hexadecimal numbers. My word processor of choice on the Commodore 64 was Speedscript, which came out of the back of Compute!’s Gazette.)
What inspired this rehashing of my own personal computing history was this post on The Old New Thing discussing a very poorly thought out function called IsBadXxxPtr, which supposedly tells you whether or not a pointer is valid.
And frankly, I find it astounding. Not so much that these functions exist, because they’re probably useful to somebody somewhere trying to debug something, but that they would be allowed to escape into production code, resulting in a situation where you can’t just get rid of them without breaking apps.
I try to figure out why this should be so, and why such practices never make it into production UNIX code (other than the fact that the standard APIs don’t have functions like these).
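To make that contrast concrete, here’s a rough sketch of the pattern in question. IsBadReadPtr is one of the real Win32 functions in the IsBadXxxPtr family, but print_name and print_name_honest are names I’ve made up for illustration; this is my own toy example, not code from Raymond’s post:

```c
#include <windows.h>
#include <stdio.h>

/* Hypothetical example of the pattern being criticized. */
void print_name(const char *name)
{
    /* Looks defensive, but proves nothing: the memory could be freed or
       unmapped the instant after the check, and probing a stack guard
       page this way can break automatic stack growth. */
    if (IsBadReadPtr(name, 1)) {
        printf("bad pointer, ignoring\n");
        return;   /* the caller's bug is silently swallowed */
    }
    printf("hello, %s\n", name);
}

/* The UNIX-flavored alternative: check only what is well defined (NULL),
   document the contract, and let a garbage pointer crash loudly so the
   bug actually gets found and fixed. */
void print_name_honest(const char *name)
{
    if (name == NULL) {
        printf("no name given\n");
        return;
    }
    printf("hello, %s\n", name);
}

int main(void)
{
    print_name("Raymond");
    print_name_honest("Raymond");
    return 0;
}
```

The “defensive” version doesn’t make the program any safer; it just hides the caller’s bug behind a check that can’t keep its promise, which, as I read it, is the point Raymond is making.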
Is it because there are so many Windows developers, in contrast to the number of *nix/Linux/Mac OS X developers? That maybe these kinds of kludgy functions exist in core APIs, but they can be easily deprecated because not that many apps actually rely on their bad behavior? That the apps that would break wouldn’t cause a massive uproar?
I don’t know. I feel like backward compatibility is overrated. If you want to stay compatible, then you shouldn’t upgrade your system. Problem solved. I mean, people still run CP/M and VAX/VMS, don’t they? There are probably even a few MS-DOS boxes floating around somewhere, maybe even connected to a network!
Meanwhile, the bleeding edge can throw out the baby with the bath water and actually innovate. And if you, as a developer, feel that you just can’t live without these new features, then you’ve got to rewrite your app so that you can compile it against the new API.
Isn’t this a massive duplication of effort? Not necessarily. If you have clean code that is well documented, you might get away with just rewriting a few functions here and there.
This certainly reminds me of the general debate about Windows Vista. Microsoft could’ve pulled an Apple and just scrapped all the legacy code and started fresh. And to keep the break from being overly traumatic, they could’ve done what Apple did with Mac OS Classic: turn the old Windows stuff into a stand-alone virtualized system. After all, they already own Virtual PC, so why not? Most people have dual-core machines these days, and the second core just sits idly by; why not give it something to do? And maybe they could also have offered a compatibility layer (akin to Apple’s Carbon API) so that all those Windows developers wouldn’t have to start from scratch, and could continue to develop new apps that would run on both XP and Vista—natively, without having to run the virtualized Windows Classic.
All sorts of doom and gloom were predicted when Apple abandoned the classic Mac OS entirely and adopted Jobs’ baby, NeXTSTEP, which much of OS X is based on. But that’s now ancient history, with Apple making yet another jump (albeit this time on the hardware side). And how many people still write Carbon apps these days?
But a lot of these problems stem from the fact that, despite being nearly 40 years old, C is still the dominant language for developers. Sure, there’s C++, but are new and delete really all that different from malloc and free? Manual memory management seems like such a Sisyphean task. That’s why I quickly gravitated toward scripting languages, which generally use garbage collection. After all, I learned to program in BASIC, of all things. Commodore’s BASIC 2.0 was infamous for the incredibly long pauses when garbage collection was triggered, but I could live with that. I mean, it wasn’t bad for a computer running at 1 MHz.
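For anyone who has only ever lived in garbage-collected languages, here’s a minimal (and deliberately contrived) sketch of the kind of bug I mean, a use-after-free with made-up names:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Allocate a buffer by hand, as malloc obliges you to do. */
    char *title = malloc(32);
    if (title == NULL)
        return 1;
    strcpy(title, "Speedscript");

    free(title);   /* the memory goes back to the allocator... */

    /* ...but the pointer still dangles. Using it now is undefined
       behavior: it might print garbage, crash immediately, or quietly
       corrupt some other allocation and crash much later. */
    printf("%s\n", title);

    return 0;
}
```

A garbage collector makes this entire class of mistake impossible, at the price of the occasional pause, which I was happy to tolerate even at 1 MHz.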
In time, we’ll have a generation of coders who have cut their teeth on Java and C# and Python and Ruby, who won’t know or care about dangling pointers. Until then, we’ll have to continue to deal with the kludges developers resort to in an effort to avoid shooting themselves in the foot, and with their crash-provoking consequences.