mahiwaga

I'm not really all that mysterious

technical merits of microkernels

After switching from Linux to Mac OS X, playing around with Ruby a little bit, and getting a feel for the philosophies of Objective-C and Smalltalk, I guess I’m coming around to Andrew Tanenbaum’s thoughts about microkernels.

Still, I guess I drank the OOP kool-aid back when I was screwing around with Turbo Pascal in the early ’90s. The idea of objects that can respond to focused messages seems to herald the beginnings of machine intelligence. Objects, like neuronal circuits and endocrine feedback loops, tend to be black boxes. We can begin by learning what kind of message/stimulus the object/neuronal circuit/endocrine feedback loop responds to, and what its possible outputs are. The details of internal processing, while worth elucidating at some point, probably do not give us as much insight into the workings of the system/human brain/human body (nor are they as lucrative for the pharmaceutical industry in terms of determining feasible drug targets).
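Ruby makes this message metaphor literal: a method call is a message send, Smalltalk-style. A minimal sketch (the class and method names here are mine, purely for illustration):

```ruby
# The caller knows only the stimulus it can send and the possible
# responses; the processing in between is a black box.
class ReflexArc
  def stimulus(intensity)
    intensity > 0.5 ? :fires : :quiet  # internal details hidden from callers
  end
end

arc = ReflexArc.new
arc.respond_to?(:stimulus)  # probe which messages the object accepts
arc.send(:stimulus, 0.9)    # send the message explicitly => :fires
```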

In other words, separate the interface from the implementation. The interface tends to be higher-yield in terms of figuring things out, learning how to do things, and learning how things work. The implementation is, as we say in the health-care industry, mostly scut-work.
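To make the split concrete, a small hypothetical sketch in Ruby: two different implementations answering the same message, interchangeable as far as the caller is concerned.

```ruby
# Same interface (the :report message), different scut-work underneath.
class QuickReport
  def report(data)
    "n=#{data.size}"
  end
end

class DetailedReport
  def report(data)
    "n=#{data.size}, mean=#{data.sum.to_f / data.size}"
  end
end

# Duck typing: any object that responds to :report will do.
def summarize(reporter, data)
  reporter.report(data)
end

summarize(QuickReport.new, [1, 2, 3])     # => "n=3"
summarize(DetailedReport.new, [1, 2, 3])  # => "n=3, mean=2.0"
```

The caller never needs to know which implementation it got; swapping one for the other changes no calling code.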


There are more intuitive and less intuitive ways to do OOP. For example, I struggled mightily with C++, the last compiled language I ever worked with. Dynamic/interpreted languages are where it’s at these days, and Perl, PHP, Python, and Ruby reign supreme (with Ruby the only one of the four consciously designed as a true OOP language).

The problem with dynamic/interpreted languages, similar to the problem with microkernels, is that they tend to have a lot of runtime overhead. But in these days of base systems running at nearly 3 GHz with around 2 GB of RAM, this overhead tends to be negligible. The argument carried real weight when the average system ran at 50 MHz and had 8 MB of RAM, which is the main reason I used to believe that monolithic kernels were the only reasonable way to go on consumer-level machines. But these days, most of our CPU cycles are wasted anyway.

A similar issue plagued Smalltalk back in the Xerox PARC days. The system was state-of-the-art and blew everything else out of the water, but running it required an extremely muscular machine that cost at least $10k.

What a strange and wonderful time and place Moore’s Law has brought us to.


Microkernels are probably going to be key for two different large-scale paradigm shifts: (1) virtualization/hypervisors and (2) cloud computing/ubicomp.

Microkernels will make running multiple OSes on a single machine much easier, streamlining the path that Xen and Parallels are taking. And since microkernels engage de facto in distributed computing (their servers cooperate by passing messages rather than sharing one monolithic address space), not only will it be possible to utilize all four cores of your CPU, it will also be feasible to distribute tasks amongst your personal cloud of high-tech gadgets.
