mahiwaga

I'm not really all that mysterious

the mechanisms of cultural transmission

Wow, this post is going to be extraordinarily geeky. By clicking on various links, I stumbled upon some very well-thought-out posts regarding the inexorable programming language clashes that actually affect the average Net-dependent webhead in ways that may not be readily apparent.

Obviously that last paragraph is going to need some explaining.

What I mean to say is: how many Netizens really think long and hard about the ramifications of XML, the evangelical drive to implement XHTML and CSS, the prevalence of Java and Flash, or the rising eminence of AJAX? The fact that the pioneers were CGI scripts written in Perl? And yet these various geeky behind-the-scenes technologies are what define the experience of the Web.

But, really, the posts aren’t so much about that. What Steve Yegge writes about is the rising pre-eminence of Ruby, which, for all you non-geeks, is the newest, shiniest programming language, coming into its own on the web thanks to the framework known as Rails. But a good chunk of his post also discusses the recent history of the languages from which almost all webapps are built, which include Java, Perl, and Python as well. And while Python has been around for nearly as long as Perl, it just hasn’t gained the popularity that Perl has.

The post also touches upon a common phenomenon in the technosphere: the technically superior technology tends to get killed, or at least overshadowed, by something that frequently does not work as well and is much uglier. Hence the famous VHS vs. Betamax debate, the old Mac vs. Windows culture wars, Netscape vs. IE. The list goes on and on. In his other post, which expands on the first one, Yegge also talks about Java vs. Smalltalk. And now we enter the realm of programming language choice among web developers. Originally it was Perl vs. Python, but now Ruby is racing to the forefront.

What interests me most, though, is Yegge’s discussion of culture. While his discourse lives within the rarefied confines of web development, it actually describes how or why memes in general live or die.

Memetics is a new way of looking at an old phenomenon, which is the transmission of human culture. The meme is a concept discussed by Richard Dawkins at the end of his book The Selfish Gene (with a big shoutout to Mr. G, my biology and chemistry teacher in high school, who first introduced me to this book). Bear in mind this was first published in 1976, when few people owned personal computers and even fewer had online access. In any case, the meme is the cultural equivalent of the gene. Further analyzed through computer science metaphors: whereas a gene is a compact piece of software (wetware?) that provides a specific function in the ongoing process known as life, the meme is a similarly compact piece of cultural programming. It can provide functions as fundamental as how to raise a child or how to cook; slightly more technologically advanced ones, such as how to read and write; something even more complex, like the scientific method; or things perhaps less useful but maybe more sublime, such as blogging or webapp development. With the prevalence of the Internet, the daily exchange of memes can be quantifiably tracked (especially with the existence of such things as del.icio.us), and the meme becomes almost a palpable thing, well, as palpable as anything can get on the Internet.

Anyway, the reason I ventured off on that excursus is that what Yegge labels “marketing” is perhaps how culture gets transmitted in the first place. This is one of the things that makes us uniquely human, something we’ve been doing since long before the rise of the Internet, hell, before the existence of capitalism, even predating the existence of alphabets and syllabaries.

The reason the technically superior technology rarely wins is that human brain function does not operate by mathematical logic alone. Various authors have been writing about a new way of understanding human brain function, foremost among them Antonio Damasio [Wikipedia entry], who, among other works, wrote the book Descartes’ Error, which discusses the long-standing fallacy of trying to separate logic and emotion into two disconnected subsystems operating in the brain. Other pioneers in this field include Elkhonon Goldberg and Oliver Sacks [Wikipedia entry], who have both written popular works discussing current research in neuropsychobiology. The upshot is that what we consider normal human reasoning does not exist if emotion is disconnected from the system, a fact that can be attested to if you know someone who has an autism spectrum disorder or, probably rarer, one of those bizarre focal brain lesions that specifically wipe out right-brain function while leaving the left brain intact.

In other words, what Yegge seems to call “marketing” is really the transmission of memes using both channels of reason and emotion (and I realize that I am partaking in the very fallacy that Damasio describes by separating them out like that.) In fact, he specifically discusses how Perl won over a lot of people precisely because of the touchy-feeliness of the Perl community, and how it seems that the Python community is alienating a lot of people by being more aloof.

posted by mahiwaga

Gödel’s incompleteness revisited

What my last post gets me thinking about is the reason why emotion is necessary for proper human thought, and I think the reason is the fact that life is filled with uncertainty. The irony is that the scientific method has come to this exact same conclusion, formalized physically as Heisenberg’s Uncertainty Principle and mathematically as Gödel’s incompleteness theorems, and it really isn’t until recently that the selective advantage of emotion has been discussed in a scientific manner.

I think what emotion allows us to do is process uncertainty properly. And I’m not just talking about using intuition; I’m talking about being able to handle mathematical unknowns, which I think is a uniquely human characteristic. A simple way to think of this is that emotion allows us to (subconsciously) weigh unknowns quantitatively: is this variable important or not, for example?
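To make that concrete, here is a minimal sketch of a decision rule that weighs unknowns with prior “gut feeling” defaults instead of halting on them. (Every name here, from `PRIORS` to `picnic`, is invented for illustration; this is not any real API.)

```python
# Hypothetical sketch: a scoring rule that never halts on an unknown.
PRIORS = {            # "gut feeling" defaults for unobservable variables
    "weather": 0.7,   # how likely the weather cooperates, absent data
    "traffic": 0.4,   # how likely traffic is light, absent data
}

def score(option, observed):
    """Score an option, substituting an emotional prior for any unknown."""
    total = 0.0
    for var, weight in option["depends_on"].items():
        value = observed.get(var)          # may be None: an unknown
        if value is None:
            value = PRIORS.get(var, 0.5)   # weigh the unknown; don't freeze
        total += weight * value
    return total

picnic = {"depends_on": {"weather": 2.0, "traffic": 1.0}}
print(score(picnic, {"traffic": 0.9}))     # weather unknown, yet we decide
```

The point of the sketch is only that a missing value gets a weighted stand-in rather than stopping the computation, which is the behavior the paragraph above attributes to emotion.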

Now, whatever tricks we program into our machines, we can’t quite get computers to do this yet. A computer program will generally halt until the value of an unknown variable is known. In advanced microprocessors that support pipelining and out-of-order execution, not absolutely everything has to be calculated before the program can proceed, but if one thread needs an answer that another thread hasn’t finished calculating yet, the first thread will just hang there until a value is available.

I don’t think we’ve gotten to the point where we can keep a program running even if the answer to a particular variable is as yet unknown. We can program in various tricks so that a program doesn’t need that value until it has been calculated, but pretty much any program will crash if you feed it an undefined value. In stark contrast, human beings handle and run with undefined values all the time. I mean, how often have you been in a situation where you knew the value of every variable involved? Where you had absolutely no uncertainty about what was going to happen next? This is, like, never, and I think our ability to emotionally weigh unknowns is the only thing that keeps us from freezing up all the time. Without the ability to emotionally weigh unknowns, we would never be able to make decisions (and, interestingly, this is exactly what happens to people whose emotional subsystems are disconnected from their reasoning loop: they can’t make decisions to save their life).
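The contrast can be sketched in a few lines of Python: `Future.result()` with no timeout blocks until the other thread delivers a value, while a timeout plus a fallback lets the caller run on a guess, the way the paragraph above describes people doing. (The two-second delay and the default of 40 are invented for illustration.)

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow_calculation():
    time.sleep(2)          # the other thread is still working on the answer
    return 42

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_calculation)
    try:
        # result() with no timeout would hang here for the full 2 seconds
        value = future.result(timeout=0.1)
    except TimeoutError:
        value = 40         # proceed on a default guess instead of freezing

print(value)               # prints 40: we ran with an "unknown" value
```

The machine only does this because we explicitly coded the fallback; the claim above is that brains do something like the `except` branch by default.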

There is an idea out there that the way we handle uncertainty is through quantum mechanical means: every undefined value is simply a wave function that hasn’t collapsed yet, and our brains are just gigantic quantum computers that somehow figure out the state of reality by collapsing wave functions. I’m not sure we really need to get quantum uncertainty involved in human emotional reasoning. I think the brain still does things mechanistically.

Since the brain is the product of millions of years of evolution, it is probably programmed with default values for certain things. What allows us to learn, however, is the fact that the brain is also a storage system and a pattern recognition engine. Maybe the brain does hyperthread the way a microprocessor does, with threads that hang until values are calculated for various unknowns. The thing is, the brain doesn’t need a correct value to continue processing. All that matters is that some kind of value is returned, and maybe sometimes that value is simply the default value that is genetically encoded. This works because evolution has fine-tuned our systems to gel very well with reality, even if our consciousness loop isn’t really aware of how it works.

The reason I bring up the storage device and pattern recognition engine analogies is that, because of these functions, the brain doesn’t have to rely on genetically derived defaults. What probably happens is that the brain recognizes it is in a situation it has never encountered before, searches its stored memories for something even remotely similar, and uses that previously calculated value as a default instead. Since reality tends to be more monotonous than not, and since evolution has fine-tuned our thresholds for monotony and uniqueness to correspond roughly with reality, chances are the brain will find a match. (Then again, some of us go through life associating things rather freely.)

The thing is, since we are constantly, literally second-by-second, being faced with decisions and interactions and reality in general, we quickly build a very rich database of memories. (And, by the way, it’s no accident that we don’t reach our peak intellectual function until after a couple of decades of absorbing reality.) With every successful match with reality, the associations strengthen. We learn.
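As a toy version of the loop just described (every name and threshold here is invented), the brain-as-pattern-matcher might look something like this: match the new situation against stored memories, fall back to a hard-coded “genetic” default when nothing is close enough, and store each new experience afterward.

```python
# Toy sketch of recall-with-fallback; names and numbers are all invented.
GENETIC_DEFAULT = 0.5          # evolution's hard-coded guess
THRESHOLD = 1.0                # how close "remotely similar" has to be

memories = []                  # stored (situation_vector, outcome) pairs

def distance(a, b):
    """Euclidean distance between two situation vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recall_or_default(situation):
    """Return the outcome of the nearest stored memory, or the default."""
    if memories:
        past, outcome = min(memories, key=lambda m: distance(m[0], situation))
        if distance(past, situation) <= THRESHOLD:
            return outcome     # reuse a previously calculated value
    return GENETIC_DEFAULT     # never halt: fall back to the built-in guess

def experience(situation, outcome):
    memories.append((situation, outcome))   # "with every match... we learn"

print(recall_or_default((1.0, 2.0)))        # no memories yet -> 0.5
experience((1.0, 2.0), 0.9)
print(recall_or_default((1.1, 2.1)))        # close match -> 0.9
```

The essential property is the last line of `recall_or_default`: there is no code path that blocks or raises, which is the no-freezing behavior the text attributes to us.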

(As another aside, I think the challenge for a machine hoping to pass the Turing test is being able to handle Gödelian incompleteness with the aplomb that the average human being does.)

Where does emotional processing come in? Well, it spares us from handling all this decision making in our consciousness loop. The memory retrieval and pattern recognition probably occur at a relatively low level. What makes us differ markedly from a computer is that the consciousness loop doesn’t need to know the exact value the brain has retrieved. Instead of transmitting a full, high-level, fleshed-out picture of the data these subsystems have retrieved and/or calculated, the subsystems just have to emit emotional signals. Remember that axonal transmission is pretty slow, with signals measured in milliseconds (in contrast to computer hardware, where signal transmission is measured in microseconds and nanoseconds). Emotion enables us to function effectively in real time. Imagine if you had to make all decisions at the conscious level; I can tell you that this wouldn’t work very well in the emergency room or on the battlefield.
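A hedged sketch of that bandwidth argument: the low-level subsystem does the expensive pattern matching, but only a coarse valence signal ever crosses to the conscious level. (All the function names and memory sets below are invented.)

```python
# Invented illustration: detail stays low-level, only valence crosses up.
def low_level_appraisal(situation):
    """Expensive pattern matching over (made-up) stored experience."""
    bad_memories = {"dark alley", "angry dog"}
    good_memories = {"home", "favorite cafe"}
    if situation in bad_memories:
        return {"detail": "matched threat memory", "score": -0.9}
    if situation in good_memories:
        return {"detail": "matched safe memory", "score": +0.8}
    return {"detail": "no match", "score": 0.0}

def emotional_signal(situation):
    """Only a coarse signal reaches the conscious level, not the detail."""
    score = low_level_appraisal(situation)["score"]
    if score < -0.5:
        return "avoid"
    if score > 0.5:
        return "approach"
    return "neutral"

print(emotional_signal("dark alley"))   # "avoid": no need to know why
```

The consciousness loop in this sketch gets one of three words instead of the whole appraisal, which is the compression the paragraph above argues makes real-time behavior possible on slow wetware.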

posted by mahiwaga

evolution and worse-is-better

Again, perusing posts about computer systems implementation, I came upon the debate between “the right thing” and “worse is better,” and I can’t help but think about the way natural selection works.

The reason I think intelligent design is dead in the water is that natural selection has engineered things that are clearly not “the right thing” (using the connotation described in the article I cited, i.e., a definition limited to systems implementation). The example that comes to mind easiest is the way the human retina is designed. For those who haven’t taken anatomy, the retina is the light-sensing organ at the back of the eye, which basically converts photons into action potentials, that is, the electrochemical nerve signals that are the lingua franca of the brain. It works pretty much like film (or, to be more modern, a CCD). But the bizarre thing is that the light-receptive neurons transmit their signals to higher-level neurons that sit in front of them, meaning the light receptors are partially obscured by a tangle of neurons. Clearly, a sane engineer would not do this, and, in fact, in other organisms (cephalopods such as the octopus, for example) this is not the case: the saner design, with the light receptors in front and the higher-level neurons behind them, was implemented instead.

Now, mind you, both implementations do what they’re supposed to do. You and I can still see all right even though we have a “worse-is-better” implementation. And clearly, evolution also demonstrates that this isn’t necessarily the optimal design, because other organisms have the saner “right-thing” implementation. I just can’t wrap my mind around the idea that a sentient being would actually implement sight both ways. But what do I know, maybe God has a sick sense of humor.

But back to “worse is better” and evolution. There are clear reasons why natural selection might favor the easier implementation, the biggest of which is entropy. In the design of organisms, more features do not often confer a selective advantage.

The simplest example is antibiotic resistance. One would think that all bacteria would evolve antibiotic resistance, since it would ensure their (individual) prolonged survival. But, thanks to thermodynamics, there is a cost to harboring an antibiotic resistance gene. By sheer kinetics alone, a bacterium carrying such a gene will not replicate as fast as an antibiotic-sensitive strain. This fact alone has probably saved us despite our continuous abuse of antibiotics. What probably happens (no one has proved this experimentally) is that the antibiotic-sensitive strains simply reproduce quicker and outcompete their antibiotic-resistant brethren for nutrients. Hence, we can live infection-free lives despite constantly using triclosan-containing handwash.
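Back-of-the-envelope, the kinetics argument looks like this. The growth rates below are invented, not measured; the sketch only shows that a small, steady replication penalty compounds until the sensitive strain dominates, which is the claim above, not data.

```python
# Toy competition between two strains; rates are illustrative, not measured.
def simulate(generations, r_sensitive=1.00, r_resistant=0.95):
    """Return the sensitive strain's share after discrete generations."""
    sensitive, resistant = 1.0, 1.0        # equal starting populations
    for _ in range(generations):
        sensitive *= 1 + r_sensitive       # faster replication: no gene cost
        resistant *= 1 + r_resistant       # slower: pays for carrying the gene
    return sensitive / (sensitive + resistant)

print(round(simulate(50), 3))   # sensitive fraction after 50 generations
```

With no antibiotic present in the model, the penalty alone pushes the resistant strain toward a shrinking minority, generation by generation.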

Hence the reason some organisms have sane retinas while we have ass-backward ones: if we evolved from older but similar implementations of life, the laws of thermodynamics would favor tinkering with the existing design instead of completely reinventing it. So maybe somewhere deep in the evolutionary past, some stray cosmic ray reversed the way retinas grew in the eyes of some organisms, and since this did not create a significant selective disadvantage, both implementations co-exist.

posted by mahiwaga

desktop blogging: blog thing

I’m testing out Blog Thing, a simple Cocoa app that supports the MetaWeblog API. Ah, the wonders of the Web (version 2.0).

ADDENDUM: Hey, it works! Hmm, it apparently doesn’t let me type HTML directly, however, which is unfortunate, because I’d gotten used to doing that back when I was still blogging with my mish-mash kludge of a Makefile, Perl scripts, and XSLT. Well, at least the web-based text editor lets me do what I want to do. Time to find another desktop blogging client.

Powered by Bleezer

posted by mahiwaga

desktop blogging continued: bleezer

So now I’m trying Bleezer which is written in Java. Ah well, no Cocoa for me, I guess. But, this, on the other hand, has a lot more features, many of which I will probably never get to use. Neat.

Powered by Bleezer

posted by mahiwaga