Josh Catone writes against the existence of Web 3.0, arguing that the version numbers don’t mark any specific discontinuities the way real major version changes would.
This is true, but it doesn’t mean that Web 1.0, 2.0, and 3.0 aren’t real entities. The proliferation of buzzwords can get confusing, and most of them only obfuscate the relevant differences.
- The Internet is the platform
- Harnessing the collective intelligence
- Data as the “Intel Inside”
- Software above the level of a single device
- Software as a service
— Tim O’Reilly’s definition of Web 2.0
I’ve been saying that it’s when we apply all the principles we’re learning about aggregating human-generated data and turning it into collective intelligence, and apply that to sensor-generated (machine-generated) data. — Tim O’Reilly’s definition of Web 3.0
Not exactly the clearest differentiation, really.
Having been a netizen since 1994 as an end-user, never really as a serious developer, I would argue that there are meaningful distinctions. They’re not so much versions as layers. It’s the same way that the web browser sits on top of the OS, and the OS sits on top of a kernel and the underlying hardware. (Unix users would argue that what we commonly call the OS is more accurately called the Desktop Environment: the GUI and the underlying frameworks/APIs.)
I imagine Web 1.0 most likely refers to the so-called Eternal September, when the unwashed masses discovered GeoCities, Tripod, and Xoom and started posting Personal Home Pages. This era was dominated by a top-down approach. Authors (from your grandmother to university libraries and the NIH) would post random material, and people would try to navigate hierarchical, human-generated directories to find it. If a site wasn’t listed in a directory, it was nigh-impossible to find. Sure, all the major search engines were spawned in this era (Yahoo, Lycos, HotBot, Infoseek, AltaVista, Inktomi, and lastly Google were all in play by 1998), but search was still in its infancy, and the human-curated indices were far more useful.
Web 1.0 was a one-way conversation, with people posting stuff to the ether. And while e-mail (and a little later, IM) provided some sort of back-channel, this was it, there weren’t no mo’.
Web 2.0 arose with the coming of the blog. This immediately shifted the dynamic to a two-way (or many-way) conversation. Comments, trackbacks, blogrolls, Google juice, Google bombing. The index was no longer determined solely by expert editors. PageRank gave the HTML/JavaScript-savvy blog writer a say.
XML spawned XML-RPC and SOAP. Amazon and Google and then everyone released their APIs. The era of the web service was on.
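For the curious, here’s a minimal sketch of what a web-service call of that era looked like on the wire: a method call serialized as XML and POSTed over HTTP. The endpoint URL and method name below are invented for illustration, and I’m using a modern fetch-based client rather than period-accurate tooling.

```typescript
// A sketch of an XML-RPC request: a method call serialized as XML,
// POSTed over HTTP. The endpoint URL and method name are hypothetical.
const requestBody = `<?xml version="1.0"?>
<methodCall>
  <methodName>catalog.lookupISBN</methodName>
  <params>
    <param><value><string>0596101015</string></value></param>
  </params>
</methodCall>`;

async function callXmlRpc(): Promise<string> {
  const response = await fetch("https://api.example.com/RPC2", {
    method: "POST",
    headers: { "Content-Type": "text/xml" },
    body: requestBody,
  });
  // The server answers with a <methodResponse> XML document.
  return response.text();
}
```

SOAP layered envelopes, schemas, and tooling on top of essentially this same pattern.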
AJAX didn’t really come to the fore until Google Maps launched in 2005, but the enterprise had been deploying apps built on XMLHttpRequest for a while by then, including the ER at one of the hospitals I work at.
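To make that concrete, the core of the technique is just a background request that updates the page in place, with no reload. A minimal sketch (the URL, element id, and polling interval are hypothetical):

```typescript
// A minimal AJAX sketch: fetch new data in the background and update the
// page without a full reload. The URL and element id are hypothetical.
function refreshBoard(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/er/patient-board.txt", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    // readyState 4 means the response has arrived in full.
    if (xhr.readyState === 4 && xhr.status === 200) {
      const board = document.getElementById("board");
      if (board) board.textContent = xhr.responseText; // update in place
    }
  };
  xhr.send();
}

// Poll for fresh data every 30 seconds, no page refresh required.
setInterval(refreshBoard, 30000);
```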
True web apps, mimicking the behavior of desktop apps, now exist. Stand-alone desktop apps dedicated to a single web service (also known as site-specific browsers) are all the rage.
So what is Web 3.0? One of the points O’Reilly makes is “software above the level of a single device.” We continue to evolve away from platform-dependence and browser-dependence. People in Europe and Asia have long been accessing the Web from their mobile phones; the U.S. has finally caught up. With the advent of the BlackBerry and now the iPhone, standards compliance is no longer a theoretical, pie-in-the-sky concern. Non-standard code will cost you eyeballs, and lost eyeballs will cost you money.
O’Reilly also points to what I like to call the Central Dogma of HTTP: the transmission of data from server to client filesystem to client appliance. An example is the iTunes Music Store: iTunes lets you download music to your hard drive, and then you can transfer it to your iPod. The direction of transmission can also be reversed: you can upload the pictures from your digital camera to iPhoto, and iPhoto can upload everything to your .Mac account.
The iPod and the digital camera are appliances without direct access to the Internet; their data has to pass through a computer. The iPhone has broken this distinction, as have things like the Apple TV (albeit not as popular).
So part of Web 3.0 is ubicomp (ubiquitous computing), or at least a precursor to it.
The other point O’Reilly makes that will be a definite feature of Web 3.0 is the consumption and processing of data by non-human entities. This already happens to a degree, with the ongoing computation of PageRank and the various automated algorithms that try to determine the most interesting pages on the web. It also happens with targeted advertising: AdSense, and the recommendation engines of Amazon and iTunes.
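As a toy illustration of machines consuming link data, here is the power-iteration idea at the heart of PageRank, in a deliberately simplified form. This is a sketch of the published concept, not Google’s actual implementation:

```typescript
// A toy power-iteration sketch of the idea behind PageRank.
// Deliberately simplified for illustration; not Google's actual algorithm.
// links[i] lists the indices of the pages that page i links to.
function pageRank(links: number[][], damping = 0.85, iterations = 50): number[] {
  const n = links.length;
  let rank: number[] = new Array(n).fill(1 / n);
  for (let iter = 0; iter < iterations; iter++) {
    // Every page gets a baseline share from the "random surfer" term.
    const next: number[] = new Array(n).fill((1 - damping) / n);
    for (let i = 0; i < n; i++) {
      const outDegree = links[i].length;
      if (outDegree === 0) continue; // dangling pages leak rank in this toy version
      for (const target of links[i]) {
        // Each page distributes its current rank evenly across outbound links.
        next[target] += (damping * rank[i]) / outDegree;
      }
    }
    rank = next;
  }
  return rank;
}

// Three pages: 0 links to 1, 1 links to 2, 2 links to 0 and 1.
console.log(pageRank([[1], [2], [0, 1]]));
```

The point isn’t the arithmetic; it’s that no human reads any of these pages. The machines rank them by consuming the link structure we generate as a side effect of writing.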
The deployment of autonomous agents to filter the deluge of information coming our way will become more routine, more customizable, more multi-modal. You’ll be able to take a picture of something in meatspace and ask Google what it is, or eBay if there are any more for sale. You’ll be able to hum the first five notes of a song, or sing a part of a song lyric, and iTunes and last.fm will tell you what it is, what album it’s on, and who makes music that is similar.
You’ll be able to tell Yelp where you are, in real time, and an automated algorithm, primed by enthusiastic Yelpers, will tell you where you should go next. Facebook will be able to send you an alert when one of your friends is nearby, or when one of your exes is.
So the next level involves persistent processes and programs roaming the net and accumulating data that is of specific interest to you. Spiderbots on steroids, programmed specifically by you, without even writing a single line of code. It won’t be just about typing in search terms any more.
Parallel, and necessary, to these developments is the continued evolution of the so-called Semantic Web. We’ve started with tagging and folksonomies. Soon automated algorithms will be able to do a reasonable job of some of this tagging themselves. Everything will have an XML representation; the spime will reign supreme.
If the evolution of Web 1.0 and Web 2.0 is any guide, it will likely take another 5-10 years before we are recognizably in a Web 3.0 world. On the other hand, technology tends to auto-catalyze its own evolution, so maybe it’ll be quicker, although the economic slowdown will likely stymie that.