The technical component of Web 2.0 includes XML, Ajax, the Atom Publishing Protocol (APP), various programming languages, plug-ins and widgets, and the REST architecture. All of these have a role to play in supporting the web sites that incorporate Web 2.0 features, even though many of them predate the Web 2.0 phenomenon. There are far too many interesting technical features for me to talk about all of them in one post, of course, but this post should at least introduce you to some of the more interesting acronyms.
Obligatory tag cloud: this one contains some technical terms
Developing Web 2.0 applications is easier than developing large enterprise-style applications. The developer toolkits are a lot easier to use, and it’s much faster to create something. 37signals, who make Basecamp amongst other tools, say they put it up in four months with 2.5 developers using Rails, a Ruby development framework. For developers there’s now a range of language options, from PHP to C++ or Java EE, with newer languages and frameworks like Ruby and Rails grabbing mindshare as well. People can program in the system they’re comfortable with, and although there’s a certain amount of snooty disparagement of each language from proponents of some other one, what matters in the end is using the right tool for the job. I’ve seen bad code written in Java and good code in PHP, and a system that does less but does it well is preferable to my mind to one that does a lot really badly.
Ajax (Wikipedia link) is another important Web 2.0 technology. It’s really shorthand for a bunch of technologies (HTML, CSS, the DOM, JavaScript) tied together, using the browser to create a richer environment by combining scripting with a way to request information from the server without forcing the entire page to be reloaded. It’s powerful and interactive and can be much faster than other methods of adding interactivity to web pages. There are lots of books on the subject, which is a reasonable indicator of the interest in it.
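To make that concrete, here’s a minimal sketch of the basic Ajax round trip. The function name and the callback shape are mine for illustration, not from any particular toolkit, and this is browser-only code:

```javascript
// Minimal Ajax round trip: fetch a fragment from the server and hand it
// to a callback, without reloading the page.
function fetchFragment(url, onSuccess) {
  var xhr = new XMLHttpRequest();       // non-standard in older browsers
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {         // 4 = request finished
      if (xhr.status === 200) {
        onSuccess(xhr.responseText);    // e.g. drop it into a <div>
      }
    }
  };
  xhr.open('GET', url, true);           // true = asynchronous, the "A" in Ajax
  xhr.send(null);
}
```

In a real page you’d call something like `fetchFragment('/some/data', function (html) { document.getElementById('result').innerHTML = html; })` from an event handler, updating just one part of the page.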
Since it combines a lot of different technologies, debugging can be a problem. Some basic rules that I’ve found useful are: first make sure your HTML/XHTML validates, then make sure your CSS validates, then use Firefox with the Firebug extension to debug the rest. Once you have that working, you can make the changes for other browsers as appropriate.
Poorly written Ajax does have some problems, such as results that can’t be bookmarked, or a back button that doesn’t go back to the right place. The big problem is the non-standardized XMLHttpRequest object in JavaScript, the object that lets your page talk to the server and get the right information. The way it works varies between different browsers and between different versions of the same browser (IE 6 to IE 7, for example). Although the W3C is starting to work on standardizing it, that will take some time. Another problem is the “A” in Ajax — it’s asynchronous, which means that internet latency can be an issue.
These problems can be solved. There are Ajax toolkits available which hide the XMLHttpRequest and other browser incompatibilities; some applications have figured out the back-button and bookmarking-URL issues; and the asynchronous issues can be dealt with by breaking the applications up into small segments which take into account the fact that the other end may never respond. As a result of these toolkits and techniques, Ajax is now a major component of many websites, even those that aren’t for Web 2.0 startups.
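The sort of thing those toolkits hide is the object-detection dance needed just to get a request object in the first place. A hand-rolled version looks something like this — a sketch of the widely used fallback pattern of the time, not code from any specific library:

```javascript
// Create a request object the portable (circa-2007) way: the standard
// object where available, ActiveX fallbacks for older Internet Explorer.
function createXHR() {
  if (typeof XMLHttpRequest !== 'undefined') {
    return new XMLHttpRequest();
  }
  if (typeof ActiveXObject !== 'undefined') {
    try {
      return new ActiveXObject('Msxml2.XMLHTTP');    // newer MSXML versions
    } catch (e) {
      return new ActiveXObject('Microsoft.XMLHTTP'); // older MSXML versions
    }
  }
  return null; // no Ajax support at all
}
```

Once every call goes through a wrapper like this, the rest of the application never has to care which browser it’s running in.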
REST (Representational State Transfer) is an architectural style that explains a lot of why the web is so successful. Roy Fielding’s PhD thesis was the first place where it was codified (and he coined the term). Basically the idea is that everything you can reach on the web should be a resource with its own web address (a URI), that you manipulate with the standard HTTP verbs, and whose representations have other URIs embedded in them. There’s more to REST, of course, and I’m sure the purists will take issue with my over-simplified description.
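As a toy illustration of that uniform interface — the data and function names here are invented, and a real server would of course speak HTTP rather than function calls:

```javascript
// In-memory "resources", addressed by id the way URIs address them on the web.
var books = { 42: { title: 'RESTful Example' } };

// The verb carries the meaning; the resource is only ever named, never "actioned".
function handle(verb, id, body) {
  if (verb === 'GET')    return books[id] || null;       // read a representation
  if (verb === 'PUT')    { books[id] = body; return body; } // create or replace it
  if (verb === 'DELETE') { delete books[id]; return null; } // remove it
  return undefined; // anything else isn't part of the uniform interface
}
```

The point of the sketch is that there is one small, fixed set of operations for every resource, rather than a new ad-hoc API for each application.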
REST is widely used in what I call Ajax APIs — the APIs that various applications have that let people get access to the data. Mash-ups, where you take data from one service and combine it with another service, use these APIs all the time. The classic example of a mash-up was to take Craigslist rental data and mash it with Google mapping data onto a third web site (HousingMaps) without Craigslist or Google being involved to start with. There are now vast numbers of mash-ups and lots of toolkits to help you create them. One problem with mash-ups is that the people providing the data may not care to have you take it (for example, if they run ads on their sites); the Web 2.0 solution to that is that if you own the data, you need to add more value to it that can’t be mashed as easily. Amazon has book reviews on top of the basic book data, for example, so people use Amazon as a reference link.
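In miniature, a mash-up is just a client-side join between two services’ data. Both data sets and all the field names below are invented for illustration — imagine each came back from a different site’s API:

```javascript
// Data from service A: rental listings.
var listings = [
  { address: '1 Main St', rent: 900 },
  { address: '9 Oak Ave', rent: 1200 }
];

// Data from service B: geocoded coordinates for those addresses.
var geo = {
  '1 Main St': { lat: 37.77, lng: -122.42 },
  '9 Oak Ave': { lat: 37.80, lng: -122.27 }
};

// The "mash": join the two sets on the address, ready to plot on a map.
var mashed = listings.map(function (l) {
  return { address: l.address, rent: l.rent, point: geo[l.address] };
});
```

Neither service had to know about the other; all the combining logic lives in the third party’s page.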
The concept of mash-ups goes further into platforms that support plug-ins and widgets. One of the appealing things about Facebook is the fact that application developers can write widgets to do various things (from the trivial to the heavy-weight) that use the information that Facebook provides (this has privacy implications, but more about that in a later post). In a sense, this is about sites (usually commercial sites) using the social aspect of Web 2.0 (user-created content) to provide more features to their users, and is tightly tied to the process implications of Web 2.0 (more about that in the next post).
The Atom Publishing Protocol is fairly recent. Atom is the cleaned-up version of RSS and gives you a feed of information, tagged with metadata such as author, published date, and title. There is now also a protocol to go with it, designed for editing and publishing web resources using HTTP. It can be used as a replacement for the various blog-based publishing APIs, which were used to allow people to post to their blogs from different editors, but it’s now obvious that it can be used to carry other information as well, and not just for blogs. Since it’s a REST-based API that uses basic HTTP, it can be used for more general client-server HTTP-based communication. A good overview is on the IBM developer site.
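The protocol itself is plain HTTP: to publish something, you POST an Atom entry to a collection URI. The URIs below are hypothetical, and a complete entry would also carry `id` and `updated` metadata; the media type and the 201/Location behaviour come from the APP spec:

```javascript
// A minimal Atom entry, as you'd send it to an APP collection.
var entry =
  '<entry xmlns="http://www.w3.org/2005/Atom">' +
    '<title>Hello, APP</title>' +
    '<content type="text">Posted over plain HTTP.</content>' +
  '</entry>';

// The HTTP exchange, sketched:
//   POST /blog/entries HTTP/1.1
//   Content-Type: application/atom+xml;type=entry
//   <entry>...</entry>
//
//   HTTP/1.1 201 Created
//   Location: /blog/entries/123   <- the new resource's own URI
```

Because the new post comes back with its own URI, it immediately becomes an ordinary web resource you can GET, PUT, or DELETE — which is exactly the REST idea again.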
One of a series on Web 2.0, taken from my talk at the CSW Summer School in July 2007. Here’s the series introduction. Coming up next: process aspects of Web 2.0