Nov 21 2007
 

I’m still fighting off a bad cold, so blogging has been light and will stay light over the next couple of days. Things have still been happening in the world though… or at least in the part of it I inhabit.

Raincity Studios bought Bryght. The Bryght guys (Boris, Roland, Kris, Richard) have been major contributors to the Northern Voice blogging conference I’m involved with. It’ll be interesting to see how the merged company evolves. More details here.

Supposedly the disaster that’s Cambie St south of Broadway will be open for traffic again on December 9; given the past track record, I must admit I’ll believe it when I see it.

I’m not going to touch on the current police/RCMP stories as there are enough people talking about those, and I doubt that I can add anything that others haven’t already said.

The sun is shining, for a change. Forecast is for rain over the weekend, of course, but it’s nice to see blue skies and sunshine even if only for a couple of days (maybe especially if it’s only for a couple of days).

Nov 14 2007
 

I just got back from dropping off three monitors (two big and bulky, the third doesn’t work), two PC boxes (one works: a friend was upgrading so I offered to get rid of the old one; the other doesn’t even turn on any more), two keyboards, a 9600 baud modem, an old router, and assorted associated cables at Free Geek Vancouver, a lovely organisation that cheerfully takes computer hardware, working or not, obsolete or not. They reuse what can be reused (installing Ubuntu + software and donating to a good home), and recycle what can’t be reused.

Reusing or recycling old computers doesn’t solve all the environmental problems caused by the computing industry, but it’s at least a start.

Nov 13 2007
 

I used to, sometimes, go to a conference in Montreal in August called Extreme Markup Languages. Sometimes a bit over-the-top geeky for me, but mostly a good experience. There’s now a new conference in Montreal in August, run by some of the same people and chaired by Tommie Usdin from Mulberry Technologies, called Balisage (the reason for the name is on that web site). It promises to have much the same sort of XML-related geekiness.

The conference committee invited me to be on the Advisory Board, and I’ve signed up as a peer reviewer. Montreal in August, perhaps even with the family in tow, with some fun topics to geek out over, sounds appealing.

Nov 13 2007
 

There are some issues with Web 2.0, mostly in the areas of privacy, security, and copyright: all those things you’d rather you didn’t need to worry about. Take privacy, for example. On many social networking sites people sign up and then put in all their personal information simply because there’s a field there for it. Often those profiles are public by default, rather than private, and often they’re open to search engines as well. So people think their information is private and then discover it isn’t, and have to go searching through menus to find out how to turn on the privacy filters that are turned off by default. In many cases what’s good for the site owners isn’t necessarily good for the users. One big factor in Flickr’s early success was the fact that uploaded photos could be seen by the world unless specifically made private, and lots of users did (and still do) get confused by copyright issues (Creative Commons licenses don’t solve the issue of what “public domain” and the like actually mean).
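
As a purely illustrative sketch of the design choice being argued for here (the class and field names are invented, not taken from any real site), this is what a private-by-default profile might look like: a user who never touches the settings page ends up unlisted and unindexed, and sharing becomes an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Hypothetical settings object; field names are made up for illustration.
    name: str
    visible_to_public: bool = False            # private unless the user opts in
    indexable_by_search_engines: bool = False  # hidden from search by default

# Someone who skips the settings page still gets a private profile.
p = Profile(name="example user")
print(p.visible_to_public, p.indexable_by_search_engines)  # False False
```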

Then there’s the persona issue. I might have a legal but slightly embarrassing hobby that I don’t want work knowing about, so I need to set up a separate online identity for that. People need to think about the implications of this in advance if they don’t want that hobby persona correlated with their “real” one on the basis of an address or phone number or email.

Other problems with the plethora of new Web 2.0 social networking sites: they often don’t understand what privacy and user consent mean. You sign up for something, they ask you to upload your address book to see whether other friends are already there, and the next thing you know they’ve done spam-a-friend and emailed everyone in your address book without your knowledge, let alone your consent. Or they ask you to give them your username and password for some other social networking site under the “trust us, we will do no evil” motto (whatever happened to “trust but verify”?).

There are some solutions to this: users have to be careful about the information they hand out (fake birthdates, anyone?) and start demanding that sites take care of their information. If I want to hand out information to the world, that’s my decision, but it shouldn’t be up to some web site to make that decision for me.

The last of a series on Web 2.0, taken from my talk at the CSW Summer School in July 2007. Here’s the series introduction.

Nov 12 2007
 

The third aspect of Web 2.0, which is often under-appreciated, is the process aspect. This has changed people’s expectations of what software can do, and how it should be delivered. This category includes open source, continual beta and quick release cycles, and some new business models.

[Image: Process Cloud]

Not all of the things that are important in Web 2.0 are new, of course. Open source software has been around for a long time, but I would argue that it has never been as popular as it is now, when more people have the ability to contribute their time and talent to projects for which they’re not directly paid (unless they’re lucky enough to work for a company that supports such projects).

The concepts of continual beta and quick release cycles are new, though. It wasn’t that long ago that you could only buy consumer-level software in boxes with pretty pictures and printed manuals, either in stores or by calling companies. For expensive software that needed consulting services to install and configure, sales reps would visit if you worked for a large enough company. To take part in a beta program you needed to know someone who worked at the company and sign an NDA, and it was a small, tightly controlled circle.

These days the Web 2.0 browser-based applications don’t need hand-holding to install and configure, so the server load is the big constraint on how many people can take part at once. There are several flavours of beta programs: invite some “thought leaders” and ask them to invite their friends in the hope they’ll blog a lot about it (Gmail did this: you got 6 invites, then 50, then you could invite 250 of your closest friends to take part, most of whom already had Gmail accounts); unlimited invites starting with a small circle; sign up on a waiting list; or allow in anyone from certain companies (dopplr does this, with the twist that the members can then invite anyone they like).

The “continual beta” bit comes from the fact that these applications are updated quickly, and the updates are often tried out on some of the users before being rolled out to all. Flickr apparently had hundreds of incremental releases in the 18 months from February 2004 to October 2005 (stated in O’Reilly’s Web 2.0 Principles and Best Practices; I couldn’t find an online reference other than that report). The line between a beta and a non-beta application seems to be a fine one; in many cases the only distinction the user can see is the word “beta” on the web site. Continual releases give users a reason to come back often, and new features can be tested and fixed quickly. Of course, this sort of system doesn’t really work for fundamental software such as operating systems, databases, browsers, identity providers, and directory services, where you want full-on security and regression testing, but it does work for the Web 2.0 applications that run on those bits of fundamental software.
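
To make the “tried out on some of the users before being rolled out to all” part concrete, here’s a minimal sketch of one common way to do it (an illustration only, not how Flickr or anyone else actually implemented it): hash each user ID into a stable bucket, and serve the new feature only to buckets below the current rollout percentage, so the trial group stays consistent as the percentage is ramped up.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """True if this user falls inside the rollout percentage for a feature.

    Hashing the user ID together with the feature name gives each user a
    stable bucket from 0 to 99, so the same users stay in (or out of) the
    trial as the percentage is raised from, say, 5 to 50 to 100.
    """
    digest = hashlib.sha1(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Try a hypothetical new feature on 5% of users first, widen it later.
if in_rollout("user-42", "new-photo-uploader", 5):
    page = "beta version of the page"
else:
    page = "current stable version"
```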

And in keeping with the user-created tenets of Web 2.0, platforms such as Facebook that enable third-party developers to write applications to run on the platform also fulfill the function of continually adding features to the application without the owners needing to code anything, or pay people to add features. The users do it all for them: they use the platform, add features to the platform, and market their added features. The owners supply the hardware and the basic infrastructure (which needs to be stable and reliable) and the users do the rest. At least, that’s the theory and the hope.

Which brings us to the business models. How do people pay for the hardware, software, programmers, and marketing? There are a number of ways in which Web 2.0 companies try to cover the bills for long enough to survive until they can be acquired by some bigger company. One is advertising: Google and its competitors have made it easy for even small web sites, such as bloggers in the long tail, to make some money from ads, and since it’s now cheap or free to build and launch a site, that’s more than enough to pay the bills for some of them. Some sites are free if you watch the ads, but you can pay for an ad-free version. Or free for private use, but costing something for commercial use. And then there’s the variant where a basic account is free, but you have to pay if you want more features, such as uploading files, or uploading more than a certain number of photos. A variant for open source software is that the software is free, but you pay for support, for real help in configuring it, or to get new releases more quickly.
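
As a sketch of how that freemium variant tends to work in practice (the plan names and limits below are invented for illustration, not taken from any particular site), the free tier gets a hard quota and the paid tier removes it:

```python
# Hypothetical plan limits for a freemium photo site; tiers and numbers
# are made up for illustration.
PLAN_LIMITS = {
    "free": 200,   # photos allowed on the free tier
    "pro": None,   # None means unlimited
}

def can_upload(plan: str, photos_already_uploaded: int) -> bool:
    """Allow the upload if the user's plan still has room."""
    limit = PLAN_LIMITS.get(plan, 0)
    return limit is None or photos_already_uploaded < limit

print(can_upload("free", 199))     # True: one slot left
print(can_upload("free", 200))     # False: time to upsell to "pro"
print(can_upload("pro", 100000))   # True: no limit
```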

One of a series on Web 2.0, taken from my talk at the CSW Summer School in July 2007. Here’s the series introduction. Coming up next: some issues with Web 2.0.
