Jun 22, 2004

Most of the people who submitted papers to the XML 2004 Conference will have heard by now whether their talk was accepted, waitlisted, or rejected. Picking the papers is quite an involved process; since the quality of the conference depends on the quality of the papers, it's also an important process. Every conference picks papers in a different way; here are some notes on how the conference I chair does this.

First, the numbers. This year the conference is half a day shorter and so we could only take 94 papers. A day before the submission deadline, we had about 70 submissions in the database. By the time the deadline arrived, we had over 250 submissions in 15 areas. Fortunately only one reviewer needed to get assigned papers early!

All the papers were assigned to the 99 reviewers, taking into account reviewer interests and potential conflicts of interest (working for the same company, etc.), making sure that each paper was assigned to at least 5 reviewers and each reviewer had between 10 and 15 papers to review. The web-based system I designed is simple and doesn't show the reviewers the speaker information, so it's a blind review system (assuming the speaker doesn't put their info in the abstract!).
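The post doesn't describe how the assignment itself is computed, but the constraints above (at least 5 reviewers per paper, 10 to 15 papers per reviewer, interest matching, no conflicts) suggest something like the following minimal greedy sketch. All the names here (`Paper`, `assign_papers`, the `interests` and `conflicts` structures) are my own hypothetical illustrations, not the actual system:

```python
from collections import defaultdict
from dataclasses import dataclass

MIN_REVIEWS_PER_PAPER = 5     # "each paper was assigned to at least 5 reviewers"
MAX_PAPERS_PER_REVIEWER = 15  # upper end of the 10-15 papers per reviewer range

@dataclass
class Paper:
    id: int
    topic: str
    author_org: str  # never shown to reviewers; used only for conflict checks

def assign_papers(papers, reviewers, interests, conflicts):
    """Assign each paper to MIN_REVIEWS_PER_PAPER reviewers, preferring
    reviewers interested in the paper's topic and carrying the lightest
    load, and skipping anyone with a conflict (e.g. same employer).
    `interests` maps reviewer -> set of topics; `conflicts` maps
    reviewer -> set of organizations they can't review."""
    load = defaultdict(int)          # reviewer -> papers assigned so far
    assignments = defaultdict(list)  # paper id -> assigned reviewers

    for paper in papers:
        eligible = [r for r in reviewers
                    if paper.author_org not in conflicts.get(r, set())
                    and load[r] < MAX_PAPERS_PER_REVIEWER]
        # Topic matches sort first (False < True), ties broken by load.
        eligible.sort(key=lambda r: (paper.topic not in interests.get(r, set()),
                                     load[r]))
        for r in eligible[:MIN_REVIEWS_PER_PAPER]:
            assignments[paper.id].append(r)
            load[r] += 1
    return assignments
```

A real assignment would probably need a second pass to enforce the 10-paper minimum per reviewer; the greedy pass only respects the maximum.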

After the reviewers had finished, the Planning Committee had a selection meeting to pick the papers. We had a number of criteria:

  1. pick good papers
  2. balance the program, so there are papers on all the topics of interest to our audience and not too many on any one subject
  3. broaden the speaker base to ensure a range of opinions and knowledge

The grades and comments from the reviewers were essential in this process. I can't imagine trying to select papers without the help that the reviewers give. I also don't think it would be as good a result; James Surowiecki's piece in Wired called Smarter Than the CEO gives some good reasoning as to why group decisions are often better than individual ones. The Planning Committee looked at the grades and the comments and read the abstract for every talk that was submitted, and in general we took the highest-scoring talks, while taking the other criteria above into account.
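The committee's deliberation can't really be reduced to code, but the mechanical core of it, taking the highest-scoring talks subject to topic balance, might look roughly like this sketch. The `per_topic_cap` value and the data shapes are assumptions of mine, and criterion 3 (broadening the speaker base) is a human judgement the sketch leaves out entirely:

```python
from collections import defaultdict
from statistics import mean

def select_papers(grades, topic_of, num_slots=94, per_topic_cap=10):
    """Rank papers by average reviewer grade, then accept from the top
    down while capping any single topic area (criterion 2 above).
    `grades` maps paper id -> list of reviewer grades; `topic_of`
    maps paper id -> topic area."""
    ranked = sorted(grades, key=lambda pid: mean(grades[pid]), reverse=True)
    accepted, waitlist = [], []
    per_topic = defaultdict(int)
    for pid in ranked:
        topic = topic_of[pid]
        if len(accepted) < num_slots and per_topic[topic] < per_topic_cap:
            accepted.append(pid)
            per_topic[topic] += 1
        else:
            waitlist.append(pid)  # next-highest scorers form the waitlist
    return accepted, waitlist
```

In practice the waitlist was kept per topic area (as described below), so a real version would track a separate short list for each topic rather than one flat list.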

And so we ended up with 94 papers that were accepted, we waitlisted the next highest-scoring talks in each topic area, and we had to reject the rest for this conference. Rejecting talks is always hard: it's often the case that the author wrote a good abstract on an interesting topic, but someone else wrote one on a similar topic that was just that bit more interesting to the reviewers (who, after all, are representative of the audience). Some of the abstracts, of course, weren't very good: they were too short, or too vague, or didn't describe what the speaker intended to talk about, but rather why the conference should have a talk on that subject. Since the reviewers didn't have speaker information, they made their judgement solely on the quality of the abstract.

We have a formal waitlist for a number of reasons. Speakers often cancel (sometimes at the last minute) and we want a high-quality talk for those speaking slots. We also keep a few speaking slots open for late-breaking news, which are filled in early October. If we don't get enough good talks for those slots, the waitlisted speakers will get them.

It looks like a good set of talks this year; should be an interesting conference!
