Most of the people who submitted papers to the XML 2004 Conference will have heard by now whether their talk was accepted, waitlisted, or rejected. Picking the papers is quite an involved process; since the quality of the conference depends on the quality of the papers it’s also an important process. Every conference picks papers in a different way; here are some notes on how the conference I chair does this.
First, the numbers. This year the conference is half a day shorter, so we could only take 94 papers. A day before the submission deadline, we had about 70 submissions in the database. By the time the deadline arrived, we had over 250 submissions in 15 areas. Fortunately, only one reviewer needed to have papers assigned early!
All the papers were assigned to the 99 reviewers, taking into account reviewer interests and potential conflicts of interest (working for the same company, etc.), and making sure that each paper was assigned to at least 5 reviewers and each reviewer had between 10 and 15 papers to review. The web-based system I designed is simple and doesn't show the reviewers the speaker information, so it's a blind review system (assuming the speaker doesn't put their info in the abstract!).
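The assignment step can be sketched as a small greedy algorithm. This is a hypothetical illustration, not the actual system: the function and field names are my own, and it simply gives each paper its five least-loaded eligible reviewers while respecting conflicts and the per-reviewer cap.

```python
# Sketch of paper-to-reviewer assignment under the constraints above:
# each paper gets at least 5 reviewers, each reviewer gets at most 15
# papers, and conflict-of-interest pairs are skipped. All names here
# are hypothetical; the real system may work quite differently.

MIN_REVIEWS_PER_PAPER = 5
MAX_PAPERS_PER_REVIEWER = 15

def assign_papers(papers, reviewers, conflicts):
    """papers: list of paper ids; reviewers: list of reviewer ids;
    conflicts: set of (reviewer, paper) pairs to avoid."""
    load = {r: 0 for r in reviewers}          # papers assigned so far
    assignment = {p: [] for p in papers}
    for paper in papers:
        # Prefer the least-loaded eligible reviewers to spread the work.
        eligible = sorted(
            (r for r in reviewers
             if (r, paper) not in conflicts
             and load[r] < MAX_PAPERS_PER_REVIEWER),
            key=lambda r: load[r],
        )
        for reviewer in eligible[:MIN_REVIEWS_PER_PAPER]:
            assignment[paper].append(reviewer)
            load[reviewer] += 1
    return assignment
```

In practice an interests score would feed into the sort key as well, so reviewers see papers in their own areas first.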
After the reviewers had finished, the Planning Committee had a selection meeting to pick the papers. We had a number of criteria:
- pick good papers
- balance the program, so there are papers on all the topics of interest to our audience and not too many on any one subject
- broaden the speaker base to ensure a range of opinions and knowledge
The grades and comments from the reviewers were essential in this process. I can't imagine trying to select papers without the help the reviewers give, and I don't think the result would be as good without them. James Surowiecki's piece in Wired, Smarter Than the CEO, gives some good reasoning as to why group decisions are often better than individual ones. The Planning Committee looked at the grades and the comments and read the abstract for every talk that was submitted, and in general we took the highest-scoring talks while taking the other criteria above into account.
And so we ended up with 94 papers that were accepted, we waitlisted the next highest-scoring talks in each topic area, and we had to reject the rest for this conference. Rejecting talks is always hard. It's often the case that the author wrote a good abstract on an interesting topic, but someone else wrote one on a similar topic that was just that bit more interesting to the reviewers (who, after all, are representative of the audience). Some of the abstracts, of course, weren't very good: they were too short, or too vague, or described why the conference should have a talk on that subject rather than what the speaker intended to talk about. Since the reviewers didn't have speaker information, they made their judgement solely on the quality of the abstract.
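The accept/waitlist/reject split described above can be sketched as a per-topic ranking. Again, this is only an illustration under stated assumptions: each submission carries a topic and an average reviewer grade, the slot counts and field names are hypothetical, and the real process also weighed reviewer comments and program balance by hand.

```python
# Rough sketch of the selection step: rank submissions by average
# reviewer grade within each topic area, accept up to the topic's slot
# count, waitlist the next few, and reject the rest. The field names
# and slot numbers are hypothetical illustrations.

def select_talks(submissions, slots_per_topic, waitlist_per_topic=2):
    """submissions: list of dicts with 'id', 'topic', 'grade' keys.
    slots_per_topic: dict mapping topic -> number of speaking slots."""
    accepted, waitlisted, rejected = [], [], []
    by_topic = {}
    for sub in submissions:
        by_topic.setdefault(sub["topic"], []).append(sub)
    for topic, subs in by_topic.items():
        subs.sort(key=lambda s: s["grade"], reverse=True)
        n = slots_per_topic.get(topic, 0)
        accepted += subs[:n]
        waitlisted += subs[n:n + waitlist_per_topic]
        rejected += subs[n + waitlist_per_topic:]
    return accepted, waitlisted, rejected
```

Grouping by topic before ranking is what enforces the balance criterion: a strong abstract in a crowded area can lose out to a slightly weaker one in an underrepresented area.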
We have a formal waitlist for a number of reasons. Speakers often cancel (sometimes at the last minute), and we want a high-quality talk for those speaking slots. We also keep a few speaking slots open for late-breaking news; those are filled in early October. If we don't get enough good talks for those slots, the waitlisted speakers will get them.
It looks like a good set of talks this year; should be an interesting conference!