I am on the FOCS 2022 Program Committee, and I am overjoyed that the PC meeting will be virtual. Hopefully, the days are over when the program chair can brush aside all cost-benefit considerations, impose their backyard on far-away scholars who need service items, and then splash their offspring at the welcome party.
I am also overjoyed that we will be implementing double-blind reviews. This issue has been discussed and ridiculed at length. Admittedly, it makes it harder to adhere to Leonid Levin’s influential 1995 STOC Criteria. For example, if a reviewer wanted to trash a paper on the grounds that the authors are not in a position to judge their own work, now they’ll have to check online for talks or preprints to figure out who the authors are. Given the volume of reviews, it’s reasonable to expect that in some cases the reviewer won’t be able to conclusively exclude that a letter-writer is among the authors. In such a situation they can resort to writing a very long, thorough, and competent review whose most significant digit is the STOC/FOCS death sentence: weak accept.
No, I actually do have something more constructive to say about this. I was — as they say — privileged to serve on many NSF panels. As an aside, it’s interesting that there the track record of the investigators is a key factor in the decision; in fact, according to many including myself, it should carry even more weight, rather than forcing investigators to fill pages with made-up directions most of which won’t pan out. But that’s another story; what is relevant for this post is that each panel begins with a quick “de-biasing” briefing, which I actually enjoy and from which I learnt something. For example, there’s a classic experiment in which the proportion of women hired as musicians increases if auditions hide the performer behind a screen and have them walk in on carpet, so you can’t tell what shoes they are wearing. Similar experiments hide the names in applicants’ folders, and so on. What I propose is to do a similar thing when reviewing papers. That is, start with a de-biasing briefing: tell reviewers to ask themselves whether their attitude towards a paper would be different if:
- The authors of this paper/previous relevant work were ultra-big shots, or
- The authors of this paper/previous relevant work were hapless nobodies, or
- The proofs in this paper could be simplified dramatically, to the point that even I could understand them, or
- This result came with a super-complicated proof which I can’t even begin to follow.
What other questions would be good to ask?