Why don't ML conferences provide reviewer instructions?
I remember when I first received an invitation to review papers for an ML conference in late 2020. What surprised me most was not that I was being invited (even though that was a surprise, since I was just a second-year PhD student who had only just finished writing a paper myself). Instead, it was the lack of instructions on how to assess the papers: essentially just "write your reviews by date X" and "evaluate novelty, significance, soundness, etc.". In fact, in all the years since, I don't think I have ever received explicit instructions for reviewing ML conference papers.1
Now, ~4 years after receiving that initial review invitation, I have a somewhat better understanding of why there are no instructions:
- There is a huge variety of papers, and it is essentially impossible to write down a decision tree that applies to all of them (e.g., theory papers and purely experimental papers probably need to be evaluated differently).
- If there were guidelines, they would probably need to change every year as the field advances.
- There is genuine disagreement in the community about what kinds of papers should be accepted.
- There is an assumption that people invited to review "already know how to review" (e.g., I was only invited after being an author on a published paper, rather than at the start of my PhD).
However, even though I think these arguments have merit, I still believe more detailed guidelines would be helpful. The high fraction of relatively new entrants into the ML conference space means that most reviewers don't share the kind of common experience and standards that were probably present ~10 years ago. As a starting point, I will try to write up my own personal guidelines over the coming week. Stay tuned!
---

1. The only exceptions are some ML conference workshops and TMLR, which is not a conference and was created explicitly to use different acceptance criteria.