Why don't ML conferences provide reviewer instructions?

I remember when I first received an invitation to review papers for an ML conference in late 2020. What surprised me most was not that I was being invited (even though that was a surprise, since I was just a second-year PhD student who had only just finished writing a paper myself). Instead, it was the lack of instruction on how to assess the papers: essentially just "write your reviews by date X" and "evaluate novelty, significance, soundness, etc". In fact, in all the years since, I don't think I have ever received explicit instructions for reviewing ML conference papers.1

Read more…

Alpha over LLMs

On a recent podcast, Patrick McKenzie mentioned the idea of "alpha over LLMs": does a publisher produce text with any meaningful advantage over asking an LLM? I think this is an important question for anybody trying to write regularly, even if the readership is small (eg this blog). I interpret this as:

  • People should not put out content which is obviously wrong and can be corrected by an LLM (eg "I have theory X" where asking an LLM provides clear and convincing counter-arguments to X).
  • People should also not put out content which is worse than the answer one would get from asking an LLM (eg the same content but explained less clearly).

I will generally try to uphold this principle in future blog posts.

Is offline model-based optimization a realistic problem? (I'm not convinced)

This is a "quickpost": a post which I have tried to write quickly, without much editing or polishing. For more details on quickposts, see this blog post.

Offline model-based optimization (OMBO in this post) is essentially one-shot optimization using a fixed dataset. You see the data, do whatever you want with it, then propose a batch of query points, which are then evaluated. Hopefully, one (or, better, most) of the query points is optimal (or near-optimal). End of task.
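To make the setup concrete, here is a minimal sketch of the OMBO protocol in Python. All names here (`propose`, `oracle`, `naive_propose`) are hypothetical, invented for illustration rather than taken from any specific benchmark:

```python
import numpy as np

def ombo_round(dataset_x, dataset_y, propose, oracle, batch_size=128):
    """One round of offline model-based optimization: the proposer sees
    only the fixed dataset, then gets exactly one batch of evaluations.
    No further interaction with the objective is allowed."""
    # Stage 1: choose queries using the offline data alone.
    queries = propose(dataset_x, dataset_y, batch_size)  # shape (batch_size, d)
    # Stage 2: the true objective is evaluated once, on the whole batch.
    scores = oracle(queries)
    # Success is typically judged by the best (or a quantile) of the batch.
    return queries, scores.max()

def naive_propose(x, y, k):
    """A naive baseline: re-suggest the top-k points seen in the data,
    plus small perturbations (purely illustrative)."""
    top = np.argsort(y)[-k:]
    rng = np.random.default_rng(0)
    return x[top] + 0.01 * rng.standard_normal(x[top].shape)
```

The key constraint is that everything interesting happens in stage 1: there is no feedback loop, so the whole problem reduces to how well you can extrapolate from the fixed dataset.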

Read more…