*I drafted this earlier this summer but never got around to finishing and posting it. Am doing so now.*
I just finished reviewing for a conference, and looking over my reviews reminded me of a comment a senior colleague made to me at ICML this year. He has been program chair for several conferences and said he thought RL reviewers were the hardest on their own community he had ever seen. Thinking back to my own turns as senior PC or area chair at ML conferences, I realized I had felt the same way.
So, assuming you agree with the supposition, why is this the case? Looking back at years of reviewing, I think the reasons are:
1. There is a large subset of the RL community that is at best skeptical of papers that don't tackle "real applications". Certainly there is good reason to be hard on shoddy empirical work or claims. Nevertheless, perhaps this subset goes too far?
2. Even when a reviewer is willing to accept simulated domains, in the absence of widely accepted benchmarks there is no agreement on a standard suite of problems, so different reviewers set their own standards for acceptable empirical test sets. Many reviewers reject "gridworld"-type tasks, for example.
3. There is also a healthy skepticism of theoretical results in a section of the RL community. And indeed, some theory, while quite possibly true, has little significance. But again, perhaps these papers are treated too harshly?
4. Perhaps the most important reason, however, is that a significant part of RL research focuses on issues other than strict performance. Arguably, machine learning's emphasis on benchmarks and performance is misplaced for RL? That is to say, while some part of the RL community needs to focus on engineering and performance, others should be allowed and even encouraged to explore the harder-to-quantify AI issues.
Of course, it is entirely possible that we as a community are simply not producing enough good papers, and that being harsh is the right thing. I don't believe this :)
(In a later post, I will make specific suggestions for addressing the issues above.)