Reviewing is Underway!

--

Hsuan-Tien Lin, Maria Florina Balcan, Raia Hadsell and Marc’Aurelio Ranzato

NeurIPS 2020 Program Chairs

With 38% more submissions than in 2019, it has been another record-breaking year for NeurIPS, showing that AI research is thriving and our machine learning community is still expanding. There were 12,115 abstracts submitted, which led to 9,467 full submissions one week later. After 184 were withdrawn by authors or rejected for major violations such as being non-anonymous or exceeding the maximum page count, the remaining papers were assigned to Area Chairs and Senior Area Chairs. These individuals did a light read of the papers, suggested appropriate reviewers, and identified any papers that they were confident would not be accepted. These papers, about 11% of the total, have now been summarily rejected and will not be reviewed. The remaining 8,186 papers have now been assigned and should be getting the scrupulous and discerning attention that we expect from NeurIPS reviewers!
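For readers tracking the numbers, here is a quick sanity check of the submission funnel, a minimal sketch derived only from the figures quoted above (the summary-rejection count is inferred by subtraction, since we report it only as a rounded percentage):

```python
# Sanity check of the NeurIPS 2020 submission funnel described above.
abstracts = 12_115                  # abstracts submitted
full_submissions = 9_467            # full submissions one week later
withdrawn_or_violating = 184        # withdrawn or rejected for violations
under_full_review = 8_186           # papers that went on to full review

sent_to_acs = full_submissions - withdrawn_or_violating   # 9,283
summary_rejected = sent_to_acs - under_full_review        # 1,097 (inferred)
print(f"Summary rejected: {summary_rejected} "
      f"({summary_rejected / sent_to_acs:.1%} of papers sent to ACs)")
# -> Summary rejected: 1097 (11.8% of papers sent to ACs), i.e. "about 11%"
```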

Submission

The submission process to NeurIPS this year was composed of three deadlines, spaced out over two weeks: one for the abstract, one for the full manuscript and complete author list, and one for supplementary material. Happily, the peer-reviewing platform, CMT, was extremely stable during each deadline, and almost no technical difficulties were reported by submitting authors. That’s not to say that there were no hiccups, of course. The requirement for co-authors to register on CMT, the requirement to include a broader impact section, and the requirement to upload supplementary material separately all prompted a number of questions from authors. However, all in all, the submission process proceeded smoothly.

The focus of this year’s submissions was very broad. Papers in the areas of Algorithms (29% of papers under review), Deep learning (19%), and Applications (18%) comprised the majority of submissions, with Reinforcement learning and planning (9%), Theory (7%), Probabilistic methods (5%), Social aspects of machine learning (5%), Optimization (5%), Neuroscience and cognitive science (3%), and Data, challenges, implementations, and software (1%) making up the remainder. Compared to 2019, we observe a slight decrease in Deep learning and Applications (both down by 2%), while Social aspects of machine learning increased (by 3%).

Finalizing the reviewer pool and assigning papers

Assembling a sufficient number of qualified reviewers whose expertise is well matched to the submitted paper topics is one of the most important duties of the program chairs, and it becomes more daunting with each passing year. This year we issued an open invitation for reviewer self-nominations and recommendations, and we also recruited directly from authors and co-authors of the submitted papers. For both categories, we gathered enough information (publication history, reviewing experience, subject area) to be able to exclude those who we felt were not yet qualified. In the end, we put together a reviewer list of 7,800 qualified people, 2,400 of whom were from this year’s submitting authors. While this appears to give us a very large total reviewing capacity, note that many reviewers set low quotas of 2 or 3 papers, and many others have expertise in only a single subject area, such as health care or NLP, constraining the assignment process.

Assigning each paper to three reviewers and one area chair (AC) is challenging, as it requires optimizing for high-quality assignments (subject area match, publication similarity score) and reviewer happiness (bidding preferences) while not violating constraints (reviewer and AC quotas, conflicts of interest). Formulating the resulting mixed integer programming problem is straightforward, though solving it optimally is computationally expensive. And, of course, the real-world quality of the “optimal” solution depends on the accuracy of the data. Therefore, we worked hard to ensure that everyone had up-to-date accounts on the Toronto Paper Matching System (TPMS) and OpenReview, a complete account of their conflict-of-interest information, and sufficient bids on papers. As a consequence, we managed to automatically assign papers to reviewers with an average affinity score (blended from TPMS and OpenReview metrics) of 0.74, nearly 10% higher than in 2019.
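To make the assignment problem concrete, here is a minimal, hypothetical sketch of a greedy variant: it gives each paper its highest-affinity available reviewers while respecting per-reviewer quotas and conflicts of interest. This is not the actual NeurIPS matcher (which solves the mixed integer program described above), and all sizes, quotas, and scores below are illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_papers, n_reviewers = 8, 10
reviews_per_paper = 3               # each paper needs three reviewers
quota = np.full(n_reviewers, 3)     # illustrative per-reviewer quotas

# Affinity scores in [0, 1], e.g. blended TPMS/OpenReview similarities.
affinity = rng.random((n_papers, n_reviewers))
# Conflicts of interest: a conflicted (paper, reviewer) pair is forbidden.
conflict = rng.random((n_papers, n_reviewers)) < 0.1
affinity[conflict] = -np.inf        # never pick a conflicted reviewer

assignment = {}
load = np.zeros(n_reviewers, dtype=int)
for p in range(n_papers):
    # Rank reviewers by descending affinity; take the best with spare quota.
    chosen = []
    for r in np.argsort(-affinity[p]):
        if load[r] < quota[r] and np.isfinite(affinity[p, r]):
            chosen.append(int(r))
            load[r] += 1
            if len(chosen) == reviews_per_paper:
                break
    assignment[p] = chosen          # may be short if quotas run out

for p, revs in assignment.items():
    score = affinity[p, revs].mean() if revs else float("nan")
    print(f"paper {p}: reviewers {revs}, mean affinity {score:.2f}")
```

Note the weakness of the greedy approach: papers processed early can starve later papers of their best-matched reviewers. Solving the full optimization jointly, as the production matcher does, avoids this order dependence at the cost of extra computation.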

Summary Rejections

In the past three weeks, we have tasked our area chairs and senior area chairs (SACs) with the job of summary rejection: ACs were responsible for identifying papers that are likely to be rejected, and SACs cross-checked the selections. Thus, each paper that was summarily rejected was read by two expert reviewers. To mitigate bias, neither the AC nor the SAC could see the authors’ identities at this point. As we stated in the rejection emails, we wish that every paper could be fully reviewed. However, the growth of the field has made it difficult to do so, and we have chosen to explore the use of summary rejection to limit reviewer load.

Overall, the ACs and SACs did a fantastic job. They evaluated over 9,000 submissions in under three weeks, which is truly remarkable. ACs and SACs provided authors with a set of standardized reasons for rejection (e.g., lack of clarity, out of scope). In addition, more than half of the rejected submissions also received brief ad hoc feedback further explaining the reasons for rejection. Of course, given the time and energy they put into their submissions, all authors may have wished to receive ad hoc feedback, if not a full review. However, we had to strike a balance between the legitimate desire of authors to receive full feedback and our desire to scale the review process while limiting the burden on our ACs and SACs, who had to evaluate so many papers in so little time.

There are clearly pros and cons to summary rejection, and there are many things that could be improved in the future. A positive side effect of the summary rejection phase, besides reducing reviewer load, is that ACs and SACs are now more familiar with the submissions assigned to them, which will inform how they lead their subsequent discussions with reviewers. The downside is that some authors of rejected papers have been left with little constructive feedback to improve their submissions.

We will share a more in-depth analysis and make more conclusive remarks once the review period is over.

Next steps

We have started the regular review process and, looking forward, we expect to keep to schedule, notifying authors of their reviews by August 7, 2020. This phase will be followed by a discussion period among the program committee members that will lead to initial recommendations by area chairs at the end of August. In September, the process will conclude with a calibration phase: one calibration within each subject area, and another across the whole set of submissions in all areas. We expect to notify authors of the acceptance decisions by the end of September.

In our next blog post, we will announce the invited speakers for NeurIPS 2020. The invited speakers are a major component of the conference program, and we are very excited by the speakers we’ve lined up. Stay tuned…

Have a fun summer and stay safe and healthy!
