Introducing the NeurIPS 2021 Paper Checklist
Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jennifer Wortman Vaughan
NeurIPS 2021 Program Chairs
Welcome! This is the first in a series of blog posts that will take you behind the scenes to explore the organization, review process, and program for NeurIPS 2021.
As Program Chairs, we have spent the past few months immersing ourselves in conference planning. We’ve begun recruiting a program committee (please say yes if we reach out to you!), finalizing the details of the call for papers and review process (more on that in our next blog post…), planning a stellar line-up of invited speakers, and engaging with the broader NeurIPS community to understand what we can do to make this year’s conference stronger than ever. We are thrilled and honored to have this opportunity to serve our community and are excited about the year ahead.
And what an exciting time it is! Machine learning impacts nearly every aspect of our day-to-day lives, from the news we see and movies we watch to our healthcare and education, all the way to whether we are offered a job or given a loan. Submissions to NeurIPS have been growing at a rate of roughly 40% per year for the last five years, with over 9,400 full paper submissions in 2020. With this rapid growth and endless opportunity for impact, it is increasingly important that we as a field continually revisit and examine our norms, our values, and the effect that we want our research to have on the world.
In 2019, NeurIPS introduced a reproducibility program, consisting of a code submission policy, a community-wide reproducibility challenge, and the inclusion of a reproducibility checklist as part of the paper submission process. Last year, NeurIPS took another important step, introducing the inclusion of broader impact statements in submissions along with a new ethics review process. We were thrilled to see these advances made and believe they represent a huge step forward for the community.
NeurIPS has a long history of experimentation. In that tradition, we wondered whether there might be a way to build on innovations like the reproducibility checklist and broader impact statements, but expand the scope to include other facets of responsible machine learning research and increase integration with the paper-writing process. We read the author feedback from the NeurIPS 2020 survey, listened to the thoughtful perspectives presented at the NeurIPS 2020 broader impacts workshop, explored similar efforts taking place in other communities, and talked with researchers both within and outside the NeurIPS community who have thought long and hard about these issues. It became clear that authors want both more guidance around how to perform machine learning research responsibly and more flexibility in how they discuss this in their papers.
Taking this feedback into account, we landed on the idea of the NeurIPS Paper Checklist. The NeurIPS Paper Checklist is designed to encourage best practices for responsible machine learning research, taking into consideration reproducibility, transparency, research ethics, and societal impact. Our goal is to encourage authors to think about, ideally address, and at a minimum document the completeness, soundness, limitations, and potential negative societal impact of their work. We want to place minimal burden on authors, giving them flexibility in how they choose to address the items in the checklist, while providing structure and guidance to help them be attentive to knowledge gaps and surface issues that they might not have otherwise considered.
Most questions in the checklist are framed in terms of transparency. For example, “Did you describe the limitations of your work?” or “Did you include the code, data, and instructions needed to reproduce the main experimental results?” A response of “yes” is generally preferable to a response of “no,” but it’s fine to say “no” in some cases — this is expected and not grounds for rejection. Authors have the option of adding a short justification of each answer and a pointer to the relevant sections of their paper. While the questions are phrased in a binary way, there will of course be some gray areas, and we encourage authors to simply use their best judgment. Completing the checklist is required for all full paper submissions, but some questions are genuinely not applicable and can be marked “n/a” without much additional work.
In designing the checklist, one of our guiding principles was to increase integration with the paper-writing process and encourage authors to think through responsible research practices early on. Because of this, we decided to incorporate the checklist directly in the latex template included in the style files. For initial full paper submission, the questions and answers will show up in a standardized format at the end of the PDF, after the references. This will make it easier for authors to notice the checklist and prepare their answers while writing the paper, and will allow authors to link to particular sections of the paper directly in their checklist answers. It will also make it easier for reviewers to take the checklist into account. For accepted papers, authors are encouraged — though not required — to include the checklist as an appendix. The checklist itself will not count towards the page limit for either initial submissions or accepted papers.
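To give a concrete sense of how this might look in practice, here is a hypothetical sketch of a checklist entry as it could appear in a submission built from the style files. The macro names (`\answerYes`, `\answerNo`, `\answerNA`) and the exact layout are illustrative assumptions, not the definitive contents of the final template — please consult the released style files for the authoritative version:

```latex
% Illustrative sketch only; macro names and wording are assumptions,
% not the exact contents of the official NeurIPS style file.
\section*{Checklist}
\begin{enumerate}
  \item Did you describe the limitations of your work?
        \answerYes{See Section~5.}
  \item Did you include the code, data, and instructions needed to
        reproduce the main experimental results?
        \answerNo{The dataset is proprietary; we instead document its
        collection process in Appendix~A.}
  \item Did you discuss any potential negative societal impacts of
        your work?
        \answerNA{This is a theoretical result with no foreseeable
        direct societal impact.}
\end{enumerate}
```

Because the answers live in the same source file as the paper, authors can use standard `\ref`-style cross-references to point each answer at the relevant section, which is part of what makes the checklist easy for reviewers to verify.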
Building on the broader impact statements required for NeurIPS 2020, the checklist prompts authors to reflect on the potential negative societal impact of their work. Examples might include potential malicious or unintended uses like disinformation or surveillance, environmental impact from training huge models, fairness considerations, privacy considerations, and security considerations. Whereas NeurIPS previously required a stand-alone broader impacts section, this year we are letting authors decide where to most naturally place a discussion of potential negative societal impacts in their paper. And while the broader impacts section previously did not count towards the page limit, this discussion must now fit within it. However, we have extended the page limit from 8 pages to 9 pages (with an additional page allowed after acceptance) and encourage authors to prioritize using this space to address societal impacts.
Reviewers will be given clear guidance on how they should take the checklist into account. This guidance will be explicit that it’s OK to answer “no” to some questions, and that authors should be rewarded, rather than punished, for being up front about the limitations and potential negative societal impact of their work.
Separate from the checklist, ethics reviews will continue this year. During the review process, papers may be flagged for ethics review and sent to an ethics review committee for comments. These comments will be considered by the primary Reviewers and Area Chair as part of their deliberation and visible to authors, who will have an opportunity to respond. Ethics reviewers do not have the authority to reject papers, but in extreme cases papers may be rejected by the Program Chairs (that is, us) on ethical grounds. We are working with this year’s General Chair, Marc’Aurelio Ranzato, and a small committee of experts to create a set of ethics review criteria that will be made public in advance of the paper submission deadline.
The NeurIPS Paper Checklist and processes around it were developed with input from dozens of researchers in the NeurIPS community as well as experts in AI ethics and responsible machine learning. We took inspiration (and in some cases, exact wording) from the machine learning reproducibility checklist, responsible AI documentation efforts including datasheets for datasets and model cards, ACM’s guidance on reporting negative impacts, and guidelines from other conferences including the NAACL ethics review questions. We iterated extensively on the contents of the checklist and piloted both the questions and style file with community volunteers, aiming to balance simplicity with thoroughness. We are immensely grateful to everyone who provided feedback, and especially those who took the time to try out the checklist on their own research papers. Still, we acknowledge that we’re trying something new and it won’t be perfect — we hope that future Program Chairs will continue to improve and evolve the checklist in subsequent years. We are lucky to be part of a community that embraces experimentation as we believe this is the way to make progress.
Check out the NeurIPS Paper Checklist here!