Announcing the NeurIPS 2020 award recipients

--

Hsuan-Tien Lin, Maria Florina Balcan, Raia Hadsell and Marc’Aurelio Ranzato

NeurIPS 2020 Program Chairs

In this blog post, we are excited to announce the awards presented at NeurIPS 2020 and to share information about the selection process for each award.

NeurIPS 2020 Best Paper Awards

The winners of the NeurIPS 2020 Best Paper Awards are:

  • No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium by Andrea Celli (Politecnico di Milano), Alberto Marchesi (Politecnico di Milano), Gabriele Farina (Carnegie Mellon University), and Nicola Gatti (Politecnico di Milano). This paper will be presented on Tuesday, December 8th at 6:00 AM PST in the Learning Theory track.
  • Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method by Michal Derezinski (UC Berkeley), Rajiv Khanna (UC Berkeley), and Michael W. Mahoney (UC Berkeley). This paper will be presented on Wednesday, December 9th at 6:00 PM PST in the Learning Theory track.
  • Language Models are Few-Shot Learners by Tom B. Brown (OpenAI), Benjamin Mann (OpenAI), Nick Ryder (OpenAI), Melanie Subbiah (OpenAI), Jared D. Kaplan (Johns Hopkins University), Prafulla Dhariwal (OpenAI), Arvind Neelakantan (OpenAI), Pranav Shyam (OpenAI), Girish Sastry (OpenAI), Amanda Askell (OpenAI), Sandhini Agarwal (OpenAI), Ariel Herbert-Voss (OpenAI), Gretchen M. Krueger (OpenAI), Tom Henighan (OpenAI), Rewon Child (OpenAI), Aditya Ramesh (OpenAI), Daniel Ziegler (OpenAI), Jeffrey Wu (OpenAI), Clemens Winter (OpenAI), Chris Hesse (OpenAI), Mark Chen (OpenAI), Eric Sigler (OpenAI), Mateusz Litwin (OpenAI), Scott Gray (OpenAI), Benjamin Chess (OpenAI), Jack Clark (OpenAI), Christopher Berner (OpenAI), Sam McCandlish (OpenAI), Alec Radford (OpenAI), Ilya Sutskever (OpenAI), and Dario Amodei (OpenAI). This paper will be presented on Monday, December 7th at 6:00 PM PST in the Language/Audio Applications track.

Selection process. The NeurIPS 2020 best paper awards were selected by a committee that included Nicolò Cesa-Bianchi, Jennifer Dy, Surya Ganguli, Masashi Sugiyama, and Laurens van der Maaten, who shared with us the following details about the selection process.

In selecting winning papers, the committee used the following review criteria: Does the paper have the potential to endure? Does it provide new (and hopefully deep) insights? Is it creative and unexpected? Might it change the way people think in the future? Is it rigorous and elegant, without over-claiming its significance? Is it scientific and reproducible? Does it accurately describe the broader impact of the research?

To select the winners of the NeurIPS Best Paper Awards, the award committee went through a rigorous two-stage selection process:

  • In the first stage of the process, the 30 NeurIPS submissions with the highest review scores were read by two committee members. Committee members also read the corresponding paper reviews and rebuttal. Based on this investigation, the committee selected nine papers that stood out according to the reviewing criteria.
  • In the second stage of the process, all committee members read the nine papers on the shortlist and ranked them according to the review criteria. Next, the committee met virtually to discuss the highest-ranking papers and finalize the selection of award recipients.

In particular, the committee provided the following motivation for selecting three winning papers:

  • No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium. Correlated equilibria (CE) are easy to compute and can attain a social welfare that is much higher than that of the better-known Nash equilibria. In normal form games, a surprising feature of CE is that they can be found by simple and decentralized algorithms minimizing a specific notion of regret (the so-called internal regret). This paper shows the existence of such regret-minimizing algorithms that converge to CE in a much larger class of games: namely, the extensive-form (or tree-form) games. This result solves a long-standing open problem at the interface of game theory, computer science, and economics and can have substantial impact on games that involve a mediator, for example, on efficient traffic routing via navigation apps.
  • Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method. Selecting a small but representative subset of column vectors from a large matrix is a hard combinatorial problem, and a method based on cardinality-constrained determinantal point processes is known to give a practical approximate solution. This paper derives new upper and lower bounds on the approximation factor of this solution relative to the best possible low-rank approximation, bounds that can even capture the multiple-descent behavior with respect to the subset size. The paper further extends the analysis to obtain guarantees for the Nyström method. Since these approximation techniques are widely employed in machine learning, this paper is expected to have substantial impact and to give new insight into, for example, kernel methods, feature selection, and the double-descent behavior of neural networks.
  • Language Models are Few-Shot Learners. Language models form the backbone of modern techniques for solving a range of problems in natural language processing. The paper shows that when such language models are scaled up to an unprecedented number of parameters, the language model itself can be used as a few-shot learner that achieves very competitive performance on many of these problems without any additional training. This is a very surprising result that is expected to have substantial impact in the field, and that is likely to withstand the test of time. In addition to the scientific contribution of the work, the paper also presents a very extensive and thoughtful exposition of the broader impact of the work, which may serve as an example to the NeurIPS community on how to think about the real-world impact of the research performed by the community.
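The internal-regret machinery that the first paper generalizes is easiest to see in the normal-form setting. Below is a minimal, illustrative sketch of regret matching in rock-paper-scissors; for simplicity it minimizes external rather than internal regret (so time-averaged play converges to a coarse correlated equilibrium), and it is not the paper's extensive-form algorithm. All names and parameters are choices made for this example.

```python
import numpy as np

# Rock-paper-scissors payoffs for player 0 (zero-sum game): rows index
# player 0's action, columns index player 1's action (0=rock, 1=paper, 2=scissors).
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]], dtype=float)
PAYOFF1 = -PAYOFF.T  # player 1's payoffs, indexed [player 1 action, player 0 action]

def regret_matching(cum_regret):
    # Play each action with probability proportional to its positive regret;
    # play uniformly when no action has positive regret.
    pos = np.maximum(cum_regret, 0.0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(3, 1 / 3)

rng = np.random.default_rng(0)
cum_regret = [np.zeros(3), np.zeros(3)]
freq = np.zeros(3)  # empirical action frequencies of player 0

for _ in range(20000):
    p0, p1 = (regret_matching(r) for r in cum_regret)
    a = rng.choice(3, p=p0)
    b = rng.choice(3, p=p1)
    # Cumulative regret: payoff of each fixed action vs. the payoff obtained.
    cum_regret[0] += PAYOFF[:, b] - PAYOFF[a, b]
    cum_regret[1] += PAYOFF1[:, a] - PAYOFF1[b, a]
    freq[a] += 1

print(freq / freq.sum())  # time-averaged play approaches the uniform equilibrium
```

Each player only observes its own payoffs and updates locally, which is the decentralized flavor of the dynamics the paper extends to tree-form games.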
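The Nyström approximation analyzed in the second paper can be sketched in a few lines. This is a minimal illustration that samples landmark columns uniformly at random rather than with the determinantal point process sampling the paper studies; the kernel, data, and subset size are arbitrary choices for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
K = rbf_kernel(X, X)  # full 500 x 500 kernel matrix

# Nystrom approximation: choose a subset S of 50 columns, form C = K[:, S]
# and W = K[S, S], then approximate K by C W^+ C^T (W^+ is the pseudoinverse).
S = rng.choice(500, size=50, replace=False)
C = K[:, S]
W = K[np.ix_(S, S)]
K_hat = C @ np.linalg.pinv(W) @ C.T

rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
print(f"relative Frobenius error: {rel_err:.4f}")
```

The appeal is that only the 50 sampled columns of K ever need to be computed and stored, while the paper's bounds describe how good such an approximation can be as a function of the subset size.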
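The few-shot setup the third paper studies can be illustrated with its English-to-French translation example: task demonstrations are placed directly in the model's context and the model is asked to complete the final line, with no gradient updates. The snippet below only constructs the prompt; the call to the language model itself is omitted.

```python
# Few-shot "in-context learning": a task description, a handful of
# demonstrations, and an unfinished final line for the model to complete.
demonstrations = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
query = "peppermint"

prompt = "Translate English to French:\n"
prompt += "".join(f"{en} => {fr}\n" for en, fr in demonstrations)
prompt += f"{query} =>"
print(prompt)
```

The paper's central finding is that, at sufficient scale, conditioning on a prompt like this is enough for competitive performance on many tasks, without any task-specific fine-tuning.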

Test of Time Award

We also continued the tradition of selecting a paper published at NeurIPS about a decade ago that was deemed to have had a particularly significant and lasting impact on our community. We are delighted to announce that the winner of the NeurIPS 2020 Test of Time Award is HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, published at NeurIPS 2011 and authored by Feng Niu, Benjamin Recht, Christopher Re, and Stephen Wright.

This paper was the first to show how to parallelize the ubiquitous Stochastic Gradient Descent (SGD) algorithm without any locking mechanism while achieving strong performance guarantees. At the time, several researchers had proposed ways to parallelize SGD, but all of them required memory locking and synchronization across the different workers. This paper proposed a simple strategy for sparse problems, called Hogwild!: have each worker concurrently run SGD on a different subset of the data and perform fully asynchronous updates to the shared memory hosting the model parameters. Through both theory and experiments, the authors demonstrated that Hogwild! achieves a near-linear speedup in the number of processors on data satisfying appropriate sparsity conditions.
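As a rough illustration of the update pattern (not the paper's implementation), the sketch below runs several threads that update a shared parameter vector with no synchronization on a toy sparse least-squares problem. Note that CPython's global interpreter lock prevents a genuine parallel speedup here, so this only demonstrates that unsynchronized sparse updates still converge; the problem setup and all names are invented for the example.

```python
import threading
import numpy as np

# Toy sparse least-squares problem: each example touches only 3 of 100
# coordinates, mirroring the sparsity condition in the Hogwild! analysis.
rng = np.random.default_rng(0)
d, n = 100, 2000
w_true = rng.standard_normal(d)
supports = [rng.choice(d, size=3, replace=False) for _ in range(n)]
feats = [rng.standard_normal(3) for _ in range(n)]
targets = [x @ w_true[s] for s, x in zip(supports, feats)]

w = np.zeros(d)  # shared parameter vector, updated without any locks

def worker(examples, lr=0.05, epochs=30):
    for _ in range(epochs):
        for s, x, y in examples:
            grad = (x @ w[s] - y) * x  # gradient touches only coordinates in s
            w[s] -= lr * grad          # asynchronous, lock-free in-place update

# Four workers, each running SGD on its own slice of the data.
data = list(zip(supports, feats, targets))
threads = [threading.Thread(target=worker, args=(data[k::4],)) for k in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

rel_err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
print(f"relative error after lock-free SGD: {rel_err:.4f}")
```

Because most pairs of updates touch disjoint coordinates, overwrites are rare, which is the intuition behind the paper's guarantee that the unsynchronized scheme matches serial SGD up to a small penalty.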

You can find more about the paper and its impact by attending the Test of Time talk on Wednesday December 9th at 6:00 AM PST in the Optimization track.

Selection process. We identified a list of 12 papers published at NeurIPS about a decade ago (NeurIPS 2009, NeurIPS 2010, and NeurIPS 2011). These were the papers from those editions with the highest numbers of citations since their publication. We also collected data about the recent citation counts for each of these papers by aggregating the citations they received in the past two years from papers at NeurIPS, ICML, and ICLR. We then asked the whole senior program committee (64 SACs) to vote for up to three of these papers to help us pick an impactful paper about which the whole senior program committee was enthusiastic.

Reviewer Awards

Finally, but equally importantly, we again selected reviewer award winners. We selected the top 10% of reviewers, that is, 730 reviewers, to receive this award. The selection was based on the average rating of the reviews they entered in the system, with ratings provided by the area chairs. We thank all of these reviewers for their outstanding work; as a small token of appreciation, they received free conference registration.

Congratulations to all awardees for their great research or service contribution to our thriving community!

--

Written by Neural Information Processing Systems Conference