Announcing the NeurIPS 2020 award recipients

NeurIPS 2020 Best Paper Awards

  • No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium by Andrea Celli (Politecnico di Milano), Alberto Marchesi (Politecnico di Milano), Gabriele Farina (Carnegie Mellon University), and Nicola Gatti (Politecnico di Milano). This paper will be presented on Tuesday, December 8th at 6:00 AM PST in the Learning Theory track.
  • Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method by Michał Dereziński (UC Berkeley), Rajiv Khanna (UC Berkeley), and Michael W. Mahoney (UC Berkeley). This paper will be presented on Wednesday, December 9th at 6:00 PM PST in the Learning Theory track.
  • Language Models are Few-Shot Learners by Tom B. Brown (OpenAI), Benjamin Mann (OpenAI), Nick Ryder (OpenAI), Melanie Subbiah (OpenAI), Jared D. Kaplan (Johns Hopkins University), Prafulla Dhariwal (OpenAI), Arvind Neelakantan (OpenAI), Pranav Shyam (OpenAI), Girish Sastry (OpenAI), Amanda Askell (OpenAI), Sandhini Agarwal (OpenAI), Ariel Herbert-Voss (OpenAI), Gretchen M. Krueger (OpenAI), Tom Henighan (OpenAI), Rewon Child (OpenAI), Aditya Ramesh (OpenAI), Daniel Ziegler (OpenAI), Jeffrey Wu (OpenAI), Clemens Winter (OpenAI), Chris Hesse (OpenAI), Mark Chen (OpenAI), Eric Sigler (OpenAI), Mateusz Litwin (OpenAI), Scott Gray (OpenAI), Benjamin Chess (OpenAI), Jack Clark (OpenAI), Christopher Berner (OpenAI), Sam McCandlish (OpenAI), Alec Radford (OpenAI), Ilya Sutskever (OpenAI), and Dario Amodei (OpenAI). This paper will be presented on Monday, December 7th at 6:00 PM PST in the Language/Audio Applications track.
To select the recipients, the awards committee used a two-stage process:

  • In the first stage, the 30 NeurIPS submissions with the highest review scores were each read by two committee members, who also read the corresponding reviews and rebuttals. Based on this reading, the committee selected a shortlist of nine papers that stood out according to the reviewing criteria.
  • In the second stage, all committee members read the nine shortlisted papers and ranked them according to the review criteria. The committee then met virtually to discuss the highest-ranking papers and finalize the selection of award recipients.
  • No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium. Correlated equilibria (CE) are easy to compute and can attain a social welfare that is much higher than that of the better-known Nash equilibria. In normal-form games, a surprising feature of CE is that they can be found by simple, decentralized algorithms that minimize a specific notion of regret (the so-called internal regret); a toy illustration of such dynamics appears after this list. This paper shows that regret-minimizing algorithms of this kind also exist, and converge to CE, in a much larger class of games: namely, extensive-form (or tree-form) games. This result solves a long-standing open problem at the interface of game theory, computer science, and economics, and it can have substantial impact on games that involve a mediator, for example efficient traffic routing via navigation apps.
  • Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method. Selecting a small but representative subset of column vectors from a large matrix is a hard combinatorial problem, and a method based on cardinality-constrained determinantal point processes is known to give a practical approximate solution. This paper derives new upper and lower bounds on the approximation factor of this solution relative to the best possible low-rank approximation, bounds sharp enough to capture the multiple-descent behavior of the error as a function of the subset size. The paper further extends the analysis to obtain guarantees for the Nyström method, sketched in code after this list. Since these approximation techniques are widely employed in machine learning, the paper is expected to have substantial impact and to give new insight into, for example, kernel methods, feature selection, and the double-descent behavior of neural networks.
  • Language Models are Few-Shot Learners. Language models form the backbone of modern techniques for solving a range of problems in natural language processing. The paper shows that when such a language model is scaled up to an unprecedented number of parameters, it can itself be used as a few-shot learner: with no additional training, it achieves very competitive performance on many of these problems simply by conditioning on a handful of examples in its prompt (a sketch of this prompt format appears after this list). This is a very surprising result that is expected to have substantial impact in the field and is likely to withstand the test of time. Beyond its scientific contribution, the paper also presents a very extensive and thoughtful exposition of its broader impact, which may serve as an example to the NeurIPS community of how to think about the real-world impact of the community's research.
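
To make the first result concrete, here is a minimal toy sketch of the kind of regret-based dynamics the paper generalizes. It implements Hart and Mas-Colell's internal-regret matching in a 2x2 normal-form game (Chicken), not the extensive-form algorithm from the awarded paper; the payoff matrices and the constant mu are illustrative choices. The empirical distribution of joint play approaches the set of correlated equilibria.

```python
import numpy as np

rng = np.random.default_rng(0)

# Payoffs for the 2x2 game of Chicken, indexed by (row action, column action);
# action 0 = swerve, action 1 = dare. Illustrative numbers, not from the paper.
U = [np.array([[6.0, 2.0], [7.0, 0.0]]),   # row player's payoffs
     np.array([[6.0, 7.0], [2.0, 0.0]])]   # column player's payoffs

n, T, mu = 2, 50_000, 20.0                   # mu: inertia constant (illustrative)
Rsum = [np.zeros((n, n)) for _ in range(2)]  # cumulative internal regrets R[a, b]
last = [0, 0]                                # each player's previous action
counts = np.zeros((n, n))                    # empirical joint-play counts

for t in range(1, T + 1):
    acts = []
    for p in range(2):
        a = last[p]
        # Switch from a to b with probability proportional to the positive
        # average regret for "b instead of a"; otherwise repeat a.
        switch = np.maximum(Rsum[p][a], 0.0) / (mu * t)
        switch[a] = 0.0
        probs = switch.copy()
        probs[a] = 1.0 - switch.sum()
        acts.append(int(rng.choice(n, p=probs)))
    i, j = acts
    counts[i, j] += 1.0
    # Update internal regrets: how much better each alternative b would have
    # done than the action actually played, holding the opponent fixed.
    for b in range(n):
        Rsum[0][i, b] += U[0][b, j] - U[0][i, j]
        Rsum[1][j, b] += U[1][i, b] - U[1][i, j]
    last = acts

# The normalized play counts approximate a correlated equilibrium of the game.
print(counts / T)
```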
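
For the second paper, here is a minimal sketch of the Nyström method itself: a positive semidefinite kernel matrix K is approximated from a subset S of its columns as K ≈ C pinv(W) Cᵀ, where C = K[:, S] and W = K[S, S]. One assumption to flag: the sketch samples columns uniformly at random for simplicity, whereas the paper analyzes determinantal point process sampling and bounds its approximation factor against the best possible low-rank approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a Gaussian kernel matrix (bandwidth fixed at 1 for simplicity).
X = rng.normal(size=(500, 5))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Nystrom approximation from k columns. NOTE: uniform sampling is an
# illustrative simplification; the paper analyzes determinantal point
# process (DPP) sampling of the column subset.
k = 50
S = rng.choice(K.shape[0], size=k, replace=False)
C = K[:, S]                           # the selected columns
W = K[np.ix_(S, S)]                   # the k x k intersection block
K_hat = C @ np.linalg.pinv(W) @ C.T   # rank-<=k approximation of K

err = np.linalg.norm(K - K_hat, "fro") / np.linalg.norm(K, "fro")
print(f"relative Frobenius error with {k} of {K.shape[0]} columns: {err:.4f}")
```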
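
Finally, a sketch of the few-shot, in-context format studied in the third paper: the "training" examples live entirely in the prompt, and the model is asked to complete the final query with no gradient updates. The generate function below is a hypothetical placeholder for a large language model's text-completion interface, not a real API; the demonstrations are the English-to-French examples from the paper.

```python
# Hypothetical stand-in for a large language model's completion interface.
def generate(prompt: str) -> str:
    raise NotImplementedError("replace with a call to an actual language model")

# K demonstrations of the task (here K = 3), taken from the paper's
# English-to-French translation example.
few_shot_examples = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("plush giraffe", "girafe peluche"),
]
query = "cheese"

# Few-shot prompt: a task description, K demonstrations, then the query
# left incomplete for the model to finish.
prompt = "Translate English to French:\n"
prompt += "".join(f"{en} => {fr}\n" for en, fr in few_shot_examples)
prompt += f"{query} =>"

completion = generate(prompt)  # a capable model is expected to answer "fromage"
```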

Test of Time Award

Reviewer Awards
