Tutorial Selections for NeurIPS 2020!
Danielle Belgrave, Microsoft Research Cambridge, UK
Tutorial Chairs, Organizing Committee NeurIPS 2020
Tutorials have long been an important part of the Neural Information Processing Systems (NeurIPS) conference. They give us a way of highlighting important, high-impact developing areas and bringing the community up to speed on what is happening in them. We are pleased to announce our selection of tutorials to be presented this year. Like the rest of the conference, they will all be presented virtually.
Earlier this year, we sent out a call for proposals and also wrote a blog post highlighting some points we proposed to use for evaluating tutorial proposals. We received a record total of 50 submissions. We thank the authors of these submissions for their effort and enthusiasm to contribute to the NeurIPS Tutorials Program.
Our assessment task was challenging because of the large number of high-quality, well-conceived submissions. Each proposal was independently assessed by three reviewers, including the two chairs. In addition to the intrinsic merit of each proposal, we took account of the balance of topics (there were several submissions on similar topics). We are grateful to Tristan Naumann, who acted as a third reviewer for the tutorials.
We selected a total of 9 tutorials from those submitted and invited 8 additional tutorials, giving us a total of 17 tutorials in the 2020 program.
This year’s tutorials are (in alphabetical order by title):
- Abstraction & Reasoning in AI systems: Modern Perspectives
(Francois Chollet (Google), Melanie Mitchell (Santa Fe Institute), Christian Szegedy (Google))
- Advances in Approximate Inference
(Yingzhen Li, Cheng Zhang (Microsoft Research))
- Beyond Accuracy: Grounding Evaluation Metrics for Human-Machine Learning Systems
(Praveen Chandar (Spotify), Fernando Diaz (Microsoft Research), Brian St. Thomas (Spotify))
- Deep Conversational AI
(Pascale Fung, Zhaojiang Lin and Andrea Madotto (Hong Kong University of Science and Technology))
- Deep Implicit Layers: Neural ODEs, Equilibrium Models, and Differentiable Optimization
(David Duvenaud (University of Toronto), Matt Johnson (Google), Zico Kolter (Carnegie Mellon University))
- Designing Learning Dynamics
(Wojtek Czarnecki, Marta Garnelo and David Balduzzi (DeepMind))
- Equivariance and Covariance in Deep Neural Networks
(Taco Cohen (University of Amsterdam) and Risi Kondor (University of Chicago))
- Explaining Machine Learning Predictions: State-of-the-art, Challenges, and Opportunities
(Hima Lakkaraju (Harvard), Julius Adebayo (MIT), Sameer Singh (UC Irvine))
- Federated Learning and Analytics: Industry Meets Academia
(Peter Kairouz (Google), Brendan McMahan (Google), and Virginia Smith (CMU))
- Machine Learning for Astrophysics and Astrophysics Problems for Machine Learning
(David W Hogg, Kate Storey-Fisher (New York University))
- Offline Reinforcement Learning: From Algorithm Design to Practical Applications
(Sergey Levine and Aviral Kumar (UC Berkeley))
- Practical Uncertainty Estimation and Out-of-Distribution Robustness in Deep Learning
(Dustin Tran, Jasper Snoek, Balaji Lakshminarayanan (Google Brain))
- RL and Optimization
(Sham Kakade (University of Washington), Martha White (University of Alberta), Nicolas Le Roux (Google))
- Sketching and Streaming Algorithms
(Jelani Nelson (University of California, Berkeley))
- The Beautiful Intertwining of Causal Inference, Experimental Design and Reinforcement Learning
(Susan Murphy (Harvard University))
- There and Back Again: A Tale of Slopes and Expectations
(Marc Deisenroth (University College London), Cheng Soon Ong (Data61, Australian National University))
- Where Neuroscience meets AI (And What’s in Store for the Future)
(Jane Wang, Adam Marblestone, Kevin Miller (DeepMind))
We are looking forward to seeing all of you in December!
Thanks again to all those who went to considerable effort in preparing proposals for the program. Your commitment to the NeurIPS community is appreciated and admired.
Danielle and Bob