NeurIPS 2016

conferences
neurips
machine learning
ai
Author

Ed Henry

Published

December 12, 2016

Quick Summary

This year’s NIPS conference had record attendance, with over 6000 people! That is nearly a twofold increase over last year’s conference in Montreal, which I also attended. Hats off to the organizers and all of the staff for doubling the size of a conference and still keeping the attendance experience relatively smooth.

Speaking with other attendees, though, I sensed a general interest in restructuring the conference somewhat. There was even mention of breaking the deep learning portion off into its own conference entirely. That might be a bit extreme, but there was, without a doubt, a healthy number of sessions and talks based on deep learning.

All said and done, though, it was an awesome experience that has left me charged to keep learning and trying the amazing ideas that were presented and exchanged throughout the conference. With that, I thought I’d post a list of all of the sessions I attended and try to provide a quick summary of the intuitions I built from the presentations. Keep in mind that these notes are made both from memory and from the scribbles in my notebook.

Come to think of it, there is a second thing I would have preferred, if it were possible. Sessions tended to run 20-30 minutes, and that never seemed to be enough time for presenters to cover both their problems and the progress they had made. Many of the problems being framed are incredibly technical and can require half to three quarters of the allotted presentation time just to set up. I don’t pretend to have a solution to this problem; I’m just interested in providing feedback should anyone stumble on it.

One thing I wish I could figure out how to do, in a meaningful way, is contribute to the greater research “good” of the Open Science ideology that the machine learning community follows, without being at a large(r) ML shop that can “afford” to pay someone to be a half-researcher, and without being in academia directly. There is the new effort AI-ON.org, so maybe that’s the answer?

Sessions Attended (and slides if I could find them)

Variational Inference

Variational Inference: Foundations and Modern Methods

Nuts and Bolts of Building Applications using Deep Learning

Generative Adversarial Networks

Predictive Learning

Value Iteration Networks (award talk)

Intelligent Biosphere

InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

Value Iteration Networks

Synthesis of MCMC and Belief Propagation

Using Fast Weights to Attend to the Recent Past

Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences

Machine Learning and Likelihood-Free Inference in Particle Physics

This talk was really, really cool, but the time constraints didn’t allow Kyle to get into what I was most interested in: the embeddings work that had been done. I’m very interested in creating embeddings of tokens according to their co-occurrence distribution(s).
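To make that idea concrete, here is a minimal sketch of one classic way to do it (not from Kyle’s talk — the toy corpus, window size, and SVD rank are all my own assumptions): count token co-occurrences within a symmetric context window, then take a truncated SVD of the count matrix so each token gets a dense embedding reflecting its co-occurrence distribution.

```python
# Sketch: token embeddings from a co-occurrence matrix via truncated SVD.
from collections import Counter
import numpy as np

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

vocab = sorted({tok for sent in corpus for tok in sent})
idx = {tok: i for i, tok in enumerate(vocab)}

# Count co-occurrences within a window of +/- 2 tokens.
window = 2
counts = Counter()
for sent in corpus:
    for i, tok in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[(idx[tok], idx[sent[j]])] += 1

C = np.zeros((len(vocab), len(vocab)))
for (i, j), n in counts.items():
    C[i, j] = n

# Truncated SVD: keep the top-k singular directions as dense embeddings.
k = 2
U, S, _ = np.linalg.svd(C)
embeddings = U[:, :k] * S[:k]

print({tok: embeddings[idx[tok]].round(2).tolist() for tok in vocab})
```

In practice you would reweight the counts (e.g. with PMI) before factorizing, but the shape of the idea is the same: tokens with similar co-occurrence distributions end up with nearby embeddings.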

Deep Learning without Poor Local Minima

Learning to Poke by Poking: Experiential Learning of Intuitive Physics

Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Showing versus doing: Teaching by demonstration

Relevant sparse codes with variational information bottleneck

Symposia

Recurrent Neural Networks and Other Machines that Learn Algorithms

Deep Learning Symposium

Looking at this list after recompiling it from my notes, it’s both exhausting and intimidating to think about rehashing the workshops on top of everything else listed here. So I will keep this post to just the tutorials and oral talks for the time being, and write a separate post covering the workshop material.