ICLR, the International Conference on Learning Representations, was held May 6–9, 2019, in New Orleans.
Relive the conference through some of the top tweets (#ICLR2019).
Cynthia Dwork @ #ICLR2019: unfair/biased algorithms are just one component of an unfair world. It is not only about the algorithm, but also about the way it is deployed and the data it is fed with.
— José Oramas M. (@jaom7) May 6, 2019
Can't believe @iclr2019 has already come to an end! What an amazing event and it was great connecting with so many researchers! My Favorite Talk award goes to @emilyshuckburgh and her phenomenal keynote on AI for Planetary Good! pic.twitter.com/5UvueOWl4r
— Ashley Pilipiszyn (@apilipis) May 9, 2019
@pyoudeyer talking about autonomous learning! #ICLR2019 #keynote pic.twitter.com/kSzyGjK6ae
— Cătălina Cangea (@catalinacangea) May 8, 2019
How do people learn so much from so little? Noah Goodman talks about concept learning at #ICLR2019. pic.twitter.com/UPudgjbPyV
— Numa (@NumaDhamani) May 9, 2019
Fascinating talk by Mirella Lapata on learning language interfaces with neural models at #ICLR2019 - a great overview of interesting tricks that can be extended to many NLP applications! pic.twitter.com/mulAhtkeO6
— Akshay Budhkar (@BudhkarAkshay) May 9, 2019
Important talk on Adversarial ML by @goodfellow_ian #iclr2019 pic.twitter.com/JOfxgA8eC2
— Luca Rigazio (@gigazio) May 7, 2019
For today's issue of The Algorithm, I summarized Léon Bottou's talk yesterday at #ICLR2019. To a packed room he laid out a framework for how we might use deep learning to understand causation, not just correlation. Feedback welcome on my interpretation! https://t.co/X3OCeNPEyW
— Karen Hao (@_KarenHao) May 7, 2019
Organize, insist on having a voice, and build alternative ways (to ensure the ML technologies are not misused). Great thought-provoking talk by Zeynep Tufekci #iclr2019 pic.twitter.com/f9Nxk4hJoB
— Alice Oh (@aliceoh) May 8, 2019
Congratulations to the two ICLR 2019 Best Paper winners!
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (arXiv)
Jonathan Frankle · Michael Carbin
Abstract - Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
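The pruning-and-rewinding loop the abstract alludes to is easy to picture in code. Below is a minimal NumPy sketch of iterative magnitude pruning with rewinding to the original initialization; `train_fn`, `prune_frac`, and `rounds` are illustrative stand-ins, not the authors' released code.

```python
import numpy as np

def find_winning_ticket(init_weights, train_fn, prune_frac=0.2, rounds=5):
    """Sketch of iterative magnitude pruning with rewinding.
    `train_fn(weights)` stands in for a full training run that returns
    trained weights of the same shapes; it is assumed, not defined here."""
    mask = {name: np.ones_like(w) for name, w in init_weights.items()}
    for _ in range(rounds):
        # Train the masked subnetwork, starting from the ORIGINAL init.
        trained = train_fn({n: w * mask[n] for n, w in init_weights.items()})
        # In each layer, prune the prune_frac smallest-magnitude weights
        # among those still alive, and carry the mask to the next round.
        for name, w in trained.items():
            alive = np.abs(w)[mask[name] == 1]
            threshold = np.quantile(alive, prune_frac)
            mask[name] = np.where(np.abs(w) < threshold, 0.0, mask[name])
    # The "winning ticket": the original initialization under the final mask.
    return {n: w * mask[n] for n, w in init_weights.items()}, mask
```

The key move is rewinding the surviving weights to their original initial values rather than re-randomizing them; in the paper's experiments, that is what makes the sparse subnetwork train successfully.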
Best Paper Award 1: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle · Michael Carbin pic.twitter.com/SWVZuekobQ
— ICLR 2019 (@iclr2019) May 6, 2019
Summary in MIT Tech Review.
This week I dived into the fascinating best paper winner from #ICLR2019. It found that within every neural network exists a much tinier one that can be trained to reach the same performance. In other words, we’ve been wasting processing power all along! https://t.co/X6tyEu2seN
— Karen Hao (@_KarenHao) May 11, 2019
Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks (arXiv)
Yikang Shen · Shawn Tan · Alessandro Sordoni · Aaron Courville
Abstract - Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such an inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
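The ordering comes from a "cumax" activation (a cumulative sum over a softmax), which yields monotone master gates. Here is a minimal NumPy sketch of that mechanism, assuming the standard LSTM forget/input gates and candidate cell are computed elsewhere; all names are illustrative.

```python
import numpy as np

def cumax(x):
    """Cumulative softmax: a monotone gate rising from ~0 to ~1
    across the neuron ordering."""
    e = np.exp(x - np.max(x))
    return np.cumsum(e / e.sum())

def ordered_cell_update(fm_logits, im_logits, f, i, c_prev, c_hat):
    """Master-gate cell update in the spirit of ON-LSTM. f, i, c_hat
    are the usual LSTM forget/input gates and candidate cell; all
    arguments are 1-D vectors of equal length."""
    f_master = cumax(fm_logits)        # ~0 for low neurons, ~1 for high
    i_master = 1.0 - cumax(im_logits)  # ~1 for low neurons, ~0 for high
    omega = f_master * i_master        # overlap where both gates act
    f_hat = f * omega + (f_master - omega)
    i_hat = i * omega + (i_master - omega)
    return f_hat * c_prev + i_hat * c_hat
```

Because the master forget gate increases monotonically across the ordering, high-ranked neurons are erased less often and end up tracking longer-lived, higher-level constituents, which is exactly the hierarchical bias the abstract describes.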
ICLR Best Paper 2: Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Yikang Shen · Shawn Tan · Alessandro Sordoni · Aaron Courville pic.twitter.com/BiX4rWc1Ol
— ICLR 2019 (@iclr2019) May 9, 2019
And a summary tweet from Microsoft with an accessible blog post.
We're excited to announce that Yikang Shen, Shawn Tan, Alessandro Sordoni @murefil and Aaron Courville received the Best Paper Award at @ICLR2019. Discover their work on Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks: https://t.co/pOWER2s4TY #ICLR2019 pic.twitter.com/5KijcIQSDZ
— Microsoft Research (@MSFTResearch) May 6, 2019
Fun end to the day at the #ICLR2019 Latinx and @black_in_ai joint workshop. I met the organisers of the #IndabaXBurundi @buildwithcycy @nijfranck and also @onucharlesc and so many others. pic.twitter.com/mY1vzsUaDR
— Shakir Mohamed (@shakir_za) May 7, 2019
Parisa Kordjamshidi, our first invited speaker, talking about doing machine learning research at @Tulane and life in New Orleans! #wiml dinner #iclr2019 pic.twitter.com/Lz46DkLKMm
— WiML (@WiMLworkshop) May 7, 2019
38 presentations can be watched here.
You can livestream our @iclr2019 workshop on deep generative models for structured data using this link https://t.co/OHxPveJYpG it starts at 3:15 (in 45 minutes)!
— Adji Bousso Dieng (@adjiboussodieng) May 6, 2019
As well as debates.
Leslie Kaelbling leads the discussion with Doina Precup, Jeff Clune, Josh Tenenbaum, and Suchi Saria. (Go to https://t.co/bVmRKNciGC, use code #iclrdebate to submit/upvote a question.)
— ICLR 2019 (@iclr2019) May 6, 2019
And here are a couple of researchers putting their slides online.
Here's the slides for my "Reproducibility in Machine Learning" talk at #ICLR2019 https://t.co/OGsuro33Qq
— Joel Grus ♥️ (@joelgrus) May 6, 2019
Deep Generative Models for Graphs: Methods & Applications.
Slides from my talk at #ICLR2019 workshop on Representation Learning on Graphs and Manifolds. https://t.co/FKmjFowshH pic.twitter.com/QgjmwOJ8ms
— Jure Leskovec (@jure) May 6, 2019
Top trends I saw at #ICLR2019 include the rise of unsupervised representation learning, RNN losing its luster, GANs still dominating, RL moving towards meta-learning, the return of old school ideas. Great conference for not only ideas but also motivation https://t.co/gmqbnfXgly
— Chip Huyen (@chipro) May 13, 2019
My first ICLR was a blast! Notes for @iclr2019 available here: https://t.co/mXT2FYm59Z #ICLR2019 pic.twitter.com/BH0W80w8B4
— David Abel (@dabelcs) May 10, 2019
Actionable suggestions by @OpenAI’s @jackclarkSF at the #AIforSocialGood workshop at #ICLR2019 pic.twitter.com/QC83AoEKXB
— Michela Paganini (@WonderMicky) May 6, 2019
🧙‍♂️🧙‍♀️ pic.twitter.com/AmyNx4jeAJ
— ICLR 2019 (@iclr2019) May 9, 2019
Looks like a good PR move for their paper on Wizard of Wikipedia.
I believe it's a funny reference to their ICLR paper (https://t.co/ULqGz1FwrI) pic.twitter.com/0qbfyGmNRP
— Emanuele Ballarin (@emaballarin) May 10, 2019
Yoshua Bengio's closing remarks with the announcement that ICLR 2020 will be held in Addis Ababa, Ethiopia, bringing the international AI community to Africa. What a great forward-looking, inclusive choice. Awesome! #iclr2019 pic.twitter.com/TnyUQLNbSY
— Luca Rigazio (@gigazio) May 9, 2019