A guide to online scientific conferences — creating a virtual world
It was April of 2020 and in light of COVID-19 we were faced with a difficult decision. Do we go full steam ahead organizing our large international conference (ALife 2020) that we had been planning for the last two years in beautiful Montréal, Québec? Do we move the conference online? Do we cancel it altogether? Decisions like these were being made all over the world as event organizers weighed the costs and benefits of these three options.
Career advisor systems are essentially recommender systems in the space of job searching and career advice. They recommend possible career paths to candidates and possible candidates to employers for a job opening. In this post we will outline the required capabilities of such systems and highlight the challenges that need to be overcome in order to construct a working system. Questions like “What is a career advisor system?”, “What is it capable of doing?”, and “Why do we need them?” are answered in this article. We also discuss our recent work (presented at AAAI-IAAI), which describes how we propose to solve this problem.
In the last decade, one of the biggest drivers of success in machine learning has arguably been the combination of high-capacity models, such as neural networks, with large datasets, such as ImageNet. While deep neural networks have been applied successfully in reinforcement learning (RL) domains such as robotics, poker, board games, and team-based video games, a significant barrier to getting these methods working on real-world problems is the difficulty of large-scale online data collection.
Reinforcement learning (RL) is often touted as a promising approach for costly and risk-sensitive applications, yet practicing and learning in those domains directly is expensive. It costs time (e.g., OpenAI’s Dota 2 project used 10,000 years of experience), it costs money (e.g., “inexpensive” robotic arms used in research typically cost 10,000 to 30,000 dollars), and it can even be dangerous to humans. How can an intelligent agent learn to solve tasks in environments in which it cannot practice?
The third and final ICML 2020 invited talk covered the topic of quantum machine learning (QML) and was given by Iordanis Kerenidis. He took us on a tour of the quantum world, detailing the tools needed for quantum machine learning, some of the first applications, and challenges faced by the field.
The success of deep learning over the last decade, particularly in computer vision, has depended greatly on large training data sets. Even though progress in this area has boosted performance on many tasks such as object detection, recognition, and segmentation, the main bottleneck for future improvement is the need for more labeled data. Self-supervised learning is among the best alternatives for learning useful representations from unlabeled data. In this article, we will briefly review the self-supervised learning methods in the literature and discuss the findings of a recent self-supervised learning paper from ICLR 2020.
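To make the idea of self-supervision concrete, here is a minimal sketch of one classic pretext task, rotation prediction: labels are generated for free by rotating unlabeled images, and a model trained to predict the rotation must learn useful visual features. This is an illustrative example only, not the specific method from the ICLR 2020 paper discussed here; the function name and setup are our own.

```python
import numpy as np

def make_rotation_pretext(images, rng=None):
    """Turn unlabeled images (N, H, W) into a labeled classification task.

    Each image is rotated by a random multiple of 90 degrees; the
    rotation index (0-3) becomes the 'free' supervisory label.
    """
    rng = rng or np.random.default_rng(0)
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

# Usage: any pile of unlabeled images yields (inputs, labels) pairs
# that a network can be trained on with an ordinary classification loss.
unlabeled = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(float)
x, y = make_rotation_pretext(unlabeled)
```

After pretraining on such a pretext task, the network's learned representations can be fine-tuned on the small labeled set of the actual downstream task.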
Imagine we want to train a self-driving car in New York so that we can take it all the way to Seattle without tediously driving it for over 48 hours. We hope our car can handle all kinds of environments on the trip and get us safely to our destination. We know that road conditions and views can be very different. It is intuitive to simply collect road data from this trip, let the car learn from every possible condition, and hope it becomes the perfect self-driving car for our New York to Seattle trip. It needs to understand the traffic and skyscrapers of big cities like New York and Chicago, the more unpredictable weather of Seattle, the mountains and forests of Montana, and all kinds of country views, farmlands, animals, and more. However, how much data is enough? How many cities should we collect data from? How many weather conditions should we consider? We never know, and these questions never stop.
The second invited talk at ICML 2020 was given by Brenna Argall. Her presentation covered the use of machine learning within the domain of assistive machines for rehabilitation. She described the efforts of her lab towards customising assistive autonomous machines so that users can decide the level of control they keep, and how much autonomy they hand over to the machine.
There were three invited talks at this year’s virtual ICML. The first was given by Lester Mackey, and he highlighted some of his efforts to do some good with machine learning. During the talk he also outlined several ways in which social good efforts can be organised, and described numerous social good problems that would benefit from the community’s attention.