
articles

by   -   August 7, 2020

The third and final ICML2020 invited talk covered the topic of quantum machine learning (QML) and was given by Iordanis Kerenidis. He took us on a tour of the quantum world, detailing the tools needed for quantum machine learning, some of the first applications, and challenges faced by the field.

by   -   August 5, 2020

The success of deep learning over the last decade, particularly in computer vision, has depended greatly on large training data sets. Even though progress in this area has boosted performance on many tasks such as object detection, recognition, and segmentation, the main bottleneck to further improvement is the need for more labeled data. Self-supervised learning is among the most promising alternatives for learning useful representations from unlabeled data. In this article, we briefly review self-supervised learning methods in the literature and discuss the findings of a recent self-supervised learning paper from ICLR 2020 [14].
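To make the idea concrete, here is a minimal sketch of one classic self-supervised pretext task, rotation prediction. This is an illustrative example of the general approach, not the method from the ICLR 2020 paper; the function name and the toy "images" are our own. Each image is rotated by 0, 90, 180, or 270 degrees, and the rotation index serves as a free label, so a network can be trained to predict it without any human annotation:

```python
import numpy as np

def make_rotation_pretext(images):
    """Build a rotation-prediction pretext dataset.

    Each input image is rotated by 0, 90, 180 and 270 degrees;
    the rotation index is used as the label, so no human
    annotation is required.
    """
    rotated, labels = [], []
    for img in images:
        for k in range(4):  # k quarter-turns counter-clockwise
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# Tiny demo: two 4x4 "images" yield eight labeled training examples.
images = [np.arange(16).reshape(4, 4), np.ones((4, 4))]
X, y = make_rotation_pretext(images)
```

A network trained on (X, y) must learn features that capture object orientation, and those features often transfer to downstream tasks such as detection or segmentation.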

by   -   August 4, 2020
Figure 1: Domain boundaries are rarely clear. Therefore, it is hard to set up definite domain descriptions for all possible domains.

By Zhongqi Miao and Ziwei Liu

The World is Continuously Varying

Imagine we want to train a self-driving car in New York so that we can take it all the way to Seattle without tediously driving it ourselves for over 48 hours. We hope our car can handle all kinds of environments on the trip and deliver us safely to our destination. We know that road conditions and views can vary enormously along the way. It is intuitive to simply collect road data from this trip, let the car learn from every possible condition, and hope it becomes the perfect self-driving car for our New York to Seattle trip. It needs to understand the traffic and skyscrapers of big cities like New York and Chicago, the more unpredictable weather of Seattle, the mountains and forests of Montana, and all kinds of countryside views, farmlands, animals, and so on. However, how much data is enough? How many cities should we collect data from? How many weather conditions should we consider? We never know, and the questions never end.

by   -   August 3, 2020


What’s hot on arXiv? Here are the most tweeted papers that were uploaded onto arXiv during July 2020.

Results are powered by Arxiv Sanity Preserver.

by   -   July 31, 2020

The second invited talk at ICML2020 was given by Brenna Argall. Her presentation covered the use of machine learning within the domain of assistive machines for rehabilitation. She described the efforts of her lab towards customising assistive autonomous machines so that users can decide the level of control they keep, and how much autonomy they hand over to the machine.

by   -   July 28, 2020

There were three invited talks at this year’s virtual ICML. The first was given by Lester Mackey, who highlighted some of his efforts to apply machine learning for social good. During the talk he also outlined several ways in which social good efforts can be organised, and described numerous social good problems that would benefit from the community’s attention.

by   -   July 27, 2020
Figure 1: How DARTS and other weight-sharing methods replace the discrete assignment of one of four operations o ∈ O to an edge e with a θ-weighted combination of their outputs. At each edge e in the network, the value at input node e_in is passed to each operation in O = {1: Conv 3×3, 2: Conv 5×5, 3: Pool 3×3, 4: Skip Connect}; the value at output node e_out is then the sum of the operation outputs, weighted by parameters θ_{e,o} ∈ [0,1] that satisfy Σ_{o∈O} θ_{e,o} = 1.

By Misha Khodak and Liam Li

Neural architecture search (NAS) — selecting which neural model to use for your learning problem — is a promising but computationally expensive direction for automating and democratizing machine learning. The weight-sharing method, whose initial success at dramatically accelerating NAS surprised many in the field, has come under scrutiny due to its poor performance as a surrogate for full model-training (a miscorrelation problem known as rank disorder) and inconsistent results on recent benchmarks. In this post, we give a quick overview of weight-sharing and argue in favor of its continued use for NAS.
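The θ-weighted mixture described in Figure 1 can be sketched in a few lines. This is a toy illustration under our own assumptions: the lambda functions below merely stand in for real Conv/Pool operations, and the architecture parameters are arbitrary values rather than learned ones. The key mechanics are that a softmax over learnable parameters yields mixture weights θ in [0,1] summing to 1, and the edge output is the θ-weighted sum of all candidate operation outputs:

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy stand-ins for the edge's candidate operations O
# (Conv 3x3, Conv 5x5, Pool 3x3, Skip Connect in the figure).
ops = [
    lambda x: 0.9 * x,                    # placeholder for "Conv 3x3"
    lambda x: 0.8 * x,                    # placeholder for "Conv 5x5"
    lambda x: np.full_like(x, x.mean()),  # placeholder for "Pool 3x3"
    lambda x: x,                          # Skip Connect
]

alpha = np.array([0.5, -0.2, 0.1, 1.0])  # learnable architecture parameters
theta = softmax(alpha)                   # weights in [0,1] that sum to 1

x = np.array([1.0, 2.0, 3.0])            # value at input node e_in
# Value at e_out: sum of operation outputs weighted by theta.
mixed = sum(t * op(x) for t, op in zip(theta, ops))
```

After search, the discrete architecture is typically recovered by keeping only the operation with the largest θ on each edge (here, `ops[np.argmax(theta)]`); the gap between this mixture and the discretised network is one source of the rank-disorder problem mentioned above.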

by   -   July 24, 2020
Clear Light Bulb Planter on Grey Rock. Photographer: Singkham

By Roger Taylor, Chair of the Centre for Data Ethics and Innovation

Failure to use data effectively means we cannot deal with the most pressing issues that face us today, such as discrimination. Addressing this requires institutions that are fit to enable responsible use of data and technology for the public good, engaging civil society and the public as well as industry and government.

by   -   July 20, 2020
A human thumb next to our OmniTact sensor, with a US penny for scale.

By Akhil Padmanabha and Frederik Ebert

Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has attracted significant interest for learning-based robotics due to its low cost and rich signal. For example, GelSight sensors have been used for learning to insert USB cables (Li et al., 2014), roll a die (Tian et al., 2019), and grasp objects (Calandra et al., 2017).

by   -   July 9, 2020
Time Lapse Photography of Blue Lights. Photographer: Pixabay

By Carly Kind, Director of the Ada Lovelace Institute

Public scrutiny is critical for trust in, and democratic legitimacy for, the use of data-driven decision-making and algorithmic systems in our society.

We stand at the intersection of monumental and ongoing ruptures that will transform the data governance landscape. If they are to have a positive long-term influence it will be because we have heeded their lessons.



