by   -   August 6, 2020

Timnit Gebru

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Timnit Gebru about “Racial Representation and Systemic Transformation”.

by   -   August 5, 2020

The success of deep learning over the last decade, particularly in computer vision, has depended greatly on large training data sets. Even though progress in this area has boosted the performance of many tasks such as object detection, recognition, and segmentation, the main bottleneck for future improvement is the need for more labeled data. Self-supervised learning is among the best alternatives for learning useful representations from the data itself. In this article, we will briefly review the self-supervised learning methods in the literature and discuss the findings of a recent self-supervised learning paper from ICLR 2020 [14].
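To make the idea concrete, here is a minimal sketch of one classic self-supervised pretext task, rotation prediction: pseudo-labels are generated from the unlabeled images themselves, so no human annotation is needed. This is an illustrative example of the general approach, not the method of the ICLR 2020 paper discussed in the article; the function name is ours.

```python
import numpy as np

def rotation_pretext_batch(images, rng=None):
    """Build a self-supervised rotation-prediction task from unlabeled
    images: each image is rotated by 0, 90, 180 or 270 degrees, and the
    rotation index becomes the pseudo-label a network must predict."""
    if rng is None:
        rng = np.random.default_rng(0)
    rotated, labels = [], []
    for img in images:
        k = int(rng.integers(0, 4))       # pick one of four rotations
        rotated.append(np.rot90(img, k))  # rotate in the image plane
        labels.append(k)                  # pseudo-label: rotation index
    return np.stack(rotated), np.array(labels)

# Stand-ins for unlabeled images: four random 8x8 grayscale arrays.
images = np.random.default_rng(1).random((4, 8, 8))
batch, labels = rotation_pretext_batch(images)
```

A network trained to predict `labels` from `batch` must learn something about object orientation and structure, and the features it learns can then be reused for downstream tasks with far less labeled data.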

by   -   August 4, 2020
domain boundaries
Figure 1: Domain boundaries are rarely clear. Therefore, it is hard to set up definite domain descriptions for all possible domains.

By Zhongqi Miao and Ziwei Liu

The World is Continuously Varying

Imagine we want to train a self-driving car in New York so that we can take it all the way to Seattle without tediously driving it for over 48 hours. We hope our car can handle all kinds of environments on the trip and get us safely to our destination. We know that road conditions and views can be very different. It is intuitive to simply collect road data from this trip, let the car learn from every possible condition, and hope it becomes the perfect self-driving car for our New York to Seattle trip. It needs to understand the traffic and skyscrapers of big cities like New York and Chicago, the more unpredictable weather of Seattle, the mountains and forests of Montana, and all kinds of country views, farmland, animals, etc. However, how much data is enough? How many cities should we collect data from? How many weather conditions should we consider? We never know, and these questions never stop.

by   -   August 3, 2020

AIhub arXiv roundup

What’s hot on arXiv? Here are the most tweeted papers that were uploaded onto arXiv during July 2020.

Results are powered by Arxiv Sanity Preserver.

by   -   July 31, 2020

ICML
The second invited talk at ICML2020 was given by Brenna Argall. Her presentation covered the use of machine learning within the domain of assistive machines for rehabilitation. She described the efforts of her lab towards customising assistive autonomous machines so that users can decide the level of control they keep, and how much autonomy they hand over to the machine.

by   -   July 30, 2020

Eun Seo Jo

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Eun Seo Jo about “The History that Defines our Technological Future”.

by   -   July 29, 2020


Twitter users will have seen the proliferation of “I have a joke” tweets in their feeds over the past few days. The AI community produced some gems, so we’ve collected a selection here for your amusement.

by   -   July 28, 2020

ICML
There were three invited talks at this year’s virtual ICML. The first was given by Lester Mackey, and he highlighted some of his efforts to do some good with machine learning. During the talk he also outlined several ways in which social good efforts can be organised, and described numerous social good problems that would benefit from the community’s attention.

by   -   July 27, 2020
DARTS
Figure 1: How DARTS and other weight-sharing methods replace the discrete assignment of one of four operations o ∈ O to an edge e with a θ-weighted combination of their outputs. At each edge e in the network, the value at input node e_in is passed to each operation in O = {Conv 3×3, Conv 5×5, Pool 3×3, Skip Connect}; the value at output node e_out will then be the sum of the operation outputs weighted by parameters θ_{e,o} ∈ [0,1] that satisfy Σ_{o∈O} θ_{e,o} = 1.

By Misha Khodak and Liam Li

Neural architecture search (NAS) — selecting which neural model to use for your learning problem — is a promising but computationally expensive direction for automating and democratizing machine learning. The weight-sharing method, whose initial success at dramatically accelerating NAS surprised many in the field, has come under scrutiny due to its poor performance as a surrogate for full model-training (a miscorrelation problem known as rank disorder) and inconsistent results on recent benchmarks. In this post, we give a quick overview of weight-sharing and argue in favor of its continued use for NAS.
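The continuous relaxation in Figure 1 can be sketched in a few lines: the discrete choice of one operation per edge is replaced by a softmax-weighted sum of all candidate operations' outputs, with the architecture parameters learned jointly with the shared weights. The toy operations below are stand-ins of our own invention, not the real convolution and pooling layers of a NAS search space.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical stand-ins for the four candidate operations on an edge;
# in a real search space these would be Conv 3x3, Conv 5x5, Pool 3x3, Skip.
ops = [
    lambda x: 3.0 * x,        # stand-in for Conv 3x3
    lambda x: 5.0 * x,        # stand-in for Conv 5x5
    lambda x: x - x.mean(),   # stand-in for Pool 3x3
    lambda x: x,              # skip connect
]

def mixed_op(x, alpha):
    """DARTS-style relaxation of one edge: instead of assigning a single
    discrete operation, output the theta-weighted sum of all operation
    outputs, where theta = softmax(alpha) lies in [0,1] and sums to 1."""
    theta = softmax(alpha)
    return sum(t * op(x) for t, op in zip(theta, ops)), theta

x = np.array([1.0, 2.0, 3.0])
alpha = np.array([0.5, 0.1, -0.3, 0.2])  # learnable architecture parameters
out, theta = mixed_op(x, alpha)
```

Because every edge shares one set of operation weights across all architectures in the search space, evaluating a candidate architecture only requires reading off the θ values rather than training that architecture from scratch, which is the source of both weight-sharing's speed and the rank-disorder concerns the post discusses.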

by   -   July 24, 2020
Clear Light Bulb Planter on Grey Rock. Photographer: Singkham

By Roger Taylor, Chair of the Centre for Data Ethics and Innovation

Failure to use data effectively means we cannot deal with the most pressing issues that face us today, such as discrimination. Addressing this requires institutions that are fit to enable responsible use of data and technology for the public good, engaging civil society and the public as well as industry and government.

by   -   July 23, 2020

AIhub coffee corner

The AIhub coffee corner captures the musings of AI experts over a 30-minute conversation. This month we discuss conferences and whether they will ever be the same again now we’ve had a taste of the virtual.

by   -   July 22, 2020
DeepCap
Qualitative results from DeepCap

Marc Habermann received an Honorable Mention in the Best Student Paper category at CVPR 2020 for work with Weipeng Xu, Michael Zollhöfer, Gerard Pons-Moll and Christian Theobalt on “DeepCap: Monocular Human Performance Capture Using Weak Supervision”. Here, Marc tells us more about their research, the main results of their paper, and plans for further improvements to their model.

by   -   July 21, 2020

Abeba Birhane

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Abeba Birhane about “Robot Rights? Exploring Algorithmic Colonization”.

by   -   July 20, 2020
OmniTact

Human thumb next to our OmniTact sensor, and a US penny for scale.

By Akhil Padmanabha and Frederik Ebert

Touch has been shown to be important for dexterous manipulation in robotics. Recently, the GelSight sensor has caught significant interest for learning-based robotics due to its low cost and rich signal. For example, GelSight sensors have been used for learning inserting USB cables (Li et al, 2014), rolling a die (Tian et al. 2019) or grasping objects (Calandra et al. 2017).

by   -   July 17, 2020

ICML

There was lots going on at the virtual ICML conference this week. The event was bookended by tutorials and workshops, with the invited talks and poster sessions happening mid-week. There were also numerous opportunities to get involved in socials, and a chance to have a say on the format for future editions at the Town Hall meeting. Here is a selection of tweets from attendees and organisers.

