
articles

October 19, 2020

By Ashvin Nair and Abhishek Gupta

Robots trained with reinforcement learning (RL) have the potential to be used across a huge variety of challenging real-world problems. To apply RL to a new problem, you typically set up the environment, define a reward function, and train the robot to solve the task by allowing it to explore the new environment from scratch. While this may eventually work, these “online” RL methods are data-hungry, and repeating this data-inefficient process for every new problem makes it difficult to apply online RL to real-world robotics.
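The online RL workflow described above — set up an environment, define a reward, and let the agent explore from scratch — can be sketched end to end in a few lines. This is a toy illustration, not the post's method: the corridor environment, its reward function, and the tabular Q-learning agent are all stand-ins chosen for brevity.

```python
import random

# Toy environment: a corridor of states 0..4, start at 0, reward only on
# reaching state 4. A stand-in for "setting up the environment and defining
# a reward function"; a real robot replaces this with sensors and actuators.
class Corridor:
    def __init__(self, length=5):
        self.length = length

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = left, 1 = right
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.length - 1, self.state + move))
        done = self.state == self.length - 1
        reward = 1.0 if done else 0.0
        return self.state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Online RL from scratch: the agent explores a fresh environment and
    updates a tabular Q-function after every single transition."""
    random.seed(seed)
    env = Corridor()
    q = [[0.0, 0.0] for _ in range(env.length)]
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration of the new environment.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda act: q[s][act])
            s2, r, done = env.step(a)
            # TD update; q[terminal] is never updated so stays 0,
            # making the bootstrap term vanish at episode end.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
```

Every data point here is gathered by interacting with the environment during training — exactly the per-task, from-scratch exploration that makes online RL expensive to repeat on physical robots.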

October 16, 2020

By Elan Rosenfeld

We propose a method for making black-box functions provably robust to input manipulations. By training an ensemble of classifiers on randomly flipped training labels, we can use results from randomized smoothing to certify our classifier against label-flipping attacks—the larger the margin, the larger the certified radius of robustness. Using other types of noise allows for certifying robustness to other data poisoning attacks.

Adversarial examples—targeted, human-imperceptible modifications to a test input that cause a deep network to fail catastrophically—have taken the machine learning community by storm, with a large body of literature dedicated to understanding and preventing this phenomenon (see these surveys). Understanding why deep networks consistently make these mistakes and how to fix them is one way researchers hope to make progress towards more robust artificial intelligence.
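The ensemble-over-flipped-labels idea from the excerpt above can be sketched mechanically. Everything concrete here is invented for illustration — the nearest-centroid base classifier, the tiny 1-D dataset, and the flip probability are not from the paper — and the paper's actual certificate converts the vote statistics into a provable radius via randomized-smoothing bounds, rather than using the raw vote margin shown.

```python
import random

random.seed(0)
FLIP_P = 0.2  # per-label flip probability (illustrative choice)

# Tiny 1-D training set: class 0 clustered near 0.0, class 1 near 1.0.
train_x = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
train_y = [0, 0, 0, 1, 1, 1]

def flip(labels, p):
    """Flip each binary label independently with probability p."""
    return [1 - y if random.random() < p else y for y in labels]

def nearest_centroid(xs, ys):
    """Train a trivial base classifier: predict the class whose mean is closer."""
    means = {}
    for c in (0, 1):
        pts = [x for x, y in zip(xs, ys) if y == c]
        means[c] = sum(pts) / len(pts) if pts else float("inf")
    return lambda x: min((0, 1), key=lambda c: abs(x - means[c]))

# Train each ensemble member on its own independently flipped label set.
ensemble = [nearest_centroid(train_x, flip(train_y, FLIP_P)) for _ in range(201)]

def smoothed_predict(x):
    """Majority vote over the ensemble; also report the vote margin,
    the quantity a randomized-smoothing certificate grows with."""
    votes = [clf(x) for clf in ensemble]
    n1 = sum(votes)
    pred = 1 if n1 > len(votes) / 2 else 0
    margin = abs(2 * n1 - len(votes)) / len(votes)  # in [0, 1]
    return pred, margin

pred, margin = smoothed_predict(0.95)
```

Because each member sees noisy labels, an adversary who flips a few training labels can only shift the vote distribution slightly — a large margin means many labels must be poisoned before the majority prediction changes, which is the intuition behind the certified radius.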

October 12, 2020

By Jonathan O’Callaghan

On 18 September, the European Commission published an independent expert report that looks at some of the outstanding safety and ethical issues around connected and automated vehicles (CAVs).

We spoke to three experts involved in the report about what steps they think still need to be taken to make CAVs safe, what challenges still need to be overcome, and how we can prepare for a future in which both computer-driven and human-driven cars are on our roads.

October 7, 2020

AIhub arXiv roundup

What’s hot on arXiv? Here are the most tweeted papers uploaded to arXiv during September 2020.

Results are powered by Arxiv Sanity Preserver.

October 5, 2020

By Sophia Stiles

Editor’s Note: The following blog is a special guest post by a recent graduate of Berkeley BAIR’s AI4ALL summer program for high school students.

AI4ALL is a nonprofit dedicated to increasing diversity and inclusion in AI education, research, development, and policy.

The idea for AI4ALL began in early 2015 with Prof. Olga Russakovsky, then a Stanford University Ph.D. student, AI researcher Prof. Fei-Fei Li, and Rick Sommer – Executive Director of Stanford Pre-Collegiate Studies. They founded SAILORS as a summer outreach program for high school girls to learn about human-centered AI, which later became AI4ALL. In 2016, Prof. Anca Dragan started the Berkeley/BAIR AI4ALL camp, geared towards high school students from underserved communities.

October 1, 2020

Modelling extreme events in order to evaluate and mitigate their risk is a fundamental goal in many areas, including extreme weather, financial crashes, and unexpectedly high demand for online services. To mitigate such risk, it is vital to be able to generate a wide range of extreme, yet realistic, scenarios. Researchers from the National University of Singapore and IIT Bombay have developed an approach to do just that.

September 30, 2020


This month saw the European Conference on AI (ECAI 2020) go digital. Included in the programme were five plenary talks. In this article we summarise the talk by Professor Carme Torras, who gave an overview of her group’s work on assistive AI and discussed the ethics of this field.

September 22, 2020

By Eliza Kosoy, Jasmine Collins and David Chan

Despite recent advances in artificial intelligence (AI) research, human children are still by far the best learners we know of, learning impressive skills like language and high-level reasoning from very little data. Children’s learning is supported by highly efficient, hypothesis-driven exploration: in fact, they explore so well that many machine learning researchers have been inspired to put videos like the one below in their talks to motivate research into exploration methods.

September 17, 2020

With this year’s International Conference on Machine Learning (ICML) over, it is time for another instalment of this series. As in last year’s post, I shall cover several papers that caught my attention because of their use of topological concepts—however, unlike last year, I shall not restrict the selection to papers using topological data analysis (TDA).

September 14, 2020


By Misha Laskin, Aravind Srinivas, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel

A remarkable characteristic of human intelligence is our ability to learn tasks quickly. Most humans can learn reasonably complex skills like tool-use and gameplay within just a few hours, and understand the basics after only a few attempts. This suggests that data-efficient learning may be a meaningful part of developing broader intelligence.



