October 23, 2020


This Sunday sees the start of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). This year the event is online and free for anyone to attend. Content will be available on demand from the platform from 25 October to 25 November 2020.

October 22, 2020

Hosted by Dylan Doyle-Burke and Jessie J Smith, Radical AI is a podcast featuring the voices of the future in the field of artificial intelligence ethics. In this episode Jess and Dylan chat to Michael Madaio about practices for co-designing ethical technologies.

October 21, 2020


On 4 September 2020 an online conference on the topic of space and artificial intelligence took place. The event was organised by CLAIRE and the European Space Agency (ESA) in association with ECAI2020. The program included five keynote talks, a panel discussion, and 17 contributed presentations covering a range of AI methods and areas of space technology, including space operations and Earth observation.

October 20, 2020

AIhub | Tweets round-up

Here we bring you a selection of popular tweets about AI from the last couple of months.

October 19, 2020

By Ashvin Nair and Abhishek Gupta

Robots trained with reinforcement learning (RL) have the potential to be used across a huge variety of challenging real-world problems. To apply RL to a new problem, you typically set up the environment, define a reward function, and train the robot to solve the task by allowing it to explore the new environment from scratch. While this may eventually work, these “online” RL methods are data-hungry, and repeating this data-inefficient process for every new problem makes it difficult to apply online RL to real-world robotics problems.
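The online RL recipe described above (set up an environment, hand-define a reward function, and let the agent explore from scratch) can be sketched with tabular Q-learning on a toy chain world. The environment, reward, and hyperparameters here are purely illustrative, not the method from any particular paper:

```python
import random

# Toy "environment": a 5-state chain; the agent must walk right to the goal.
N_STATES, ACTIONS = 5, [-1, +1]
GOAL = N_STATES - 1

def step(state, action):
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0   # the hand-designed reward function
    return next_state, reward, next_state == GOAL

def train(episodes=300, eps=0.5, alpha=0.5, gamma=0.9, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):                     # every episode explores from scratch
        s = 0
        for _ in range(100):                      # cap episode length
            # Epsilon-greedy: heavy random exploration, since nothing is known a priori.
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda a: Q[(s, a)]))
            s2, r, done = step(s, a)
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

Even on this five-state toy, the agent needs hundreds of exploratory episodes to recover the obvious always-move-right policy, which hints at why exploring from scratch is so data-inefficient on real robots.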

October 16, 2020
We propose a method for making black-box functions provably robust to input manipulations. By training an ensemble of classifiers on randomly flipped training labels, we can use results from randomized smoothing to certify our classifier against label-flipping attacks—the larger the margin, the larger the certified radius of robustness. Using other types of noise allows for certifying robustness to other data poisoning attacks.

By Elan Rosenfeld

Adversarial examples—targeted, human-imperceptible modifications to a test input that cause a deep network to fail catastrophically—have taken the machine learning community by storm, with a large body of literature dedicated to understanding and preventing this phenomenon (see these surveys). Understanding why deep networks consistently make these mistakes and how to fix them is one way researchers hope to make progress towards more robust artificial intelligence.
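The ensemble-over-flipped-labels construction described above can be sketched in a few lines. The nearest-centroid learner, the Gaussian-blob data, and all parameters below are illustrative stand-ins for the black-box classifier and setup in the work, not the authors' actual experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary dataset: two well-separated Gaussian blobs (illustrative only).
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def train_centroid(X, y):
    """Nearest-centroid learner standing in for an arbitrary black-box classifier."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return lambda x: int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

def smoothed_votes(X, y, x_test, flip_prob=0.1, n_models=200, seed=1):
    """Train an ensemble on independently label-flipped copies of the training
    set and collect its votes on x_test: the randomized-smoothing construction."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n_models):
        flips = rng.random(len(y)) < flip_prob
        y_noisy = np.where(flips, 1 - y, y)      # each label flipped w.p. flip_prob
        clf = train_centroid(X, y_noisy)
        votes[clf(np.asarray(x_test))] += 1
    return votes

votes = smoothed_votes(X, y, [2.0, 2.0])
margin = abs(votes[1] - votes[0]) / votes.sum()  # larger margin -> larger certified radius
```

The sketch only shows the ensemble-and-vote construction; in the work itself, the vote margin is fed into the randomized-smoothing analysis to certify how many training labels an adversary could flip without changing the prediction.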

October 15, 2020
Researchers test a prototype of a new diabetes device for prick-free glucose monitoring.

New technology can quickly and accurately monitor glucose levels in people with diabetes without painful finger pricks to draw blood. A palm-sized device developed by researchers at the University of Waterloo uses radar and artificial intelligence (AI) to non-invasively read blood inside the human body.

October 14, 2020

AI seminars

Here you can find a list of the AI-related seminars that are scheduled to take place between now and the end of November 2020. We’ve also listed recent past seminars that are available for you to watch. All events detailed here are free and open for anyone to attend virtually.

October 13, 2020


In this episode of the Radical AI podcast, Jess and Dylan chat to Anima Anandkumar about democratizing AI.

October 12, 2020

By Jonathan O’Callaghan

On 18 September, the European Commission published an independent expert report that looks at some of the outstanding safety and ethical issues around connected and automated vehicles (CAVs).

We spoke to three experts involved in the report about what steps they think still need to be taken to make CAVs safe, what challenges still need to be overcome, and how we can prepare for a future in which both computer-driven and human-driven cars are on our roads.

October 9, 2020

AI-Europe event

An online event organised by the Royal Society, the Centre National de la Recherche Scientifique (CNRS) and the Max Planck Society took place on 7 October. It focussed on AI in Europe and considered topics such as European collaboration, trustworthy AI and the role of regulation.

October 8, 2020

The COVID-19 pandemic is the greatest global healthcare crisis of our generation, presenting enormous challenges to medical research, including clinical trials. Advances in machine learning are providing an opportunity to adapt clinical trials and lay the groundwork for smarter, faster and more flexible clinical trials in the future.

October 7, 2020

AIhub arXiv roundup

What’s hot on arXiv? Here are the most tweeted papers that were uploaded onto arXiv during September 2020.

Results are powered by Arxiv Sanity Preserver.

October 6, 2020


In this episode of the Radical AI podcast, Jess and Dylan chat to Jenn Wortman Vaughan about building responsible AI.

October 5, 2020

By Sophia Stiles

Editor’s Note: The following blog is a special guest post by a recent graduate of Berkeley BAIR’s AI4ALL summer program for high school students.

AI4ALL is a nonprofit dedicated to increasing diversity and inclusion in AI education, research, development, and policy.

The idea for AI4ALL began in early 2015 with Prof. Olga Russakovsky, then a Stanford University Ph.D. student, AI researcher Prof. Fei-Fei Li, and Rick Sommer – Executive Director of Stanford Pre-Collegiate Studies. They founded SAILORS as a summer outreach program for high school girls to learn about human-centered AI, which later became AI4ALL. In 2016, Prof. Anca Dragan started the Berkeley/BAIR AI4ALL camp, geared towards high school students from underserved communities.

