by   -   June 1, 2019

In March of this year I was lucky to travel to my first academic conference (thanks very much to EPSRC Rise and AAAI). Feeling a little bit nervous, extremely jet-lagged, and completely in awe of Stanford’s gorgeous campus, I attended the symposium on “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness”.

by   -   June 1, 2019

In this episode of Computing Up, Michael Littman and Dave Ackley discuss intersubjectivity.

by   -   May 27, 2019


OECD and partner countries formally adopted the first set of intergovernmental policy guidelines on Artificial Intelligence (AI) today, agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.

by   -   May 27, 2019


By Marvin Zhang and Sharad Vikram

Imagine a robot trying to learn how to stack blocks and push objects using visual inputs from a camera feed. In order to minimize cost and safety concerns, we want our robot to learn these skills with minimal interaction time, but efficient learning from complex sensory inputs such as images is difficult. This work introduces SOLAR, a new model-based reinforcement learning (RL) method that can learn skills – including manipulation tasks on a real Sawyer robot arm – directly from visual inputs with under an hour of interaction. To our knowledge, SOLAR is the most efficient RL method for solving real-world image-based robotics tasks.
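To make the "model-based RL" idea concrete, here is a minimal, self-contained sketch of the general collect-data / fit-model / plan-with-the-model loop. It is not SOLAR's actual algorithm (SOLAR fits a structured latent-variable model to images and improves the policy with LQR-style updates); the toy environment and the helper names (ToyEnv, fit_linear_dynamics, random_shooting) are hypothetical and for illustration only.

```python
# Illustrative sketch of a generic model-based RL loop (not the SOLAR algorithm).
# All names here (ToyEnv, fit_linear_dynamics, random_shooting) are hypothetical.
import numpy as np

class ToyEnv:
    """1-D point mass: state = [position, velocity], action = force."""
    def reset(self):
        self.s = np.array([1.0, 0.0])
        return self.s.copy()

    def step(self, a):
        pos, vel = self.s
        vel = vel + 0.1 * float(a)
        pos = pos + 0.1 * vel
        self.s = np.array([pos, vel])
        cost = pos ** 2 + 0.01 * float(a) ** 2   # drive the position to zero
        return self.s.copy(), cost

def fit_linear_dynamics(S, A, S_next):
    """Least-squares fit of s' ~ [s, a] @ W -- a stand-in for a learned dynamics model."""
    X = np.hstack([S, A])
    W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
    return W

def random_shooting(W, s, horizon=10, n_candidates=100):
    """Return the first action of the lowest-cost random action sequence under the model."""
    best_a, best_cost = 0.0, np.inf
    for _ in range(n_candidates):
        seq = np.random.uniform(-1, 1, horizon)
        sim_s, total = s.copy(), 0.0
        for a in seq:
            sim_s = np.hstack([sim_s, a]) @ W          # simulate with the learned model
            total += sim_s[0] ** 2 + 0.01 * a ** 2
        if total < best_cost:
            best_a, best_cost = seq[0], total
    return best_a

env = ToyEnv()
S, A, S_next = [], [], []
for _ in range(200):                                   # 1) collect experience with random actions
    s = env.reset()
    for _ in range(20):
        a = np.random.uniform(-1, 1)
        s2, _ = env.step(a)
        S.append(s); A.append([a]); S_next.append(s2)
        s = s2

W = fit_linear_dynamics(np.array(S), np.array(A), np.array(S_next))   # 2) fit the model

s, total_cost = env.reset(), 0.0                       # 3) act by planning against the model
for _ in range(50):
    a = random_shooting(W, s)
    s, c = env.step(a)
    total_cost += c
print(f"cost with model-based planning: {total_cost:.2f}")
```

The appeal of this pattern, and the reason SOLAR can learn real-robot tasks in under an hour, is that the learned model lets the agent squeeze much more policy improvement out of each real-world interaction than model-free methods do.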

by   -   May 16, 2019

“From the Archive” features historical content shining a light on past successes in AI.

This week we feature RoboCup highlights from 1997 to 2011.

by   -   May 15, 2019
Figure 1: Our model-based meta reinforcement learning algorithm enables a legged robot to adapt online in the face of an unexpected system malfunction (note the broken front right leg).

By Anusha Nagabandi and Ignasi Clavera

Humans have the ability to seamlessly adapt to changes in their environments: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children who can walk on flat ground can quickly adapt their gait to walk uphill without having to relearn how to walk. This adaptation is critical for functioning in the real world.

by   -   May 1, 2019

The Partnership on AI has announced an initiative to define best practices for transparency in machine learning.

by   -   May 1, 2019

In this episode of Computing Up, David Jensen, Professor and Director of the Knowledge Discovery Laboratory at the University of Massachusetts Amherst, talks with Michael Littman and Dave Ackley – because causality.

by   -   April 22, 2019

By Marion Neumann
Welcome to the eighth interview in our series profiling senior AI researchers. This month we are especially happy to interview our SIGAI advisory board member, Thomas Dietterich, Director of Intelligent Systems at the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University.

by   -   April 22, 2019

By Annie Xie

In many animals, tool-use skills emerge from a combination of observational learning and experimentation. For example, by watching one another, chimpanzees can learn how to use twigs to “fish” for insects. Similarly, capuchin monkeys demonstrate the ability to wield sticks as sweeping tools to pull food closer to themselves. While one might wonder whether these are just illustrations of “monkey see, monkey do,” we believe these tool-use abilities indicate a greater level of intelligence.

by   -   April 11, 2019

What’s hot on arXiv? Here are the most tweeted papers from the past month.

by   -   April 11, 2019

In this episode of Computing Up, Michael Littman and Dave Ackley discuss Rich Sutton’s “The Bitter Lesson” and Rodney Brooks’ “A Better Lesson”.