by   -   December 8, 2019

The 33rd annual Conference on Neural Information Processing Systems (NeurIPS), happening this week in Vancouver, brings together more than 10,000 researchers and practitioners from all fields engaged in fundamental work in Machine Learning and Artificial Intelligence.

by   -   December 7, 2019

That’s right! You better not run, you better not hide, you better watch out for brand new AI-themed holiday material on AIhub!

by   -   December 7, 2019

By Sudeep Dasari

This post is cross-listed at the SAIL Blog and the CMU ML blog.

In the last decade, we’ve seen learning-based systems provide transformative solutions for a wide range of perception and reasoning problems, from recognizing objects in images to recognizing and translating human speech. Recent progress in deep reinforcement learning (i.e. integrating deep neural networks into reinforcement learning systems) suggests that the same kind of success could be realized in automated decision making domains. If fruitful, this line of work could allow learning-based systems to tackle active control tasks, such as robotics and autonomous driving, alongside the passive perception tasks to which they have already been successfully applied.

by and   -   November 21, 2019

AI Policy Matters is a regular column in AI Matters featuring summaries and commentary based on postings that appear twice a month in the AI Matters blog.

by   -   November 21, 2019


Michael Littman and Dave Ackley revisit the meaning of life (the subject of the second Computing Up conversation) in the context of politics and society and human destiny.

by   -   November 7, 2019


We have collected some of the month’s most interesting tweets about AI.

by   -   November 6, 2019

By David Gaddy

When learning to follow natural language instructions, neural networks tend to be very data hungry – they require a huge number of examples pairing language with actions in order to learn effectively. This post is about reducing those heavy data requirements by first watching actions in the environment before moving on to learning from language data. Inspired by the idea that it is easier to map language to meanings that have already been formed, we introduce a semi-supervised approach that aims to separate the formation of abstractions from the learning of language.
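To make the two-stage pattern concrete, here is a generic, illustrative sketch: stage 1 forms abstractions from plentiful, language-free action data, and stage 2 grounds language in that pre-learned space using only a small paired dataset. The modules, dimensions, and objectives below (an inverse-dynamics-style pretraining loss and a latent-alignment loss) are placeholders of our own, not the authors' model or code.

```python
# Generic sketch of a semi-supervised, two-stage setup: stage 1 learns action
# abstractions without language; stage 2 maps language onto the frozen
# abstraction space from few paired examples. All names and objectives are
# illustrative placeholders, not the authors' code.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, LATENT_DIM, VOCAB = 16, 4, 32, 100

# Stage-1 modules: encode a state transition into a latent abstraction and
# decode the action that produced it (an inverse-dynamics-style objective).
transition_enc = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(),
                               nn.Linear(64, LATENT_DIM))
action_dec = nn.Linear(LATENT_DIM, ACT_DIM)

def stage1_pretrain(states, next_states, actions, epochs=20):
    """Learn abstractions from plentiful, language-free environment traces."""
    opt = torch.optim.Adam(list(transition_enc.parameters()) +
                           list(action_dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        z = transition_enc(torch.cat([states, next_states], dim=-1))
        loss = nn.functional.cross_entropy(action_dec(z), actions)
        opt.zero_grad(); loss.backward(); opt.step()

# Stage-2 module: a small language encoder mapped into the frozen latent space.
class LangEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, 32)
        self.proj = nn.Linear(32, LATENT_DIM)

    def forward(self, tokens):                  # tokens: (batch, seq_len) ints
        return self.proj(self.emb(tokens).mean(dim=1))

lang_enc = LangEncoder()

def stage2_ground(tokens, states, next_states, epochs=20):
    """Map instructions onto pre-learned abstractions from few paired examples."""
    with torch.no_grad():                        # abstractions stay frozen
        target = transition_enc(torch.cat([states, next_states], dim=-1))
    opt = torch.optim.Adam(lang_enc.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(lang_enc(tokens), target)
        opt.zero_grad(); loss.backward(); opt.step()

# Toy usage with random placeholder data: many unlabeled transitions,
# only a handful of (instruction, transition) pairs.
S, S2 = torch.randn(2048, STATE_DIM), torch.randn(2048, STATE_DIM)
A = torch.randint(0, ACT_DIM, (2048,))
stage1_pretrain(S, S2, A)
tok = torch.randint(0, VOCAB, (32, 6))
stage2_ground(tok, torch.randn(32, STATE_DIM), torch.randn(32, STATE_DIM))
```

The point of this arrangement is that the abstraction space never sees language during pretraining, so the later language-learning stage only has to align instructions with meanings that have already been formed.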

by   -   October 3, 2019

Every month, we gather some of the most interesting tweets capturing the latest results, debates, and events.

by   -   October 3, 2019

Inspired by a WIRED profile of Karl Friston, Michael Littman and Dave Ackley talk about theories of everything, and theories thereof.

by   -   October 3, 2019

By Anusha Nagabandi

Dexterous manipulation with multi-fingered hands is a grand challenge in robotics: the versatility of the human hand is as yet unrivaled by the capabilities of robotic systems, and bridging this gap will enable more general and capable robots. Although some real-world tasks (like picking up a television remote or a screwdriver) can be accomplished with simple parallel-jaw grippers, there are countless tasks (like functionally using the remote to change the channel or using the screwdriver to drive in a screw) in which the dexterity enabled by redundant degrees of freedom is critical. In fact, dexterous manipulation is defined as being object-centric, with the goal of controlling object movement through precise control of forces and motions — something that is not possible without the ability to simultaneously impact the object from multiple directions. For example, using only two fingers to attempt common tasks such as opening the lid of a jar or hitting a nail with a hammer would quickly run into the challenges of slippage, complex contact forces, and underactuation.

Although dexterous multi-fingered hands can indeed enable flexibility and success on a wide range of manipulation skills, many of these more complex behaviors are also notoriously difficult to control: they require finely balancing contact forces, breaking and re-establishing contacts repeatedly, and maintaining control of unactuated objects. Success in such settings requires a sufficiently dexterous hand, as well as an intelligent policy that can endow such a hand with the appropriate control strategy. We study precisely this in our work on Deep Dynamics Models for Learning Dexterous Manipulation.
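As a rough illustration of the model-based control loop behind this kind of approach, here is a generic sketch: fit a neural dynamics model to logged transitions, then plan at every control step by sampling candidate action sequences, rolling them out through the learned model, and executing the first action of the highest-return sequence. This shows the general pattern only; the dimensions, reward, and random-shooting planner below are our own placeholders, not the specific PDDM planner or code.

```python
# Generic sketch of control with a learned "deep dynamics model" plus
# random-shooting model-predictive control. Dimensions, reward, and planner
# are illustrative placeholders, not the paper's algorithm or code.
import torch
import torch.nn as nn

STATE_DIM, ACT_DIM, HORIZON, N_CANDIDATES = 24, 16, 10, 500

# Learned dynamics model: predicts the next state from (state, action).
dynamics = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 256), nn.ReLU(),
                         nn.Linear(256, STATE_DIM))

def train_dynamics(states, actions, next_states, epochs=20):
    """Fit the dynamics model to logged transitions with a regression loss."""
    opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)
    for _ in range(epochs):
        pred = dynamics(torch.cat([states, actions], dim=-1))
        loss = nn.functional.mse_loss(pred, next_states)
        opt.zero_grad(); loss.backward(); opt.step()

def reward(states):
    """Placeholder task reward, e.g. negative distance of an object to a goal."""
    return -states[:, :3].norm(dim=-1)

@torch.no_grad()
def plan_action(state):
    """Sample candidate action sequences, roll them out through the learned
    model, and return the first action of the highest-return sequence."""
    seqs = torch.rand(N_CANDIDATES, HORIZON, ACT_DIM) * 2 - 1   # actions in [-1, 1]
    s = state.expand(N_CANDIDATES, STATE_DIM).clone()
    returns = torch.zeros(N_CANDIDATES)
    for t in range(HORIZON):
        s = dynamics(torch.cat([s, seqs[:, t]], dim=-1))
        returns += reward(s)
    return seqs[returns.argmax(), 0]

# Toy usage with random placeholder data; a real loop alternates data
# collection with model retraining and replans at every control step.
S = torch.randn(4096, STATE_DIM)
A = torch.randn(4096, ACT_DIM)
S2 = torch.randn(4096, STATE_DIM)
train_dynamics(S, A, S2)
next_action = plan_action(torch.randn(STATE_DIM))
```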

by   -   August 16, 2019


#IJCAI2019 ended today. Besides talks, panel discussions, and presentations, the winners of this year's prestigious IJCAI awards shared their views on promising future directions for the field.

by   -   August 14, 2019

By Nicholas Carlini

Whenever we design new technologies, it is important to ask “how will this affect people’s privacy?” The question is especially pressing for machine learning, where models are often trained on sensitive user data and then released to the public. For example, in the last few years we have seen models trained on users’ private emails, text messages, and medical records.

This article covers two aspects of our upcoming USENIX Security paper that investigates to what extent neural networks memorize rare and unique aspects of their training data.
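To make "memorizing rare and unique training data" concrete, here is a simplified, illustrative probe in the spirit of planting a canary: insert a random secret into the training set, then check whether the trained model ranks that secret above equally random secrets it never saw. The `log_prob` callable and the score below are placeholders for illustration, not the paper's exact metric or procedure.

```python
# Simplified illustration of a canary-style memorization probe. `log_prob` is
# a stand-in for any function scoring a string under a trained model, and the
# score is an exposure-style quantity, not the paper's exact metric.
import math
import random

def make_canary(rng, digits=9):
    """A synthetic secret in a fixed format, used purely to test memorization."""
    return "my secret number is " + "".join(rng.choice("0123456789") for _ in range(digits))

def memorization_probe(log_prob, planted_canary, n_alternatives=10_000, seed=0):
    """Rank the planted canary among random alternatives under the model.

    Returns (rank, score): rank 1 means the model prefers the planted canary
    to every alternative; score = log2(n_alternatives / rank) grows as the
    planted canary becomes easier to single out."""
    rng = random.Random(seed)
    target = log_prob(planted_canary)
    others = [log_prob(make_canary(rng)) for _ in range(n_alternatives)]
    rank = 1 + sum(1 for lp in others if lp > target)
    return rank, math.log2(n_alternatives / rank)

# Toy usage with a stand-in scorer; a real test would query the trained model.
dummy_log_prob = lambda text: -float(len(set(text)))
print(memorization_probe(dummy_log_prob, make_canary(random.Random(42))))
```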

Specifically, we quantitatively study to what extent the following problem actually occurs in practice:

by   -   August 14, 2019

Meet Claus Aranha, Assistant Professor at the University of Tsukuba, Center for Artificial Intelligence Research (C-AIR).

by   -   August 14, 2019

As we did yesterday, we bring you the best tweets covering the major talks and events at IJCAI 2019.

by   -   August 13, 2019

The team behind the Libratus program were today announced as the latest recipients of the Marvin Minsky Medal, given by the IJCAI organisation for Outstanding Achievements in AI. Libratus made headlines in January 2017 when it beat four top professional poker players in a 20-day heads-up no-limit Texas hold’em competition.

