
Artificial intelligence agents begin to learn new skills from watching videos

by Allie McFadden
29 June 2019




Whether it’s tying a tie, making slime, fixing a leaky faucet, or some other daily task, millions of people watch how-to videos to learn new skills. Now, artificially intelligent (AI) agents at Georgia Tech are watching videos to learn how to make sandwiches and complete other routine tasks.

According to Ashley Edwards, a recent computer science Ph.D. graduate and lead author of the paper "Imitating Latent Policies from Observation," there is a lot of existing data that could be used more efficiently to teach robots and artificial agents how to perform a variety of tasks.

GT Computing alumna Ashley Edwards.

The new approach detailed in the paper uses imitation learning from observation to teach agents how to complete tasks like making a sandwich, playing a video game, and even driving a car, all by watching videos. In most experiments, the researchers' algorithm was able to complete a task in 200 to 300 steps, which Edwards said is a substantial improvement over existing methods.

 

“This approach is exciting because it unpeels another layer for how we can train artificial agents to work with humans. We have hardly skimmed the surface of this problem space, but this is a great next step,” said Charles Isbell, dean designate of the College of Computing and paper co-author.

The new approach begins with an agent watching a video and guessing what actions are being taken. In the paper, this is referred to as a latent policy. Given that guess, the agent tries to predict movements and learn what to do. Ultimately, once these agents are placed into an actual environment, they will be able to take what they have learned from watching videos and apply the knowledge to real-world actions.
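To make the idea concrete, the sketch below shows in rough outline how a latent policy could be trained from action-free pairs of consecutive observations. It is an illustration under assumptions, not the authors' code: it assumes low-dimensional state observations, a small fixed number of latent actions, and PyTorch, and it omits the generative models a real system would need for raw video frames.

```python
import torch
import torch.nn as nn

N_LATENT = 4  # assumed number of latent actions (a free hyperparameter here)

class LatentDynamics(nn.Module):
    """Predicts the next observation for every possible latent action."""
    def __init__(self, obs_dim, n_latent=N_LATENT):
        super().__init__()
        self.n_latent = n_latent
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, obs_dim * n_latent),
        )

    def forward(self, obs):
        # One predicted next observation per latent action: (batch, n_latent, obs_dim)
        return self.net(obs).view(obs.shape[0], self.n_latent, -1)

class LatentPolicy(nn.Module):
    """Guesses which latent action was taken in a given state."""
    def __init__(self, obs_dim, n_latent=N_LATENT):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )

    def forward(self, obs):
        return torch.softmax(self.net(obs), dim=-1)  # distribution over latent actions

def training_step(dynamics, policy, obs, next_obs, optimizer):
    """One update from a batch of consecutive frames -- note: no action labels."""
    preds = dynamics(obs)                                         # (batch, n_latent, obs_dim)
    errors = ((preds - next_obs.unsqueeze(1)) ** 2).mean(dim=-1)  # error per latent action
    # The latent action whose prediction best matches the observed transition
    # is treated as the one that "explains" it.
    dynamics_loss = errors.min(dim=1).values.mean()
    # The latent policy is trained so its expected prediction matches the data.
    expected = (policy(obs).unsqueeze(-1) * preds).sum(dim=1)
    policy_loss = ((expected - next_obs) ** 2).mean()
    loss = dynamics_loss + policy_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper, a final alignment step then uses a small amount of interaction with the real environment to learn which real action each latent action corresponds to.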

In previous research on "imitation from observation," humans either had to physically demonstrate an action to the agent or had to train it against a hand-built dynamics model of the task, both of which are time-consuming, expensive, and potentially dangerous.

“There are thousands of videos out there documenting people doing things, but it can be hard to know what they are doing in a way that can be applied to artificial systems,” said Edwards.

For example, there are countless hours of dashcam footage from autonomous vehicles. The videos, however, rarely include detailed telemetry about the vehicle, such as the angle of the steering wheel when the car made a particular movement. Edwards and her team hope that their algorithm will be able to analyze such footage and piece together not only how to do an action, but why.

During their research, Edwards and her co-authors ran four experiments to test their idea. They used classic control reinforcement learning problems, such as getting a cart to balance a pole and teaching an underpowered car to swing itself up to the top of a mountain, as well as a platform game.
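For context, these classic control tasks are standard benchmarks available in Gym-style toolkits. A minimal interaction loop, with a random policy standing in for a learned one, looks roughly like this (gymnasium's CartPole-v1 is used purely for illustration; the original experiments used the tooling of the time):

```python
import gymnasium as gym

# CartPole: keep the pole balanced by pushing the cart left or right.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
total_reward, steps = 0.0, 0

for _ in range(500):
    action = env.action_space.sample()  # a learned policy would choose here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    steps += 1
    if terminated or truncated:
        break

print(f"Episode ended after {steps} steps with return {total_reward}")
env.close()
```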

Their approach beat the expert in two of the experiments and achieved state-of-the-art results in all four.

Dean Designate Charles Isbell.

Despite these results, the current model handles only discrete actions, such as moving right, left, forward, or backward one step at a time. Edwards and her team are therefore continuing to push their work toward smoother, continuous actions.
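In Gym-style terms, this is the gap between a discrete action space and a continuous one; the sketch below is only meant to illustrate the distinction:

```python
from gymnasium import spaces

# Discrete: what the current model handles -- a finite menu of moves.
discrete_actions = spaces.Discrete(4)  # e.g. left, right, forward, backward

# Continuous: the target for future work -- e.g. a real-valued steering angle.
continuous_actions = spaces.Box(low=-1.0, high=1.0, shape=(1,))

print(discrete_actions.sample())    # an integer in {0, 1, 2, 3}
print(continuous_actions.sample())  # a float array in [-1.0, 1.0]
```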

This research is one of 18 accepted papers from the Machine Learning Center at Georgia Tech (ML@GT) and was presented at the 36th International Conference on Machine Learning (ICML), held June 9 through 15 in Long Beach, Calif.

IMLS, the board for ICML, is a financial supporter of AIhub. Neither ICML nor IMLS had editorial oversight of this article, and the opinions herein are not those of ICML or IMLS.




Allie McFadden is the communications officer for the Machine Learning Center at Georgia Tech and the Constellations Center for Equity in Computing at Georgia Tech.









