Artificial intelligence agents begin to learn new skills from watching videos


by Allie McFadden
29 June 2019




Whether it’s tying a tie, making slime, fixing a leaky faucet, or some other daily task, millions of people watch how-to videos to learn new skills. Now, artificially intelligent (AI) agents at Georgia Tech are watching videos to learn how to make sandwiches and complete other routine tasks.

According to Ashley Edwards, a recent computer science Ph.D. graduate and lead author of the paper "Imitating Latent Policies from Observation," there is a wealth of existing data that could be used more efficiently to teach robots and artificial agents how to do a variety of tasks.


The new approach detailed in the paper uses imitation learning from observation to teach agents how to complete tasks like making a sandwich, playing a video game, and even driving a car, all by watching videos. In most experiments, Edwards and her fellow researchers’ algorithm was able to complete a task in 200 to 300 steps, which Edwards said is a substantial improvement over existing methods.

 

“This approach is exciting because it unpeels another layer for how we can train artificial agents to work with humans. We have hardly skimmed the surface of this problem space, but this is a great next step,” said Charles Isbell, dean designate of the College of Computing and paper co-author.

The new approach begins with an agent watching a video and guessing which actions are being taken; this guessed mapping from what the agent sees to what it would do is what the paper calls a latent policy. Given that guess, the agent tries to predict how the scene will change and, from that, learns what to do. Once these agents are placed into an actual environment, they can take what they have learned from watching videos and apply it to real-world actions.
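
To make the mechanism more concrete, the sketch below illustrates the general "latent policy" idea in Python with PyTorch: a small dynamics network predicts the next observation under each possible latent action, the latent action that best explains each observed transition becomes a training target, and a policy network learns to predict that latent action from the current observation. The number of latent actions, the network sizes, and the toy training loop are illustrative assumptions, not the architecture or training procedure from the paper.

    # Minimal sketch of learning a latent policy from observation-only data.
    # Illustrative only: shapes, networks, and the loop are assumptions,
    # not the authors' implementation.
    import torch
    import torch.nn as nn

    N_LATENT_ACTIONS = 4   # assumed number of discrete latent actions
    OBS_DIM = 8            # assumed size of each observation

    class LatentDynamics(nn.Module):
        """Predicts the next observation under each possible latent action."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(OBS_DIM, 64), nn.ReLU(),
                nn.Linear(64, OBS_DIM * N_LATENT_ACTIONS),
            )

        def forward(self, obs):
            return self.net(obs).view(-1, N_LATENT_ACTIONS, OBS_DIM)

    class LatentPolicy(nn.Module):
        """Scores (as logits) each latent action for the current observation."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(OBS_DIM, 64), nn.ReLU(),
                nn.Linear(64, N_LATENT_ACTIONS),
            )

        def forward(self, obs):
            return self.net(obs)

    dynamics, policy = LatentDynamics(), LatentPolicy()
    optimizer = torch.optim.Adam(
        list(dynamics.parameters()) + list(policy.parameters()), lr=1e-3)

    # Stand-in for "video" data: consecutive observation pairs, no action labels.
    obs_t = torch.randn(256, OBS_DIM)
    obs_next = torch.randn(256, OBS_DIM)

    for _ in range(100):
        preds = dynamics(obs_t)                                   # (batch, actions, obs)
        errors = ((preds - obs_next.unsqueeze(1)) ** 2).mean(-1)  # (batch, actions)
        # The latent action that best explains each transition becomes the
        # target the latent policy is trained to predict.
        best_action = errors.argmin(dim=1)
        loss = (errors.min(dim=1).values.mean()
                + nn.functional.cross_entropy(policy(obs_t), best_action))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

In a real environment, a short second phase can then map each latent action onto an actual control the agent can execute, which is what lets the video-only training transfer to real-world actions.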

In previous research on “imitation from observation,” humans either had to physically show agents how to perform an action or had to train a dynamics model the computer could use to learn a new task, both of which can be time-consuming, expensive, and potentially dangerous.

“There are thousands of videos out there documenting people doing things, but it can be hard to know what they are doing in a way that can be applied to artificial systems,” said Edwards.

For example, there are countless hours of dashcam footage from autonomous vehicles. The videos, however, rarely include detailed telemetry about the vehicle, such as the angle of the steering wheel when the car moved a certain way. Edwards and her team hope that their algorithm will be able to analyze video footage and piece together not only how to perform an action, but why.

During their research, Edwards and her co-authors ran four experiments to test their idea. These included classic control reinforcement learning problems, such as getting a cart to balance a pole and teaching a mountain car to swing itself up to the top of a mountain, as well as a platform game.
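
Cart-pole balancing and mountain car are standard, easy-to-reproduce benchmarks, so it is straightforward to sketch how observation-only "demonstrations" for such tasks could be gathered. The snippet below uses the Gymnasium library and a random stand-in policy purely for illustration; the environment names and the data-collection loop are assumptions, not the authors' experimental code.

    # Collect observation-only rollouts (a stand-in for action-free "videos")
    # from classic control tasks. Illustrative harness, not the paper's setup.
    import gymnasium as gym

    def collect_observation_only_episode(env_name, max_steps=200):
        """Roll out a policy and keep only the observations, discarding actions."""
        env = gym.make(env_name)
        obs, _ = env.reset(seed=0)
        observations = [obs]
        for _ in range(max_steps):
            action = env.action_space.sample()   # random stand-in for an expert
            obs, _, terminated, truncated, _ = env.step(action)
            observations.append(obs)
            if terminated or truncated:
                break
        env.close()
        return observations

    for name in ["CartPole-v1", "MountainCar-v0"]:
        demo = collect_observation_only_episode(name)
        print(name, "observations collected:", len(demo))

In practice the observations would come from an expert rather than a random policy, but the key point stands: the learner only ever sees the observation sequence, never the actions that produced it.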

Their approach beat the expert in two of the experiments and achieved state-of-the-art performance in all four.


Despite these achievements, the current model only handles discrete actions, like moving right, left, forward, or backward one step at a time. Edwards and her team are now working to extend the approach to smoother, continuous actions.

This research is one of 18 accepted papers from the Machine Learning Center at Georgia Tech (ML@GT) and was presented at the 36th International Conference on Machine Learning (ICML), held June 9 through 15 in Long Beach, Calif.

IMLS, the board for ICML, is a financial supporter of AIhub. Neither ICML nor IMLS had editorial oversight of this article, and the opinions herein are not those of ICML or IMLS.




Allie McFadden is the communications officer for the Machine Learning Center at Georgia Tech and the Constellations Center for Equity in Computing at Georgia Tech.



