Artificial intelligence agents begin to learn new skills from watching videos


by Allie McFadden
29 June 2019




Whether it’s tying a tie, making slime, fixing a leaky faucet, or some other daily task, millions of people watch how-to videos to learn new skills. Now, artificially intelligent (AI) agents at Georgia Tech are watching videos to learn how to make sandwiches and complete other routine tasks.

According to Ashley Edwards, a recent computer science Ph.D. graduate and lead author of the paper “Imitating Latent Policies from Observation,” there is a lot of data that could be used more efficiently to teach robots and artificial agents how to do a variety of tasks.

GT Computing alumna Ashley Edwards.

The new approach detailed in the paper uses imitation learning from observation to teach agents how to complete tasks like making a sandwich, playing a videogame, and even driving a car, all by watching videos. In most experiments, Edwards and her fellow researchers’ algorithm was able to complete a task in 200 to 300 steps, which Edwards said is a substantial improvement over existing methods.

 

“This approach is exciting because it unpeels another layer for how we can train artificial agents to work with humans. We have hardly skimmed the surface of this problem space, but this is a great next step,” said Charles Isbell, dean designate of the College of Computing and paper co-author.

The new approach begins with an agent watching a video and guessing what actions are being taken. In the paper, this is referred to as a latent policy. Given that guess, the agent tries to predict movements and learn what to do. Ultimately, once these agents are placed into an actual environment, they will be able to take what they have learned from watching videos and apply the knowledge to real-world actions.
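To make that idea concrete, here is a minimal sketch (not the authors’ released code, and with hypothetical network sizes) of the latent-policy phase: from observation-only pairs of consecutive states, it jointly trains a small latent policy that scores a handful of latent actions and a forward model that predicts the next state from a state and a latent action, so no real action labels are ever needed.

```python
# A sketch of the latent-policy idea: learn from (state, next_state) pairs only.
import torch
import torch.nn as nn

STATE_DIM, N_LATENT = 4, 3   # hypothetical sizes for a toy, low-dimensional task

class LatentPolicy(nn.Module):
    """Scores each latent action z given a state s."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_LATENT))
    def forward(self, s):
        return torch.softmax(self.net(s), dim=-1)   # probability of each latent action

class ForwardModel(nn.Module):
    """Predicts the next state from the current state and a latent action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, STATE_DIM))
    def forward(self, s, z_onehot):
        return self.net(torch.cat([s, z_onehot], dim=-1))

policy, model = LatentPolicy(), ForwardModel()
opt = torch.optim.Adam(list(policy.parameters()) + list(model.parameters()), lr=1e-3)

# Stand-in observation-only data (in practice, consecutive video frames or states).
s, s_next = torch.randn(256, STATE_DIM), torch.randn(256, STATE_DIM)

for _ in range(100):
    # Predict the next state under every latent action, then weight the
    # per-action prediction errors by the latent policy's probabilities.
    eye = torch.eye(N_LATENT)
    preds = torch.stack([model(s, eye[z].expand(len(s), -1)) for z in range(N_LATENT)], dim=1)
    errors = ((preds - s_next.unsqueeze(1)) ** 2).mean(dim=-1)   # (batch, N_LATENT)
    loss = (policy(s) * errors).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

A short interaction phase would then map each latent action to a real environment action, which is the step the article says took only a few hundred steps.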

In previous research on “imitation from observation,” a human had to physically demonstrate an action to the agent, or a dynamics model of the environment had to be trained so the agent could learn the new task; both approaches are time-consuming, expensive, and potentially dangerous.

“There are thousands of videos out there documenting people doing things, but it can be hard to know what they are doing in a way that can be applied to artificial systems,” said Edwards.

For example, there are countless hours of dashcam footage from autonomous vehicles. These videos, however, rarely include detailed telemetry about the vehicle, such as the angle of the steering wheel when the car moved a certain way. Edwards and her team hope that their algorithm will be able to analyze video footage and piece together not only how to perform an action, but why.

During their research, Edwards and her co-authors ran four experiments to test their approach. These included classic control reinforcement learning problems, such as getting a cart to balance a pole and teaching a mountain car to swing itself up to the top of a mountain, as well as a platform game.
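As an illustration of the observation-only setting behind those classic control experiments, the sketch below (assuming the gymnasium package and its CartPole-v1 environment, with a random policy standing in for an expert demonstrator) records demonstrations as state sequences and deliberately discards the action labels, which is exactly the information the agent must infer for itself.

```python
# Collect observation-only demonstrations: keep states, throw away actions.
import gymnasium as gym

env = gym.make("CartPole-v1")
demos = []            # each demo is a list of (state, next_state) pairs -- no actions
for _ in range(5):
    state, _ = env.reset()
    pairs, done = [], False
    while not done:
        action = env.action_space.sample()            # stand-in for an expert policy
        next_state, _, terminated, truncated, _ = env.step(action)
        pairs.append((state, next_state))             # the action itself is discarded
        state, done = next_state, terminated or truncated
    demos.append(pairs)

print(f"collected {len(demos)} observation-only demonstrations")
```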

Their approach beat the expert in two of the experiments and achieved state-of-the-art performance in all four.

Dean Designate Charles Isbell.

Despite these results, the current model only handles discrete actions, such as moving right, left, forward, or backward one step at a time. Edwards and her team are now working to extend the approach to smoother, continuous actions.

This research is one of 18 accepted papers from the Machine Learning Center at Georgia Tech (ML@GT) and was presented at the 36th International Conference on Machine Learning (ICML), held June 9–15 in Long Beach, Calif.

IMLS, the board for ICML, is a financial supporter of AIhub. Neither ICML nor IMLS had editorial oversight of this article, and the opinions herein are not those of ICML or IMLS.




Allie McFadden is the communications officer for the Machine Learning Center at Georgia Tech and the Constellations Center for Equity in Computing at Georgia Tech.



