AIhub.org
 

Artificial intelligence agents begin to learn new skills from watching videos


by Allie McFadden
29 June 2019




Whether it’s tying a tie, making slime, fixing a leaky faucet, or some other daily task, millions of people watch how-to videos to learn new skills. Now, artificially intelligent (AI) agents at Georgia Tech are watching videos to learn how to make sandwiches and complete other routine tasks.

According to Ashley Edwards, a recent computer science Ph.D. graduate and lead author of the paper Imitating Latent Policies from Observation, there is a lot of data that could be used more efficiently to teach robots and artificial agents how to do a variety of tasks.

GT Computing alumna Ashley Edwards.

The new approach detailed in the paper uses imitation learning from observation to teach agents how to complete tasks like making a sandwich, playing a videogame, and even driving a car, all by watching videos. In most experiments, Edwards and her fellow researchers’ algorithm was able to learn how to complete a task within 200 to 300 steps, which Edwards said is a substantial improvement over existing methods.

 

“This approach is exciting because it unpeels another layer for how we can train artificial agents to work with humans. We have hardly skimmed the surface of this problem space, but this is a great next step,” said Charles Isbell, dean designate of the College of Computing and paper co-author.

The new approach begins with an agent watching a video and guessing what actions are being taken. In the paper, this is referred to as a latent policy. Given that guess, the agent tries to predict movements and learn what to do. Ultimately, once these agents are placed into an actual environment, they will be able to take what they have learned from watching videos and apply the knowledge to real-world actions.
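The two-stage recipe described above (guess a latent action for each observed transition, learn to predict its effect, then remap those latent actions to real ones once the agent is in an actual environment) can be illustrated with a toy sketch. The Python below is purely illustrative and assumes a one-dimensional chain world with two hypothesized latent actions; the tabular models, the function names, and the simple fitting loop are simplifications of our own, not the authors' implementation, which uses learned neural models.

# A toy, purely illustrative sketch of imitation from observation with
# latent actions (not the authors' code). The "video" is just a sequence
# of states in a 1-D chain world; everything is tabular for clarity.
import numpy as np

N = 10                          # chain positions 0 .. N-1; the goal is N-1
REAL_ACTIONS = [-1, +1]         # move left / move right
K = 2                           # number of hypothesized latent actions

# The expert demonstration: states only, no action labels.
demo_states = list(range(N))    # 0, 1, 2, ..., N-1
demo_pairs = list(zip(demo_states[:-1], demo_states[1:]))

# Phase 1: fit a latent forward model from the state-only demonstration.
# Each latent action z predicts a state change delta[z]; every observed
# transition is assigned to the latent action that explains it best.
delta = np.zeros(K)
for _ in range(20):
    assign = [int(np.argmin([(s2 - s1 - d) ** 2 for d in delta]))
              for s1, s2 in demo_pairs]
    for z in range(K):
        diffs = [s2 - s1 for (s1, s2), a in zip(demo_pairs, assign) if a == z]
        if diffs:
            delta[z] = np.mean(diffs)

def latent_policy(state):
    # With a single behaviour in the demo every transition gets the same
    # latent action; a richer model would condition on the state.
    return int(np.bincount(assign, minlength=K).argmax())

# Phase 2: once in the real environment, remap each latent action to the
# real action whose observed effect best matches the latent prediction.
def step(s, a):                 # toy environment dynamics
    return int(np.clip(s + a, 0, N - 1))

remap = {}
for z in range(K):
    trials = [(abs(step(5, a) - 5 - delta[z]), a) for a in REAL_ACTIONS]
    remap[z] = min(trials)[1]

# Act with the learned policy: choose a latent action, map it to a real one.
s = 0
for _ in range(N):
    s = step(s, remap[latent_policy(s)])
print("reached state", s, "(goal is", N - 1, ")")

The key point the sketch tries to convey is that the demonstration contains no action labels: the agent invents latent actions that explain the observed state changes, and only touches the real action space during the short remapping phase.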

In previous research on imitation from observation, humans either had to physically show agents how to perform an action or train a computer with a dynamics model to learn a new task, both of which are time-consuming, expensive, and potentially dangerous.

“There are thousands of videos out there documenting people doing things, but it can be hard to know what they are doing in a way that can be applied to artificial systems,” said Edwards.

For example, there are countless hours of dashcam footage from autonomous vehicles. The videos, however, rarely include detailed telemetry about the vehicle, such as the angle of the steering wheel when the car made a particular maneuver. Edwards and her team hope that their algorithm will be able to analyze video footage and piece together not only how to perform an action, but why.

During their research, Edwards and her co-authors performed four experiments to test their idea. They used classic control reinforcement learning problems, such as getting a cart to balance a pole and teaching an underpowered car to swing itself up to the top of a mountain, as well as a platform game. An illustrative snippet for collecting state-only demonstrations from one of these benchmarks follows below.
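The classic control benchmarks mentioned above are available in standard reinforcement learning toolkits. As a purely illustrative aside (not from the paper), the snippet below shows how state-only demonstrations, i.e. trajectories with the action labels discarded, might be collected from CartPole using the gymnasium package; the random policy merely stands in for a trained expert.

# Illustrative only, not from the paper: collecting state-only
# "demonstrations" (trajectories with the action labels discarded) from the
# CartPole classic control task. Requires the gymnasium package.
import gymnasium as gym

env = gym.make("CartPole-v1")
demos = []                                  # list of state trajectories
for episode in range(5):
    states = []
    obs, _ = env.reset(seed=episode)
    done = False
    while not done:
        states.append(obs)
        action = env.action_space.sample()  # placeholder for an expert policy
        obs, _, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
    demos.append(states)

print(len(demos), "state-only trajectories collected")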

Their approach was able to beat the expert in two of the experiments and was considered “state-of-the-art” in all four.

Dean Designate Charles Isbell.

Despite its achievements, the current model only handles discrete actions, like moving right, left, forward, or backward one step at a time. Edwards and her team are now working to extend the approach to smoother, continuous actions.

This research is one of 18 accepted papers from the Machine Learning Center at Georgia Tech (ML@GT) and was presented at the 36th International Conference on Machine Learning (ICML), held June 9 through 15 in Long Beach, Calif.

IMLS, the board for ICML, is a financial supporter of AIhub. Neither ICML nor IMLS had editorial oversight of this article, and the opinions herein are not those of ICML or IMLS.




Allie McFadden is the communications officer for the Machine Learning Center at Georgia Tech and the Constellations Center for Equity in Computing at Georgia Tech.



