Learning to efficiently plan robust frictional multi-object grasps: interview with Wisdom Agboh

18 November 2022




In their paper, Learning to Efficiently Plan Robust Frictional Multi-Object Grasps, Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar and Ken Goldberg trained a neural network to plan robust multi-object grasps. Wisdom summarises the key aspects of the work below:

What is the topic of the research in your paper?

When skilled waiters clear tables, they grasp multiple utensils and dishes in a single motion. On the other hand, robots in warehouses are inefficient and can only pick a single object at a time. This research leverages neural networks and fundamental robot grasping theorems to build an efficient robot system that grasps multiple objects at once.

Could you tell us about the implications of your research and why it is an interesting area for study?

Amidst increasing demand and labour shortages, fast and efficient robot picking systems in warehouses have become indispensable for delivering online orders quickly. This research studies the fundamentals of multi-object robot grasping, a task that is easy for humans yet extremely challenging for robots.

The decluttering problem (top), where objects must be transported to a packing box. Wisdom and colleagues found robust frictional multi-object grasps (bottom) to efficiently declutter the scene.

Could you explain your methodology?

We leverage a novel necessary condition for frictional multi-object grasping to train MOG-Net, a neural network model, on real examples. Given a target group of objects, MOG-Net predicts how many of them the robot will grasp. We use MOG-Net in a novel robot grasp planner to quickly generate robust multi-object grasps.
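To make the idea concrete, here is a minimal, hypothetical sketch of how a learned predictor of "number of objects grasped" could drive a grasp planner. All names below (MOGNet as a small MLP, plan_grasp, the feature inputs) are illustrative assumptions for this post, not the authors' actual code, architecture, or API.

```python
# Hypothetical sketch: a learned model scores candidate grasps by the
# number of objects it expects each grasp to pick up, and the planner
# selects the highest-scoring candidate. Names are illustrative only.
import torch
import torch.nn as nn

class MOGNet(nn.Module):
    """Small MLP: grasp/scene features -> logits over object counts 0..max_objects."""
    def __init__(self, feature_dim: int, max_objects: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, max_objects + 1),  # one class per possible count
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)  # logits over object counts

def plan_grasp(model: MOGNet, candidates: list[torch.Tensor]) -> int:
    """Return the index of the candidate grasp with the highest expected
    number of objects grasped (a simple stand-in planning policy)."""
    model.eval()
    best_idx, best_expected = 0, -1.0
    with torch.no_grad():
        for i, feats in enumerate(candidates):
            probs = torch.softmax(model(feats), dim=-1)
            counts = torch.arange(probs.shape[-1], dtype=probs.dtype)
            expected = float((probs * counts).sum())  # E[# objects grasped]
            if expected > best_expected:
                best_idx, best_expected = i, expected
    return best_idx
```

Ranking candidates by the expected object count is just one plausible way to use such a predictor inside a planner; the planner described in the paper is more involved.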

In this video, you can see MOG-Net-based robot grasping in action.

What were your main findings?

In physical robot experiments, we found that MOG-Net is 220% faster and 16% more successful than a single-object picking system.

What further work are you planning in this area?

Can robots clear your breakfast table by grasping multiple dishes and utensils at once? Can they tidy your room floor by picking up multiple clothes at once? These are the exciting future research directions we will explore.

About Wisdom


Wisdom Agboh is a Research Fellow at the University of Leeds, and a Visiting Scholar at the University of California, Berkeley. He is an award-winning AI and robotics expert.

Read the research in full

Learning to Efficiently Plan Robust Frictional Multi-Object Grasps
Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar and Ken Goldberg




AIhub Editor is dedicated to free high-quality information about AI.





