 

Learning to efficiently plan robust frictional multi-object grasps: interview with Wisdom Agboh


18 November 2022




In their paper, Learning to Efficiently Plan Robust Frictional Multi-Object Grasps, Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar and Ken Goldberg trained a neural network to plan robust multi-object grasps. Wisdom summarises the key aspects of the work below:

What is the topic of the research in your paper?

When skilled waiters clear tables, they grasp multiple utensils and dishes in a single motion. On the other hand, robots in warehouses are inefficient and can only pick a single object at a time. This research leverages neural networks and fundamental robot grasping theorems to build an efficient robot system that grasps multiple objects at once.

Could you tell us about the implications of your research and why it is an interesting area for study?

Amidst increasing demand and labour shortages, fast and efficient robot picking systems have become indispensable for quickly delivering online orders. This research studies the fundamentals of multi-object robot grasping, a task that is easy for humans yet extremely challenging for robots.

The decluttering problem (top), where objects must be transported to a packing box. Wisdom and colleagues found robust frictional multi-object grasps (bottom) to efficiently declutter the scene.

Could you explain your methodology?

We leverage a novel necessary condition for frictional multi-object grasping to train MOG-Net, a neural network model, on real grasp examples. Given a target group of objects, MOG-Net predicts how many of them the robot will grasp. We then use MOG-Net inside a novel grasp planner to quickly generate robust multi-object grasps.
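To make this planning loop concrete, here is a minimal, self-contained Python sketch. Everything in it is an illustrative assumption rather than the authors' implementation: objects are reduced to small discs along a line, a grasp is just a gripper position, passes_necessary_condition is a simple fit test standing in for the paper's frictional multi-object grasping necessary condition, and predict_num_grasped stands in for the trained MOG-Net model.

```python
import random

GRIPPER_MAX_WIDTH = 0.08  # assumed jaw opening in metres (illustrative)

def objects_between_jaws(grasp_x, objects):
    # Objects whose centres fall inside the open jaws (1D simplification).
    half = GRIPPER_MAX_WIDTH / 2
    return [o for o in objects if abs(o["x"] - grasp_x) <= half]

def passes_necessary_condition(grasped):
    # Stand-in for the frictional multi-object grasping necessary
    # condition: the grasped discs must at least fit between the jaws.
    return 0 < sum(2 * o["r"] for o in grasped) <= GRIPPER_MAX_WIDTH

def predict_num_grasped(grasped):
    # Stand-in for MOG-Net: the real system queries a neural network
    # trained on physical grasp outcomes; here we simply count.
    return len(grasped)

def plan_multi_object_grasp(objects, num_candidates=100):
    best_x, best_count = None, 0
    for _ in range(num_candidates):
        x = random.uniform(0.0, 1.0)          # sample a candidate grasp
        grasped = objects_between_jaws(x, objects)
        if not passes_necessary_condition(grasped):
            continue                          # reject infeasible grasps early
        count = predict_num_grasped(grasped)  # rank by predicted object count
        if count > best_count:
            best_x, best_count = x, count
    return best_x, best_count

# Example: a cluttered line of ten small discs.
scene = [{"x": random.uniform(0.0, 1.0), "r": 0.01} for _ in range(10)]
print(plan_multi_object_grasp(scene))
```

The design choice mirrored here is the one the answer describes: a cheap analytic necessary condition prunes infeasible candidates before the learned model ranks the survivors by predicted object count.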

In this video, you can see robot grasping with MOG-Net in action.

What were your main findings?

In physical robot experiments, we found that MOG-Net is 220% faster and 16% more successful than a single-object picking system.

What further work are you planning in this area?

Can robots clear your breakfast table by grasping multiple dishes and utensils at once? Can they tidy your room floor by picking up multiple clothes at once? These are the exciting future research directions we will explore.

About Wisdom


Wisdom Agboh is a Research Fellow at the University of Leeds, and a Visiting Scholar at the University of California, Berkeley. He is an award-winning AI and robotics expert.

Read the research in full

Learning to Efficiently Plan Robust Frictional Multi-Object Grasps
Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar and Ken Goldberg




AIhub is dedicated to free high-quality information about AI.








