
Learning to efficiently plan robust frictional multi-object grasps: interview with Wisdom Agboh


18 November 2022




In their paper, Learning to Efficiently Plan Robust Frictional Multi-Object Grasps, Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar and Ken Goldberg trained a neural network to plan robust multi-object grasps. Wisdom summarises the key aspects of the work below:

What is the topic of the research in your paper?

When skilled waiters clear tables, they grasp multiple utensils and dishes in a single motion. On the other hand, robots in warehouses are inefficient and can only pick a single object at a time. This research leverages neural networks and fundamental robot grasping theorems to build an efficient robot system that grasps multiple objects at once.

Could you tell us about the implications of your research and why it is an interesting area for study?

Amid increasing demand and labour shortages, fast and efficient robot picking systems in warehouses have become indispensable for delivering your online orders quickly. This research studies the fundamentals of multi-object robot grasping, a task that is easy for humans yet extremely challenging for robots.

The decluttering problem (top), where objects must be transported to a packing box. Wisdom and colleagues found robust frictional multi-object grasps (bottom) to efficiently declutter the scene.

Could you explain your methodology?

We leverage a novel necessary condition for frictional multi-object grasping to train MOG-Net, a neural network model, on real examples. MOG-Net predicts how many objects a robot will grasp from a target group. We then use MOG-Net in a novel grasp planner to quickly generate robust multi-object grasps.

In this video, you can see the robot grasping with MOG-Net in action.
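To illustrate the planning idea, here is a minimal Python sketch (not the authors' code) of how a learned predictor of the number of objects grasped could be used to rank candidate object groups. The names (predict_num_grasped, plan_multi_object_grasp) and the toy distance-based heuristic are hypothetical stand-ins for MOG-Net and the real planner.

```python
# Minimal sketch: a learned "number of objects grasped" predictor driving
# a multi-object grasp planner. All names and the heuristic are hypothetical.

from itertools import combinations
from typing import List, Sequence, Tuple

Object = Tuple[float, float]  # planar object centre (x, y), for illustration only


def predict_num_grasped(group: Sequence[Object]) -> float:
    """Stand-in for a trained model (e.g. MOG-Net) that predicts how many
    objects in `group` would end up in the gripper. Toy heuristic: groups
    whose objects are close together score higher."""
    if len(group) == 1:
        return 1.0
    xs = [o[0] for o in group]
    ys = [o[1] for o in group]
    spread = (max(xs) - min(xs)) + (max(ys) - min(ys))
    return max(0.0, len(group) - spread)  # tighter clusters -> more objects grasped


def plan_multi_object_grasp(objects: List[Object], max_group: int = 4) -> Sequence[Object]:
    """Enumerate candidate object groups and return the group with the
    highest predicted number of grasped objects."""
    best_group: Sequence[Object] = ()
    best_score = float("-inf")
    for k in range(1, min(max_group, len(objects)) + 1):
        for group in combinations(objects, k):
            score = predict_num_grasped(group)
            if score > best_score:
                best_group, best_score = group, score
    return best_group


if __name__ == "__main__":
    scene = [(0.0, 0.0), (0.05, 0.02), (0.4, 0.5), (0.06, 0.01)]
    print(plan_multi_object_grasp(scene))  # prefers the tight cluster of objects
```

In the paper, the predictor is trained on real grasp outcomes and the planner searches over physically feasible grasps; this sketch only conveys the predict-then-rank structure.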

What were your main findings?

In physical robot experiments, we found that MOG-Net is 220% faster and 16% more successful than a single-object picking system.

What further work are you planning in this area?

Can robots clear your breakfast table by grasping multiple dishes and utensils at once? Can they tidy your room by picking up multiple items of clothing from the floor at once? These are the exciting future research directions we will explore.

About Wisdom


Wisdom Agboh is a Research Fellow at the University of Leeds, and a Visiting Scholar at the University of California, Berkeley. He is an award-winning AI and robotics expert.

Read the research in full

Learning to Efficiently Plan Robust Frictional Multi-Object Grasps
Wisdom C. Agboh, Satvik Sharma, Kishore Srinivas, Mallika Parulekar, Gaurav Datta, Tianshuang Qiu, Jeffrey Ichnowski, Eugen Solowjow, Mehmet Dogar and Ken Goldberg



