AIhub.org
 

Making sense of vision and touch: #ICRA2019 best paper award video and interview


28 July 2019




PhD candidate Michelle A. Lee from the Stanford AI Lab won the best paper award at ICRA 2019 for her work “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks”. You can read the paper on arXiv.

Audrow Nash was there to capture her pitch.

And here’s the official video about the work.

Full reference
Lee, Michelle A., Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, and Jeannette Bohg. “Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks.” arXiv preprint arXiv:1810.10191 (2018).




AIhub is dedicated to free high-quality information about AI.




            AIhub is supported by:



©2025.05 - Association for the Understanding of Artificial Intelligence