Making sense of vision and touch: #ICRA2019 best paper award video and interview

by AIhub Editor
28 July 2019




PhD candidate Michelle A. Lee from the Stanford AI Lab won the best paper award at ICRA 2019 with her work “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks”. You can read the paper on arXiv (full reference below).

Audrow Nash was there to capture her pitch.

And here’s the official video about the work.

Full reference
Lee, Michelle A., Yuke Zhu, Krishnan Srinivasan, Parth Shah, Silvio Savarese, Li Fei-Fei, Animesh Garg, and Jeannette Bohg. “Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich tasks.” arXiv preprint arXiv:1810.10191 (2018).




AIhub Editor is dedicated to free high-quality information about AI.










