
#NeurIPS2020 invited talks round-up: part one


by Lucy Smith
11 December 2020




There were seven interesting and varied invited talks at NeurIPS this year. Here, we summarise the first three, which were given by Charles Isbell (Georgia Tech), Jeff Shamma (King Abdullah University of Science and Technology) and Shafi Goldwasser (UC Berkeley, MIT and Weizmann Institute of Science).

Charles Isbell: You can’t escape hyperparameters and latent variables: machine learning as a software engineering enterprise

The invited talks kicked off in style with a presentation from Charles Isbell. He had posted a teaser on Twitter indicating that he was trying something new with the format, and it certainly did not disappoint. The talk received rave reviews, both in the live chat channel and afterwards on social media.

Machine learning has reached the point where it is pervasive in our lives and, like other successful technological fields, must take responsibility for avoiding the harms associated with what it is producing.

Charles’ Christmas Carol-themed talk made full use of the virtual format and saw him visit Michael Littman, who gave a consummate performance in the role of Scrooge (a blinkered machine learning theorist), to discuss algorithmic bias and how researchers can take steps to identify and combat this bias in their work. The ghosts of machine learning’s past, present and future came in the form of interviews with many researchers working across the discipline. Their insights included approaches that might help us to develop more robust machine-learning systems.

The summary slide from Charles Isbell’s talk

Watch the talk here.


Jeff Shamma: Feedback control perspectives on learning

Jeff started with a definition of feedback control: real-time decision making in dynamic and uncertain environments. Feedback systems form a never-ending loop with three steps: sensing what is happening, deciding what to do, and acting. Feedback control has been applied in areas including manufacturing, energy, biomedicine, transportation, logistics, and communication. Because feedback control is an enabling technology, it perhaps does not receive the attention that such ubiquity warrants.

Feedback control is, of course, present in learning systems. For example, in reinforcement learning, an agent is in feedback with its environment.
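The sense, decide, act loop can be sketched as a toy proportional controller. This is a minimal illustration only; the thermostat scenario, the gain, and the setpoint are my own choices for the example, not taken from the talk:

```python
# Minimal sketch of the sense-decide-act feedback loop: a toy
# thermostat driven by a proportional controller. All names and
# values here are illustrative, not from Jeff Shamma's talk.

def run_feedback_loop(setpoint, initial_temp, gain=0.5, steps=20):
    """Repeatedly sense the error and act in proportion to it."""
    temp = initial_temp
    history = []
    for _ in range(steps):
        error = setpoint - temp   # sense: measure deviation from target
        action = gain * error     # decide: proportional response
        temp += action            # act: apply heating (or cooling)
        history.append(temp)
    return history

history = run_feedback_loop(setpoint=21.0, initial_temp=15.0)
# the temperature converges toward the 21.0 setpoint
```

With a gain between 0 and 1, the error shrinks geometrically each cycle, which is the stabilising behaviour Jeff describes as the first benefit of feedback.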

During his talk, Jeff presented three benefits of feedback: 1) it can be used to stabilise and shape behaviour, 2) it can provide robustness to variation, 3) it enables tracking of command signals. These concepts were related to specific research problems in evolutionary game theory, no-regret learning, and multi-agent learning.

Concluding slide from Jeff Shamma’s talk

Watch the talk here.


Shafi Goldwasser: Robustness, verification, privacy: addressing machine learning adversaries

To start her talk, Shafi noted that her background is in cryptography. This field has had a big impact on the world, from electronic commerce to cryptocurrencies, from cloud computing to quantum computing. Experience in this area has allowed her to approach machine learning problems from a cryptographic standpoint.

Shafi’s talk focussed on three recent works, covering the topics of privacy, verification and robustness. In each case she detailed cryptography-inspired models, with an important aim in each instance being to address the challenges presented by adversaries.

The three works presented in the talk were based on these topics:
1) Verifiability of machine learning models
2) Privacy during training: from prototype to large scale data
3) Robustness – accurate predictions when test distribution deviates from training distribution

The three studies presented by Shafi Goldwasser in her talk

Watch the talk here.






Lucy Smith is Senior Managing Editor for AIhub.








©2024 - Association for the Understanding of Artificial Intelligence