AIhub.org

Summarising the keynotes at ICLR: part one


14 May 2020




The virtual International Conference on Learning Representations (ICLR) was held on 26-30 April and included eight keynote talks covering a wide range of topics. In this post we summarise the first four presentations. Courtesy of the conference organisers, you can watch the talks in full and see the question and answer sessions too.

AI + Africa = global innovation

Aisha Walcott-Bryant, IBM Research Africa, Nairobi

Africa has a population of over one billion people, over 3000 ethnic groups, and over 2000 different languages. This rich diversity offers an excellent opportunity to address complex research questions within the African continent. Research in Africa within the AI space can have global impact.

In her talk, Aisha focused on AI in global health. The key question for her and her research lab is: how can we use AI to transform health systems and improve global health? These two specific, important areas of research were covered in the presentation:

1) Decision support for disease intervention planning – case study: malaria in Uganda

In 2018 there were more than 400,000 deaths from malaria, with 90% of cases occurring in sub-Saharan Africa. It is important for policy makers to have as much information as possible at their fingertips when deciding on the best course of action for reducing malaria cases in their region. Aisha’s research delves into AI methods for creating intervention models with the aim of producing cost-effective action plans. In the case of malaria this involves determining the best strategy for deploying mosquito nets and indoor spraying of insecticides.

The most effective models are those where the plans can vary over time (multi-step intervention planning). Here, the researchers built a Monte Carlo search tree, running simulations and backpropagating the results to find the most cost-effective path through the space of possible interventions. Aisha stressed the importance of making the model explainable. Importantly, the approach is replicable and scalable across many different diseases (including COVID-19 and HIV/AIDS).
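To make the idea of multi-step planning with Monte Carlo tree search concrete, here is a toy sketch. Everything in it is an illustrative assumption rather than the model from the talk: the two actions ("nets" for bed nets, "spray" for indoor spraying), the fixed three-step horizon, and the made-up simulator that scores a plan as cases averted minus cost.

```python
import math
import random

ACTIONS = ["nets", "spray"]
HORIZON = 3  # number of planning steps (assumed for illustration)

def simulate(plan, rng):
    """Hypothetical simulator: reward = cases averted minus cost, with noise."""
    reward = 0.0
    for action in plan:
        averted = 10.0 if action == "nets" else 7.0
        cost = 4.0 if action == "nets" else 2.0
        reward += averted - cost + rng.gauss(0, 0.5)
    return reward

class Node:
    def __init__(self, plan=()):
        self.plan = plan      # actions chosen so far
        self.children = {}    # action -> Node
        self.visits = 0
        self.value = 0.0      # running mean reward

def ucb(parent, child, c=1.4):
    """Upper confidence bound used to balance exploration and exploitation."""
    if child.visits == 0:
        return float("inf")
    return child.value + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts(iterations=2000, seed=0):
    rng = random.Random(seed)
    root = Node()
    for _ in range(iterations):
        # 1) Selection/expansion: walk down to a complete plan at the horizon.
        node, path = root, [root]
        while len(node.plan) < HORIZON:
            for a in ACTIONS:
                node.children.setdefault(a, Node(node.plan + (a,)))
            a = max(ACTIONS, key=lambda a: ucb(node, node.children[a]))
            node = node.children[a]
            path.append(node)
        # 2) Simulation: roll out the completed plan.
        reward = simulate(node.plan, rng)
        # 3) Backpropagation: update value estimates along the path.
        for n in path:
            n.visits += 1
            n.value += (reward - n.value) / n.visits
    # Extract the most-visited action at each step as the recommended plan.
    plan, node = [], root
    while node.children:
        a = max(node.children, key=lambda a: node.children[a].visits)
        plan.append(a)
        node = node.children[a]
    return plan

print(mcts())
```

The three phases (selection with an upper confidence bound, simulation, backpropagation of results) are the standard MCTS loop the summary refers to; a real intervention model would replace the toy simulator with an epidemiological one.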

2) Characterising sub-populations to understand health outcomes

The talk covered two areas of study: a) family planning – investigating contraceptive use, b) maternal and newborn child health. Here, machine learning methods are used to find patterns in data as a function of sub-populations. Armed with this information, decision makers can make informed, targeted interventions.

You can view the talk and the Q&A session here.

Doing for our robots what nature did for us

Leslie Kaelbling, MIT

In her keynote, Leslie spoke about machine learning and robots and, in particular, the priors that are built into these systems. By way of analogy she suggested thinking about the priors that nature built into us. Her overall research goal is to understand the computational mechanisms necessary to produce a general-purpose intelligent robot.

When considering the best programme to use for a particular robot, one needs to keep in mind that there is huge variability in the tasks and environments the robot might encounter. Given some particular hardware, what is the best possible programme for that robot? Some robots are designed to do a very specific task, and they would need a very different programme from a robot designed to carry out a variety of tasks in different environments.

Leslie thinks about this problem in terms of an expected value over the possible domains that the robot might find itself in. Those domains include the dynamics of its environment, the horizon (how long it will operate for) and its reward for completing a particular task. Researchers work to find a programme that works best over this distribution of domains. There will be an optimal policy for each robot; however, finding that optimal policy is a very hard task indeed. One needs to pick a model class and methodology that work best whilst minimising the total construction and training cost. Leslie spoke about her work in this field and her methodology for producing programmes for robots.
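The framing above can be sketched in a few lines. This is my own toy formalisation, not code from the talk: the policies, domains and reward function are invented, and the point is simply that the "best" programme is the one maximising average reward over the domain distribution, not the one that is best in any single domain.

```python
# Choose the policy that maximises expected reward over a distribution
# of domains (here, domains differ only in horizon length — an assumption
# for illustration).

def expected_reward(policy, domains, reward):
    """Monte Carlo-style average of reward over sampled domains."""
    return sum(reward(policy, d) for d in domains) / len(domains)

def best_policy(policies, domains, reward):
    """Pick the policy with the highest expected reward."""
    return max(policies, key=lambda p: expected_reward(p, domains, reward))

def reward(policy, domain):
    # Toy trade-off: "greedy" excels on short horizons but degrades on
    # long ones; "cautious" is steady everywhere.
    horizon = domain["horizon"]
    if policy == "greedy":
        return 10 if horizon <= 2 else 3
    return 6

domains = [{"horizon": h} for h in (1, 2, 5, 8, 10)]
print(best_policy(["greedy", "cautious"], domains, reward))  # prints "cautious"
```

Here "greedy" averages 5.8 over the five domains while "cautious" averages 6.0, so the policy that is best in expectation is not the one that is best on any short-horizon domain.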

There were some interesting videos of various robots in action during the presentation. In one, the robot is shown performing various Meta-World tasks. (Meta-World is an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks. These include free-space motion, picking, moving, placing, touching and pushing objects.) Using a multi-modal motion planning method the research team were able to successfully perform 43 of the 50 tasks.

You can view the talk and the Q&A session here.

2020 vision: reimagining the default settings of technology & society

Ruha Benjamin, Princeton University

Ruha began her talk with a quote from Martin Luther King: “We have guided missiles and misguided men”, making the point that we invest heavily in technology, but too often neglect the underlying social and moral compass that should underpin these technologies.

She noted that there seem to be two narratives predominant in society with regards to technology: one, that technology is going to save us, and the other that technology is going to slay us. Although these seem like contrasting views, they actually share the same logic: that technology is in control and that humans are just affected by it. Obviously, in reality, it is humans who create and shape these technologies and it is our responsibility to consider more than data sets when deploying systems: “computational depth without historic or sociological depth is superficial learning.” We should be creating technology that is humane, fair, just and equitable. Researchers should also be aware of the bigger picture: they may only be working on a narrow part of a system but should still question what the whole system will be used for.

Ruha gave some specific examples of current technologies where the design (either purposefully or due to lack of social and historical consideration) has led to systems that discriminate against particular groups of people (be that racial groups, poor communities or other vulnerable groups). One such example concerns healthcare in the US, where commercial algorithms are deployed to guide health decisions. Researchers found evidence of racial bias in one widely used algorithm, such that black patients assigned the same level of risk by the algorithm are sicker than white patients. The bias occurs because the algorithm uses health costs as a proxy for health needs. As less money is spent on black patients who have the same level of need, the algorithm thus falsely concludes that black patients are healthier than equally sick white patients. You can read Ruha’s review of the research here.

To conclude her talk Ruha offered this final proposition: “If inequity is woven into the very fabric of society then each twist, coil, and code is a chance for us to weave new patterns, practices and politics. The vastness of the problem will be its undoing once we accept that we are pattern makers. An ahistoric and asocial approach to deep learning can capture and contain, can harm people. An historically and sociologically grounded approach can open up possibilities. It can create new settings. It can encode new values and build on critical intellectual traditions that have continually developed insights and strategies grounded in justice. My hope is we all find ways to build on that tradition”.

You can view the talk and the Q&A session here.

Listen to Ruha speak more in a recent episode of the Radical AI podcast:
Love, Challenge, and Hope: Building a Movement to Dismantle the New Jim Code with Ruha Benjamin

Invertible models and normalizing flows

Laurent Dinh, Google AI

Laurent presented a personal retrospective on the topic of invertible models and normalizing flows, based on both his work and the work of the community in general. He started by recommending a number of tutorials on the topic, which provide the background to his talk:
Flow-based deep generative models, by Lilian Weng
Normalizing flows tutorial, by Eric Jang
Normalizing Flows: An introduction and review of current methods, by Ivan Kobyzev, Simon J.D. Prince, Marcus A. Brubaker.
Normalizing Flows for probabilistic modeling and inference, by George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan.
What are normalizing flows?, video by Ari Seff.
Deep unsupervised learning course from UC Berkeley.

Laurent took us on a brief tour of Boltzmann machines, autoregressive models and the generative network paradigm. One of the key research questions for him was “is there a tractable maximum likelihood approach to train generator networks?” That question motivated him to start a journey into investigating invertible neural networks. You can read his latest work in this area here.
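The tractability Laurent refers to comes from the change-of-variables formula: if z = f(x) with f invertible, then log p(x) = log p(z) + log|det ∂z/∂x|, and the model can be trained by exact maximum likelihood. Below is a minimal sketch of an affine coupling layer in the spirit of NICE/Real NVP; the "networks" s and t are tiny fixed functions standing in for learned networks, and the dimensions and inputs are illustrative assumptions.

```python
import numpy as np

def s(x1):
    """Scale function (stand-in for a learned neural network)."""
    return np.tanh(x1)

def t(x1):
    """Translation function (stand-in for a learned neural network)."""
    return 0.5 * x1

def forward(x):
    """Split x in two; transform the second half conditioned on the first.
    The Jacobian is triangular, so its log-determinant is just the sum
    of the log-scales — cheap to compute."""
    x1, x2 = np.split(x, 2)
    z1 = x1
    z2 = x2 * np.exp(s(x1)) + t(x1)
    log_det = np.sum(s(x1))
    return np.concatenate([z1, z2]), log_det

def inverse(z):
    """Exact inverse: the untouched half lets us undo the transformation."""
    z1, z2 = np.split(z, 2)
    x1 = z1
    x2 = (z2 - t(z1)) * np.exp(-s(z1))
    return np.concatenate([x1, x2])

def log_likelihood(x):
    """Exact log p(x) under a standard normal base density."""
    z, log_det = forward(x)
    log_pz = -0.5 * np.sum(z ** 2) - 0.5 * len(z) * np.log(2 * np.pi)
    return log_pz + log_det

x = np.array([0.3, -1.2, 0.7, 2.0])
z, _ = forward(x)
assert np.allclose(inverse(z), x)  # invertibility holds exactly
print(log_likelihood(x))
```

Stacking many such layers (permuting which half is transformed each time) gives an expressive yet exactly invertible generator network, which is what makes the maximum likelihood objective tractable.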

You can view the talk and the Q&A session here.




Lucy Smith is Senior Managing Editor for AIhub.




AIhub is supported by:



©2024 - Association for the Understanding of Artificial Intelligence