#NeurIPS2020 invited talks round-up: part three – causal learning and the genomic bottleneck


by Lucy Smith
26 March 2021




In this post we conclude our summaries of the NeurIPS invited talks from the 2020 meeting. In this final instalment, we cover the talks by Marloes Maathuis (ETH Zurich) and Anthony M Zador (Cold Spring Harbor Laboratory).

Marloes Maathuis: Causal learning

Marloes began her talk on causal learning with a simple example of the phenomenon known as Simpson’s paradox, in which a trend appears in several different groups of data but disappears or reverses when these groups are combined. She also talked about the importance of considering causality when making decisions based on such data.

Slide from the introductory part of Marloes' talk, where she discussed Simpson's paradox and causality.
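
To make the paradox concrete, here is a small Python sketch with made-up recovery counts (not data from the talk): a treatment looks better within each severity group, yet worse once the groups are pooled, because the treated patients are concentrated in the harder, severe group.

```python
# Hypothetical recovery counts: (recovered, total) for treated and untreated patients,
# split by severity of the condition.
groups = {
    "mild":   {"treated": (81, 87),   "untreated": (234, 270)},
    "severe": {"treated": (192, 263), "untreated": (55, 80)},
}

pooled = {"treated": [0, 0], "untreated": [0, 0]}
for name, arms in groups.items():
    rates = {}
    for arm, (recovered, total) in arms.items():
        rates[arm] = recovered / total
        pooled[arm][0] += recovered
        pooled[arm][1] += total
    print(f"{name:>6}: treated {rates['treated']:.0%} vs untreated {rates['untreated']:.0%}")

# Pooling the groups reverses the comparison (Simpson's paradox).
print(f"pooled: treated {pooled['treated'][0] / pooled['treated'][1]:.0%} "
      f"vs untreated {pooled['untreated'][0] / pooled['untreated'][1]:.0%}")
```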

Marloes went on to explain the difference between causal and non-causal questions. Non-causal questions are about predictions within the same system, for example, predicting the cancer rate among smokers. Causal questions, on the other hand, are about the mechanism behind the data, or about predictions after an intervention on the system. For example, asking whether smoking causes lung cancer, or predicting the spread of a virus epidemic after imposing new regulations.

Causal questions are ideally answered by randomised controlled experiments. However, sometimes it is not possible to carry out these experiments, so we need to estimate causal effects from observational data. Marloes described her research into methodology for this problem, which uses causal directed acyclic graphs (DAGs) to estimate such causal effects.
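
As a rough illustration of why the DAG matters (a minimal sketch assuming a simple linear model with a single simulated confounder, not the estimators from the talk), adjusting for the variable that the DAG identifies as a confounder removes the bias of a naive regression:

```python
import numpy as np

# Assumed toy DAG:  Z -> X,  Z -> Y,  X -> Y.  Adjusting for the confounder Z
# (a back-door adjustment) identifies the causal effect of X on Y.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                      # confounder
x = 0.8 * z + rng.normal(size=n)            # "treatment", influenced by z
y = 2.0 * x + 1.5 * z + rng.normal(size=n)  # outcome; true causal effect of x is 2.0

# Naive estimate: regress y on x only (biased, because the back-door path via z is open)
naive = np.linalg.lstsq(np.c_[x, np.ones(n)], y, rcond=None)[0][0]

# Adjusted estimate: regress y on x and z (blocks the back-door path)
adjusted = np.linalg.lstsq(np.c_[x, z, np.ones(n)], y, rcond=None)[0][0]

print(f"naive estimate: {naive:.2f}, adjusted estimate: {adjusted:.2f}, true effect: 2.00")
```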

In the final part of her presentation, Marloes explained the methodology used when the causal graph is unknown. One approach is to hypothesize plausible DAGs. Another is to learn the DAG from the data.
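
The second approach rests on the fact that conditional independencies in the data constrain which DAGs could have generated it; constraint-based methods such as the PC algorithm exploit this systematically. The toy sketch below (simulated data, illustrative numbers only) shows the kind of test involved:

```python
import numpy as np

# Simulate a chain  X -> Y -> Z.  X and Z are dependent, but independent given Y,
# and this pattern is the kind of constraint a structure-learning algorithm uses.
rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(size=n)
y = 0.9 * x + rng.normal(size=n)
z = 0.9 * y + rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, given):
    # Correlation of the residuals after regressing out the conditioning variable.
    res_a = a - np.polyval(np.polyfit(given, a, 1), given)
    res_b = b - np.polyval(np.polyfit(given, b, 1), given)
    return corr(res_a, res_b)

print(f"corr(X, Z)     = {corr(x, z):.2f}   (non-zero: X and Z are dependent)")
print(f"corr(X, Z | Y) = {partial_corr(x, z, y):.2f}   (near zero: independent given Y)")
```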

To find out more, you can watch the talk in full here.


Anthony M Zador: The genomic bottleneck: a lesson from biology

Anthony spoke about the innate abilities that animals have and argued that most animal behaviour is not the result of clever learning algorithms, but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Examples of innate ability include birds building species-specific nests and beavers building dams. Having these abilities innately, rather than having to learn them from scratch, provides an evolutionary advantage.

Slide from Anthony's talk – innate structure provides an evolutionary advantage.

In the talk, Anthony outlined the number of parameters it takes to wire up the brains of different creatures. The simplest animal studied, the worm C. elegans, has just 302 neurons connected by about 7000 synapses. Its genome consists of about 200 million bits (each nucleotide encodes two bits), which is easily enough to specify the precise wiring of all 7000 synapses.

Compare this to a human brain: we have roughly 10^11 neurons and 10^14 synapses. It is estimated that specifying a human brain's wiring would take about 10^15 bits, yet our genome contains only around 10^9 bits. Anthony explained this missing factor of 10^6: the genome doesn't specify every single synapse; rather, it specifies rules for wiring up the brain.
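
A back-of-envelope restatement of that arithmetic (the ten-bits-per-synapse allowance is an illustrative assumption, not a figure from the talk):

```python
import math

synapses         = 1e14  # ~10^14 synapses in a human brain
bits_per_synapse = 10    # rough, assumed cost of specifying one connection
wiring_bits      = synapses * bits_per_synapse  # ~10^15 bits for the full wiring diagram
genome_bits      = 1e9   # ~10^9 bits in the human genome

print(f"wiring diagram: ~10^{math.log10(wiring_bits):.0f} bits")
print(f"genome:         ~10^{math.log10(genome_bits):.0f} bits")
print(f"shortfall:      ~10^{math.log10(wiring_bits / genome_bits):.0f}")
```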

This led on to a discussion of the idea that the wiring diagram needs to be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward AI architectures capable of rapid learning, and in the final part of his talk Anthony outlined some of the research that he is carrying out in this area.
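
As a loose analogy (a sketch, not Anthony's actual model), the snippet below squeezes a large but highly structured "wiring diagram" through a small bottleneck and checks how much of the structure survives the compression:

```python
import numpy as np

rng = np.random.default_rng(2)
# A 1000x1000 "wiring diagram" that is mostly low-rank structure plus a little noise.
structure = rng.normal(size=(1000, 5)) @ rng.normal(size=(5, 1000))
wiring = structure + 0.1 * rng.normal(size=(1000, 1000))

# Compress through a rank-5 "bottleneck" using the SVD and reconstruct.
u, s, vt = np.linalg.svd(wiring, full_matrices=False)
k = 5
reconstructed = (u[:, :k] * s[:k]) @ vt[:k]

full_params = wiring.size
bottleneck_params = k * (u.shape[0] + vt.shape[1] + 1)
rel_error = np.linalg.norm(reconstructed - structure) / np.linalg.norm(structure)
print(f"parameters: {full_params} -> {bottleneck_params} "
      f"(~{full_params / bottleneck_params:.0f}x smaller), relative error {rel_error:.1%}")
```

Because the underlying structure is low-dimensional, very little is lost despite the roughly 100-fold reduction in parameters.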

Watch the talk here.






Lucy Smith is Senior Managing Editor for AIhub.



