 

Interview with Eleni Vasilaki – talking bio-inspired machine learning

by Anil Ozdemir
25 February 2021



Eleni Vasilaki

Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques and assisting in designing brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence.

Could you tell us about your research area?

I am interested in bio-inspired machine learning. I enjoy theory and analysis of mathematically tractable systems, particularly when they are relevant to neuromorphic computation. One particular example is reservoir computing. It is loosely inspired by the brain, yet it is an elegant model that uses randomness to simplify the problem to be solved. The reservoir itself is a collection of randomly connected neurons, but it can be replaced by any medium with sufficiently complex dynamics that can be used for computation, e.g., a magnetic material. I pursue this line of work jointly with colleagues in Materials Science and Engineering. The reservoir requires very simple learning methods, and I am currently investigating how to make these methods more effective.
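To make the idea concrete, below is a minimal sketch of an echo state network, one common form of reservoir computing: the input and recurrent weights are random and fixed, and only a linear readout is trained. The network sizes, hyperparameters and toy task are illustrative assumptions and are not taken from the interview or the papers cited below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions for this sketch).
n_inputs, n_reservoir = 1, 200
leak_rate, spectral_radius = 0.3, 0.9

# Fixed random input and recurrent weights: the reservoir itself is never trained.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.normal(0, 1, (n_reservoir, n_reservoir))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        pre = W_in @ np.atleast_1d(u) + W @ x
        x = (1 - leak_rate) * x + leak_rate * np.tanh(pre)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)

# Only the linear readout is learned, here with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out
print("training MSE:", np.mean((pred - y) ** 2))
```

The key point the sketch illustrates is that all the learning happens in the readout, which is why very simple learning methods suffice.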

How did you develop an interest in neurocomputing and machine learning?

I have been fascinated by how biological organisms learn, and to me, understanding means that you can capture the principles in mathematical models or physical systems, hence my interest in bio-inspired machine learning and neuromorphic computing. My relationship with artificial neural networks started in 1999, when I began using them in the context of computational neuroscience. Soon after, I discovered there was a community building brain-inspired hardware, with members of which I have been collaborating and co-authoring papers over the last five years.

Could you tell us what some of the implications of your research are?

My earlier work has shed light on the interpretation of connectivity motifs in the brain and on the mechanisms that shape them. My more recent work on biologically plausible reinforcement learning demonstrated, in the context of insect brains, that behaviours we perceive as complex may have very simple underlying principles. Ongoing work on reservoir computing with materials will hopefully lead to powerful learning methods for reservoirs and, in the more distant future, to low-power hardware designs.

Could you explain the relationship between insects and machine learning?

Insects process information and learn based on data they collect via experience, and, in a way, so do machine learning methods. However, insects, unlike our best machine learning techniques, are capable of a broad variety of tasks that they seem to perform reasonably well with very little computational power and very few neurons! As an example, the mushroom body of a fruit fly has only about 2,500 neurons (around 100,000 in the whole brain). Reservoir computing, for instance, could be seen as a simplified model of the mushroom body.

Biological inspiration is certainly not the only way to improve machine learning, but the most popular method at the moment, i.e. deep learning, is vaguely inspired by biological neural networks. There may be a lesson to learn from studying insects in terms of efficiency.
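As a rough illustration of the analogy between the mushroom body and a reservoir, here is a schematic sketch of a fixed random expansion onto many Kenyon-cell-like units with sparse activity, followed by a single trained linear readout. The wiring, sparsity level and toy task are assumptions made for illustration only; this is not the model from the papers listed in the references.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic dimensions, loosely echoing the expansion from a small input
# layer onto many Kenyon cells (values are illustrative assumptions).
n_inputs, n_kenyon = 50, 2500
sparsity = 0.05  # fraction of Kenyon-cell-like units kept active per stimulus
W_exp = (rng.random((n_kenyon, n_inputs)) < 0.1).astype(float)  # fixed random wiring

def expand(stimulus):
    """Random expansion followed by a winner-take-most sparsification."""
    h = W_exp @ stimulus
    k = int(sparsity * n_kenyon)
    thresh = np.partition(h, -k)[-k]  # keep roughly the k most active units
    return (h >= thresh).astype(float)

# Toy task: separate two classes of random stimuli with a single linear
# readout trained by a simple error-driven (delta) rule.
X = rng.random((200, n_inputs))
y = (X[:, :25].sum(axis=1) > X[:, 25:].sum(axis=1)).astype(float)

w_out, lr = np.zeros(n_kenyon), 0.05
for _ in range(20):
    for s, target in zip(X, y):
        code = expand(s)
        pred = float(w_out @ code > 0)
        w_out += lr * (target - pred) * code  # only the readout weights change

acc = np.mean([float(w_out @ expand(s) > 0) == t for s, t in zip(X, y)])
print("training accuracy:", acc)
```

As in the reservoir sketch above, the expansion stage is fixed and random; learning is confined to a cheap linear readout, which is part of what makes this style of computation so economical.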

Could you explain how “active learning” relates to artificial intelligence?

In machine learning, “active learning” refers to methods that query the user to label specific data points in order to guide the learning process. We might say that the algorithm knows what it wants to know. Similarly, from the point of view of psychology, active learning refers to a learner who is not a passive receiver but interacts with the learning process. Insects do that as well: they interact with their environment and explore it. In realistic scenarios they are not simply passive learners, and therefore how insects learn could be an inspiration for modern AI.
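A minimal sketch of this querying idea is shown below, using uncertainty sampling: at each round the model asks for the label of the pool point it is least sure about. The dataset, model and query budget are illustrative assumptions; in a real setting the queried labels would come from a human annotator.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Pool of data whose labels are hidden until queried (toy, synthetic example).
X_pool, y_pool = make_classification(n_samples=500, n_features=10, random_state=2)

# Small labelled seed set containing a few examples of each class.
labelled = list(np.where(y_pool == 0)[0][:5]) + list(np.where(y_pool == 1)[0][:5])
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

model = LogisticRegression(max_iter=1000)

for _ in range(30):  # 30 query rounds
    model.fit(X_pool[labelled], y_pool[labelled])

    # Uncertainty sampling: query the point whose predicted probability
    # is closest to 0.5, i.e. the one the model is least sure about.
    probs = model.predict_proba(X_pool[unlabelled])[:, 1]
    query = unlabelled[int(np.argmin(np.abs(probs - 0.5)))]

    labelled.append(query)      # the "oracle" reveals y_pool[query]
    unlabelled.remove(query)

print("labelled examples used:", len(labelled))
print("accuracy on the remaining pool:", model.score(X_pool[unlabelled], y_pool[unlabelled]))
```

The point of the sketch is only that the algorithm, not the annotator, decides which data to label next, which is the sense of “active” used above.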

Do you have any recommendations for those who are interested in learning more about bio-inspired artificial intelligence?

I believe that understanding artificial neural networks/deep learning provides a good technical background, but one should realise that these tools, though fascinating and powerful, do not correspond faithfully to how the brain works. Fundamental courses in computational neuroscience, and in particular on synaptic plasticity, are helpful. I would suggest the two books I used myself to get into the topic: Spiking Neuron Models (Gerstner and Kistler) and Theoretical Neuroscience (Dayan and Abbott).

References

On connectivity motifs

Clopath C, Buesing L, Vasilaki E and Gerstner W (2010), Connectivity Reflects Coding: A Model of Voltage-based Spike-Timing-Dependent Plasticity with Homeostasis. Nature Neuroscience.
Vasilaki E and Giugliano M (2014), Emergence of Connectivity Motifs in Networks of Model Neurons with Short- and Long-term Plastic Synapses. PLoS ONE.

On insect learning

Cope A, Vasilaki E, Minors D, Sabo C, Marshall JAR and Barron AB (2018), Abstract Concept Learning in a Simple Neural Network Inspired by the Insect Brain. PLOS Computational Biology.

On reservoir computing

Manneschi L, Ellis MO, Gigante G, Lin AC, Del Giudice P and Vasilaki E (2020), Exploiting Multiple Timescales in Hierarchical Echo State Networks. Frontiers in Applied Mathematics and Statistics.
Manneschi L, Lin AC and Vasilaki E (2021), SpaRCe: Improved Learning of Reservoir Computing Systems through Sparse Representations. arXiv.

See Eleni Vasilaki’s webpage here.




Anil Ozdemir is a postgraduate research associate in the Dept. of Computer Science at the University of Sheffield, UK.



