
Interview with Eleni Vasilaki – talking bio-inspired machine learning


by Anil Ozdemir
25 February 2021




Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques and assisting in designing brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence.

Could you tell us about your research area?

I am interested in bio-inspired machine learning. I enjoy the theory and analysis of mathematically tractable systems, particularly when they are relevant to neuromorphic computation. One example is reservoir computing. It is loosely inspired by the brain, yet it is an elegant model that uses randomness to simplify the problem to be solved. The reservoir itself is a collection of randomly connected neurons, but it can be replaced by any medium with sufficiently complex dynamics that can be used for computation, e.g. a magnetic material. I pursue this line of work jointly with colleagues in Materials Science and Engineering. The reservoir requires only very simple learning methods, and I am currently investigating how to make these methods more effective.
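To make the idea concrete, here is a minimal echo state network sketch in Python with NumPy. The reservoir size, spectral radius, and the toy sine-prediction task are illustrative assumptions, not details from the interview; the key point it demonstrates is that the random reservoir stays fixed and only a linear readout is trained, which is the "very simple learning method" mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small random reservoir of 100 neurons, 1-D input.
n_in, n_res, washout = 1, 100, 50

# Fixed random input and recurrent weights; these are never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T + 1))[:, None]
X = run_reservoir(u[:-1])[washout:]   # drop initial transient states
y = u[1:][washout:, 0]                # target: the input one step ahead

# The only learning step: a linear readout fitted by ridge regression.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("train MSE:", np.mean((pred - y) ** 2))
```

Because only the readout is learned, the same training procedure applies whether the reservoir is a simulated network or a physical medium whose dynamics are merely observed.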

How did you develop an interest in neurocomputing and machine learning?

I have always been fascinated by how biological organisms learn, and to me, understanding means being able to capture the principles in mathematical models or physical systems; hence my interest in bio-inspired machine learning and neuromorphic computing. My relationship with artificial neural networks started in 1999, when I began using them in the context of computational neuroscience. Soon after, I discovered there was a community building brain-inspired hardware, with whose members I have been collaborating and co-authoring papers over the last five years.

Could you tell us what some of the implications of your research are?

My earlier work has shed light on the interpretation of connectivity motifs in the brain and on the mechanisms that shape them. My more recent work on biologically plausible reinforcement learning demonstrated, in the context of insect brains, that behaviours we perceive as complex may have very simple underlying principles. Ongoing work on reservoir computing with materials will hopefully lead to powerful learning methods for reservoirs and, in the more distant future, low-power hardware designs.

Could you explain the relationship between insects and machine learning?

Insects process information and learn based on data they collect via experience, and, in a way, so do machine learning methods. However, insects, unlike our best machine learning techniques, are capable of a broad variety of tasks that they perform reasonably well with very little computational power and very few neurons! As an example, the mushroom body of a fruit fly has only about 2,500 neurons (around 100,000 in the whole brain). Reservoir computing, for instance, could be seen as a simplified model of the mushroom body.

Biological inspiration is certainly not the only way to improve machine learning, but the most popular method at the moment, i.e. deep learning, is vaguely inspired by biological neural networks. There may be a lesson to learn from studying insects in terms of efficiency.

Could you explain how “active learning” relates to artificial intelligence?

In machine learning, “active learning” refers to methods that query the user to label specific data points in order to guide the learning process. We might say that the algorithm knows what it wants to know. Similarly, active learning from the point of view of psychology refers to a learner who is not a passive receiver but interacts with the learning process. Insects do this as well: they interact with their environment and explore it. In realistic scenarios they are not simply passive learners, and therefore how insects learn could be an inspiration for modern AI.
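The query-the-user loop described above can be sketched as uncertainty sampling, one common active learning strategy. Everything here is an illustrative assumption (a toy linearly separable dataset, a logistic model trained by gradient descent, an oracle standing in for the human labeller); the point is only the loop structure: fit, find the least certain point, ask for its label, repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pool of unlabelled 2-D points; the "user" is an oracle with known labels.
X_pool = rng.normal(0, 1, (200, 2))
true_w = np.array([2.0, -1.0])
y_pool = (X_pool @ true_w > 0).astype(float)

def fit(X, y, steps=500, lr=0.5):
    """Logistic regression via plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Start with 5 random labels, then let the algorithm choose what to ask next.
labelled = list(rng.choice(len(X_pool), 5, replace=False))
for _ in range(20):
    w = fit(X_pool[labelled], y_pool[labelled])
    p = 1 / (1 + np.exp(-X_pool @ w))
    uncertainty = -np.abs(p - 0.5)     # probability nearest 0.5 = least certain
    uncertainty[labelled] = -np.inf    # never re-query an already labelled point
    labelled.append(int(np.argmax(uncertainty)))  # query the oracle

w = fit(X_pool[labelled], y_pool[labelled])
acc = np.mean(((1 / (1 + np.exp(-X_pool @ w))) > 0.5) == y_pool)
print(f"pool accuracy after {len(labelled)} labels: {acc:.2f}")
```

Queried points cluster near the current decision boundary, which is exactly the "the algorithm knows what it wants to know" behaviour of the interview.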

Do you have any recommendations for those who are interested in learning more about bio-inspired artificial intelligence?

I believe that understanding artificial neural networks/deep learning provides a good technical background, but one should realise that these tools, though fascinating and powerful, do not correspond faithfully to how the brain works. Fundamental courses in computational neuroscience, and in particular synaptic plasticity, are helpful. I would suggest the two books I used myself to get into the topic: Spiking Neuron Models (Gerstner and Kistler) and Theoretical Neuroscience (Dayan and Abbott).

References

On connectivity motifs

Clopath C, Büsing L, Vasilaki E and Gerstner W (2010), Connectivity reflects coding: a model of voltage-based spike-timing-dependent plasticity with homeostasis. Nature Neuroscience.
Vasilaki E and Giugliano M (2014), Emergence of connectivity motifs in networks of model neurons with short- and long-term plastic synapses. PLoS ONE.

On insect learning

Cope A, Vasilaki E, Minors D, Sabo C, Marshall JAR and Barron AB, (2018) Abstract concept learning in a simple neural network inspired by the insect brain. PLOS Computational Biology.

On reservoir computing

Manneschi L, Ellis MO, Gigante G, Lin AC, Del Giudice P and Vasilaki E (2020), Exploiting multiple timescales in hierarchical echo state networks. Frontiers in Applied Mathematics and Statistics.
Manneschi L, Lin AC and Vasilaki E (2021) SpaRCe: Improved Learning of Reservoir Computing Systems through Sparse Representations, arXiv.

See Eleni Vasilaki’s webpage here.




Anil Ozdemir is a postgraduate research associate in the Dept. of Computer Science at the University of Sheffield, UK.
