AIhub.org

Interview with Eleni Vasilaki – talking bio-inspired machine learning


by Anil Ozdemir
25 February 2021



Eleni Vasilaki

Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques and assisting in designing brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence.

Could you tell us about your research area?

I am interested in bio-inspired machine learning. I enjoy theory and analysis of mathematically tractable systems, particularly when they are relevant for neuromorphic computation. One particular example is reservoir computing. It is loosely inspired by the brain, though above all it is an elegant model that uses randomness to simplify the problem to be solved. The reservoir itself is a collection of randomly connected neurons, but it can be replaced by any medium with sufficiently complex dynamics that can be used for computation, e.g. a magnetic material. I pursue this line of work jointly with colleagues in Materials Science and Engineering. The reservoir requires very simple learning methods, and I am currently investigating how to make these methods more effective.
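The idea can be made concrete with a minimal sketch of an echo state network, a standard formulation of reservoir computing. This is an illustrative example, not code from the interviewee's work: the reservoir weights are random and fixed, and only a linear readout is trained, which is the "very simple learning method" the approach requires. All sizes and the toy task (one-step-ahead sine prediction) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir: a fixed, randomly connected recurrent network.
N = 100                                            # reservoir size (arbitrary)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale spectral radius to 0.9
w_in = rng.uniform(-0.5, 0.5, N)                   # random input weights

def reservoir_states(u):
    """Drive the reservoir with a scalar input sequence; collect states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states

# Toy task: predict the next value of a sine wave.
signal = np.sin(np.arange(0, 60, 0.1))
X = reservoir_states(signal[:-1])
y = signal[1:]

washout = 50                                       # discard the initial transient
X, y = X[washout:], y[washout:]

# Only the readout is trained, here by ridge regression.
ridge = 1e-8
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

mse = np.mean((X @ w_out - y) ** 2)
```

Because the recurrent weights are never updated, the same scheme transfers to a physical reservoir (such as the magnetic materials mentioned above), where one cannot adjust the internal dynamics but can still fit a readout.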

How did you develop an interest in neurocomputing and machine learning?

I have been fascinated by how biological organisms learn, and to me, understanding means being able to capture the principles in mathematical models or physical systems; hence my interest in bio-inspired machine learning and neuromorphic computing. My relationship with artificial neural networks started in 1999, when I began using them in the context of computational neuroscience. Soon after, I discovered there was a community building brain-inspired hardware, with whose members I have been collaborating and co-authoring papers over the last five years.

Could you tell us what some of the implications of your research are?

My earlier work has shed light on the interpretation of connectivity motifs in the brain and on the mechanisms that shape them. My more recent work on biologically plausible reinforcement learning demonstrated, in the context of insect brains, that behaviours we perceive as complex may have very simple underlying principles. Ongoing work on reservoir computing with materials will hopefully lead to powerful learning methods for reservoirs and, in the more distant future, low-power hardware designs.

Could you explain the relationship between insects and machine learning?

Insects process information and learn based on data they collect via experience, and, in a way, so do machine learning methods. However, insects, unlike our best machine learning techniques, are capable of a broad variety of tasks that they seem to perform reasonably well with very little computational power and very few neurons! As an example, the mushroom body of a fruit fly has only about 2,500 neurons (out of roughly 100,000 in the whole brain). Reservoir computing, for instance, could be seen as a simplified model of the mushroom body.

Biological inspiration is certainly not the only way to improve machine learning, but the most popular method at the moment, i.e. deep learning, is vaguely inspired by biological neural networks. There may be a lesson to learn from studying insects in terms of efficiency.

Could you explain how “active learning” relates to artificial intelligence?

In machine learning, “active learning” refers to methods that query the user to label specific data, in order to guide the learning process. We might say that the algorithm knows what it wants to know. Similarly, active learning from the point of view of psychology refers to a learner who is not a passive receiver but interacts with the learning process. Insects do this as well: they interact with their environment and explore it. In realistic scenarios they are not simply passive learners, and therefore how insects learn could be an inspiration for modern AI.
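A common instance of the machine-learning sense of active learning is pool-based uncertainty sampling: the learner repeatedly queries the label of the point it is least sure about. The sketch below is an illustrative toy, not from the interview; the 1-D data, the simple logistic model, and the `oracle` function (standing in for the human who supplies labels) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pool of unlabelled 1-D points; the true label (x > 0) is hidden until
# the learner queries it -- the oracle stands in for a human annotator.
pool = rng.uniform(-1, 1, 200)
oracle = lambda x: (x > 0).astype(float)

w, b = 0.0, 0.0                         # logistic-regression parameters

def prob(x):
    """Model's probability that x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

labelled_x, labelled_y = [], []
for _ in range(10):                     # budget: 10 label queries
    # Uncertainty sampling: ask about the point closest to p = 0.5.
    i = np.argmin(np.abs(prob(pool) - 0.5))
    labelled_x.append(pool[i])
    labelled_y.append(oracle(pool[i]))
    pool = np.delete(pool, i)
    # Refit on the labelled set with a few gradient steps.
    xs, ys = np.array(labelled_x), np.array(labelled_y)
    for _ in range(200):
        g = prob(xs) - ys               # gradient of the logistic loss
        w -= 0.5 * np.mean(g * xs)
        b -= 0.5 * np.mean(g)

# Accuracy on the remaining (unqueried) pool.
acc = np.mean((prob(pool) > 0.5) == (oracle(pool) > 0.5))
```

The point of the example is the query rule, not the classifier: by choosing its own training data near the decision boundary, the learner needs far fewer labels than random sampling would, which echoes the exploratory behaviour of insects described above.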

Do you have any recommendations for those who are interested in learning more about bio-inspired artificial intelligence?

I believe that understanding artificial neural networks/deep learning provides a good technical background, but one should realise that these tools, though fascinating and powerful, do not correspond faithfully to how the brain works. Fundamental courses in computational neuroscience, and in particular on synaptic plasticity, are helpful. I would suggest the two books I used myself to get into the topic: Spiking Neuron Models (Gerstner and Kistler) and Theoretical Neuroscience (Dayan and Abbott).

References

On connectivity motifs

Clopath C, Buesing L, Vasilaki E and Gerstner W (2010), Connectivity reflects coding: a model of voltage-based spike-timing-dependent plasticity with homeostasis. Nature Neuroscience.
Vasilaki E and Giugliano M (2014), Emergence of connectivity motifs in networks of model neurons with short- and long-term plastic synapses. PLoS ONE.

On insect learning

Cope A, Vasilaki E, Minors D, Sabo C, Marshall JAR and Barron AB, (2018) Abstract concept learning in a simple neural network inspired by the insect brain. PLOS Computational Biology.

On reservoir computing

Manneschi L, Ellis MO, Gigante G, Lin AC, Del Giudice P and Vasilaki E (2020), Exploiting multiple timescales in hierarchical echo state networks. Frontiers in Applied Mathematics and Statistics.
Manneschi L, Lin AC and Vasilaki E (2021) SpaRCe: Improved Learning of Reservoir Computing Systems through Sparse Representations, arXiv.

See Eleni Vasilaki’s webpage here.




Anil Ozdemir is a postgraduate research associate in the Dept. of Computer Science at the University of Sheffield, UK.





©2026.02 - Association for the Understanding of Artificial Intelligence