Interview with Eleni Vasilaki – talking bio-inspired machine learning

by Anil Ozdemir
25 February 2021



Eleni Vasilaki

Eleni Vasilaki is Professor of Computational Neuroscience and Neural Engineering and Head of the Machine Learning Group in the Department of Computer Science, University of Sheffield. Eleni has extensive cross-disciplinary experience in understanding how brains learn, developing novel machine learning techniques and assisting in designing brain-like computation devices. In this interview, we talk about bio-inspired machine learning and artificial intelligence.

Could you tell us about your research area?

I am interested in bio-inspired machine learning. I enjoy theory and analysis of mathematically tractable systems, particularly when they are relevant for neuromorphic computation. One particular example is reservoir computing. It is loosely inspired by the brain, yet it is an elegant model that uses randomness to simplify the problem to be solved. The reservoir per se is a collection of randomly connected neurons, but it can be replaced by any medium with sufficiently complex dynamics that can be used for computation, e.g., a magnetic material. I pursue this line of work jointly with colleagues in Materials Science and Engineering. The reservoir requires very simple learning methods, and I am currently investigating how to make these methods more effective.
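As a rough illustration of the idea (a minimal sketch of my own, not code from the interview), the Python/NumPy example below implements an echo state network, a common form of reservoir computing: a fixed, randomly connected reservoir is driven by the input, and only a linear readout is trained, here with ridge regression, which is the kind of very simple learning method mentioned above. The toy task and all parameter choices are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Fixed random input and recurrent weights; the recurrent matrix is scaled so its
# spectral radius is below 1, a common heuristic for stable reservoir dynamics.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    # Collect reservoir states for an input sequence u of shape (T, n_in).
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)  # fixed, untrained recurrent dynamics
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 8 * np.pi, 1000)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]

# Only the linear readout is trained, here with ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))

In a physical reservoir, the tanh recurrence would be replaced by the dynamics of the medium itself (for example, a magnetic material), while the trained readout stays equally simple.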

How did you develop an interest in neurocomputing and machine learning?

I have long been fascinated by how biological organisms learn, and to me, understanding means that you can capture the principles in mathematical models or physical systems; hence my interest in bio-inspired machine learning and neuromorphic computing. My relationship with artificial neural networks began in 1999, when I started using them in the context of computational neuroscience. Soon after, I discovered there was a community building brain-inspired hardware, with members of which I have been collaborating and co-authoring papers over the last five years.

Could you tell us what some of the implications of your research are?

My earlier work has shed light on the interpretation of connectivity motifs in the brain and on the mechanisms that shape them. My more recent work on biologically plausible reinforcement learning demonstrated, in the context of insect brains, that behaviours we perceive as complex may have very simple underlying principles. Ongoing work on reservoir computing with materials will hopefully lead to powerful learning methods for reservoirs and, in the more distant future, to low-power hardware designs.

Could you explain the relationship between insects and machine learning?

Insects process information and learn from data they collect through experience, and, in a way, so do machine learning methods. However, insects, unlike our best machine learning techniques, are capable of a broad variety of tasks that they seem to perform reasonably well with very little computational power and very few neurons! As an example, the mushroom body of a fruit fly has only about 2,500 neurons (roughly 100,000 in the whole brain). Reservoir computing, for instance, could be seen as a simplified model of the mushroom body.

Biological inspiration is certainly not the only way to improve machine learning, but the most popular method at the moment, i.e. deep learning, is vaguely inspired by biological neural networks. There may be a lesson to learn from studying insects in terms of efficiency.

Could you explain how “active learning” relates to artificial intelligence?

In machine learning, “active learning” refers to methods that query the user to label specific data points in order to guide the learning process. We might say that the algorithm knows what it wants to know. Similarly, active learning from the point of view of psychology refers to a learner who is not a passive receiver but interacts with the learning process. Insects do this as well: they interact with their environment and explore it. In realistic scenarios they are not simply passive learners, and therefore how insects learn could be an inspiration for modern AI.
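To make the machine learning sense concrete, here is a minimal uncertainty-sampling sketch of active learning (an illustrative assumption of mine, not a method from the interview): the model repeatedly asks an "oracle" for the label of the unlabelled point it is least certain about, so the algorithm decides what it wants to know. The toy data, the logistic regression model, and the query rule are all assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 2D data; the hidden labelling rule plays the role of the human annotator (the "oracle").
X = rng.normal(size=(500, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

# Start with one labelled example per class; everything else is unlabelled.
labelled = [int(np.argmax(y_true)), int(np.argmin(y_true))]
unlabelled = [i for i in range(len(X)) if i not in labelled]

model = LogisticRegression()
for _ in range(20):
    model.fit(X[labelled], y_true[labelled])
    # Query the unlabelled point the model is least certain about
    # (predicted probability closest to 0.5) and ask the oracle for its label.
    probs = model.predict_proba(X[unlabelled])[:, 1]
    query = unlabelled[int(np.argmin(np.abs(probs - 0.5)))]
    labelled.append(query)
    unlabelled.remove(query)

print("accuracy after 22 labels:", model.score(X, y_true))

The design choice here is the query strategy: rather than labelling points at random, the learner selects the most informative ones, which is loosely analogous to an insect actively exploring its environment.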

Do you have any recommendations for those who are interested in learning more about bio-inspired artificial intelligence?

I believe that understanding artificial neural networks/deep learning provides a good technical background, but one should realise that these tools, though fascinating and powerful, do not correspond faithfully to how the brain works. Fundamental courses in computational neuroscience, and in particular synaptic plasticity, are helpful. I would suggest the two books I used myself to get into the topic: Spiking Neuron Models (Gerstner and Kistler) and Theoretical Neuroscience (Dayan and Abbott).

References

On connectivity motifs

Clopath C, Buesing L, Vasilaki E and Gerstner W (2010), Connectivity reflects coding: a model of voltage-based spike-timing-dependent plasticity with homeostasis. Nature Neuroscience.
Vasilaki E and Giugliano M (2014), Emergence of connectivity motifs in networks of model neurons with short- and long-term plastic synapses. PLoS ONE.

On insect learning

Cope A, Vasilaki E, Minors D, Sabo C, Marshall JAR and Barron AB (2018), Abstract concept learning in a simple neural network inspired by the insect brain. PLOS Computational Biology.

On reservoir computing

Manneschi L, Ellis MO, Gigante G, Lin AC, Del Giudice P and Vasilaki E (2020), Exploiting multiple timescales in hierarchical echo state networks. Frontiers in Applied Mathematics and Statistics.
Manneschi L, Lin AC and Vasilaki E (2021), SpaRCe: Improved learning of reservoir computing systems through sparse representations. arXiv.

See Eleni Vasilaki’s webpage here.




Anil Ozdemir is a postgraduate research associate in the Dept. of Computer Science at the University of Sheffield, UK.



