 

New sparse RNN architecture applied to autonomous vehicle control


by Lucy Smith
26 October 2020



The network in action – steering an autonomous car. Image is a screenshot from a video created by the authors: Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus & Radu Grosu

Researchers from TU Wien, IST Austria and MIT have developed a recurrent neural network (RNN) method for application to specific tasks within an autonomous vehicle control system. What is interesting about this architecture is that it uses just a small number of neurons. This smaller scale allows for a greater level of generalization and interpretability compared with systems containing orders of magnitude more neurons.

The researchers found that a single network with just 19 control neurons, connecting 32 encapsulated input features to outputs via 253 synapses, learnt to map high-dimensional inputs into steering commands. This was achieved using a liquid time-constant RNN, a concept the team introduced in 2018. Liquid time-constant (LTC) RNNs are a subclass of continuous-time RNNs in which each neuron's time constant varies with its input rather than being fixed.
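
To get a feel for what "varying time constant" means, the sketch below simulates a single neuron whose decay rate is modulated by its input, so its effective time constant changes over time. This is a simplified, single-neuron illustration only; the exact equations, and the dependence of the nonlinearity on the neuron's own state and learned parameters, are given in the papers linked under "Find out more" below.

```python
import numpy as np

def ltc_neuron_step(x, I, dt=0.01, tau=1.0, A=1.0, w=4.0, b=-6.0):
    """One Euler step of a simplified liquid time-constant neuron."""
    # Bounded nonlinearity of the input; in the full model this term also
    # depends on the neuron's own state and on learned parameters.
    f = 1.0 / (1.0 + np.exp(-(w * I + b)))
    # The input-dependent term f adds to the leak 1/tau, so the effective
    # time constant tau / (1 + tau * f) shrinks when the input is strong.
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Drive the neuron with a step input: before the step it relaxes with a time
# constant close to tau, after the step it responds with a shorter effective
# time constant and is pulled towards A.
x, trace = 0.0, []
for t in range(400):
    I = 0.0 if t < 200 else 3.0
    x = ltc_neuron_step(x, I)
    trace.append(x)
```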

“The processing of the signals within the individual cells follows different mathematical principles than previous deep learning models,” noted Ramin Hasani (TU Wien and MIT CSAIL). “Also, our networks are highly sparse – this means that not every cell is connected to every other cell. This also makes the network simpler.”
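
The sparse wiring Hasani describes is what the authors call a neural circuit policy (NCP), and it is available in their open-source keras-ncp package (linked under "Find out more" below). The sketch below shows roughly how such a sparsely wired LTC layer can be assembled; the neuron counts and fan-outs here are illustrative rather than the exact circuit from the paper, and the API may differ slightly between package versions.

```python
import tensorflow as tf
from kerasncp import wirings
from kerasncp.tf import LTCCell

# Sparse "neural circuit policy" wiring: sensory -> inter -> command -> motor
# neurons, each with a limited fan-out instead of full connectivity.
wiring = wirings.NCP(
    inter_neurons=12,              # illustrative sizes, not the paper's exact circuit
    command_neurons=6,
    motor_neurons=1,               # one motor neuron = one steering output
    sensory_fanout=4,              # outgoing synapses per sensory neuron
    inter_fanout=4,                # outgoing synapses per inter neuron
    recurrent_command_synapses=6,  # recurrent synapses within the command layer
    motor_fanin=4,                 # incoming synapses per motor neuron
)

ltc_cell = LTCCell(wiring)

# Wrap the cell in a standard Keras RNN layer; the input is a sequence of
# 32-dimensional feature vectors, the output is one steering value per step.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, 32)),
    tf.keras.layers.RNN(ltc_cell, return_sequences=True),
])
```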

“Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving,” said Mathias Lechner (IST Austria). “However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters.”

You can watch the algorithm in action in a short video put together by the team.

The system works as follows: firstly, the camera input is processed by a convolutional neural network (CNN). This network decides which parts of the camera image are interesting and important, and then passes signals to the crucial part of the network – the RNN-based “control system” (as described above) that then steers the vehicle.
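
Schematically, the two-stage pipeline looks something like the sketch below. The layer sizes and the image resolution are illustrative, and a generic Keras recurrent layer stands in for the sparse LTC control network, so this is a rough outline of the architecture rather than the authors' exact model.

```python
import tensorflow as tf

# Convolutional head: applied to each camera frame independently, it
# compresses the image into a small feature vector.
cnn_head = tf.keras.Sequential([
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(32, activation="relu"),   # per-frame feature vector for the RNN
])

# Full pipeline: sequence of frames -> per-frame features -> recurrent
# control network -> one steering command per frame.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(None, 66, 200, 3)),  # (time, H, W, RGB), sizes illustrative
    tf.keras.layers.TimeDistributed(cnn_head),
    tf.keras.layers.SimpleRNN(19, return_sequences=True),  # stand-in for the sparse LTC control network
    tf.keras.layers.Dense(1),                               # steering angle
])
```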

Both parts of the system can be trained simultaneously. The training was carried out by feeding many hours of traffic videos into the network, together with information on how to steer the car in a given situation. Through this training, the system learnt the appropriate steering reaction depending on a particular situation.
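
In training terms this is ordinary supervised regression on recorded driving data. A minimal sketch, reusing the model from the sketch above, with placeholder arrays standing in for the actual video frames and recorded steering angles:

```python
import numpy as np
import tensorflow as tf

# Placeholder arrays standing in for recorded driving data: sequences of
# camera frames and the corresponding human steering angles.
frames = np.zeros((8, 16, 66, 200, 3), dtype="float32")   # (sequences, time, H, W, RGB)
steering = np.zeros((8, 16, 1), dtype="float32")           # (sequences, time, steering angle)

# Plain supervised regression: the whole pipeline (CNN head + recurrent
# control network) is trained end-to-end to reproduce the recorded steering.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(frames, steering, batch_size=4, epochs=10, validation_split=0.25)
```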

“Our model allows us to investigate what the network focuses its attention on while driving. Our networks focus on very specific parts of the camera picture: the curbside and the horizon. This behaviour is highly desirable, and it is unique among artificial intelligence systems,” said Ramin Hasani. “Moreover, we saw that the role of every single cell at any driving decision can be identified. We can understand the function of individual cells and their behaviour. Achieving this degree of interpretability is impossible for larger deep learning models.”

Find out more

The published article:
Neural circuit policies enabling auditable autonomy, Mathias Lechner, Ramin Hasani, Alexander Amini, Thomas A. Henzinger, Daniela Rus & Radu Grosu, Nature Machine Intelligence (2020).

GitHub code repository

Google Colab tutorial showing how to build three recurrent neural networks based on the LTC model

Google Colab notebook showing how to stack NCPs with other layers

arXiv article introducing the notion of liquid time-constant RNNs:
Liquid time-constant recurrent neural networks as universal approximators, Ramin M. Hasani, Mathias Lechner, Alexander Amini, Daniela Rus, and Radu Grosu.




Lucy Smith is Senior Managing Editor for AIhub.






