 

Pretrained transformers as universal computation engines


29 April 2021



[Figure: input-output tokens]

By Kevin Lu, Aditya Grover, Pieter Abbeel, and Igor Mordatch

Transformers have been successfully applied to a wide variety of modalities: natural language, vision, protein modeling, music, robotics, and more. A common approach with large models is to train a transformer on a large amount of data and then finetune it on a downstream task. This enables the model to utilize generalizable high-level embeddings learned on the large dataset and avoid overfitting to a small task-relevant dataset.

We investigate a new setting where, instead of transferring the high-level embeddings, we transfer the intermediate computation modules: rather than pretraining on a large image dataset and finetuning on a small image dataset, we might pretrain on a large language dataset and finetune on a small image dataset. Unlike conventional ideas that suggest the attention mechanism is specific to the training modality, we find that the self-attention layers can generalize to other modalities without finetuning.

To illustrate this, we take a pretrained transformer language model and transfer it to a variety of classification tasks: numerical computation, vision, and protein fold prediction. We freeze all of the self-attention blocks, except for the layer norm parameters. We then add a new linear input layer to read in the new type of input and reinitialize a linear output layer to perform classification on the new task; only these new layers and the layer norms are finetuned on the downstream task. We refer to this model as the “Frozen Pretrained Transformer” (FPT).
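As a concrete illustration of this setup, below is a minimal sketch assuming the HuggingFace transformers library with GPT-2 as the pretrained language model; the class and argument names (FrozenPretrainedTransformer, input_dim, num_classes) are illustrative, not taken from the paper's code.

```python
# A minimal sketch of the Frozen Pretrained Transformer setup, assuming the
# HuggingFace `transformers` library and GPT-2 as the pretrained language model.
# Class and argument names here are illustrative, not from the paper's codebase.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim: int, num_classes: int):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        hidden = self.gpt2.config.n_embd  # 768 for the base model

        # Freeze every pretrained parameter except the layer norms.
        for name, param in self.gpt2.named_parameters():
            param.requires_grad = "ln" in name  # ln_1, ln_2, ln_f in GPT-2

        # New, trainable input and output projections for the target modality.
        self.input_proj = nn.Linear(input_dim, hidden)
        self.output_proj = nn.Linear(hidden, num_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, sequence_length, input_dim) for the new modality
        embeddings = self.input_proj(tokens)
        hidden_states = self.gpt2(inputs_embeds=embeddings).last_hidden_state
        # Classify from the final token's representation (one simple choice).
        return self.output_proj(hidden_states[:, -1])
```

Only the new input projection, output projection, and the layer norm parameters receive gradients; everything else stays fixed at its language-pretrained values.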

Across the tasks, a token fed to the model represents only a small amount of information: for example, it could be a single bit, or a 4×4 image patch. In particular, the tokens can only communicate with each other via the self-attention mechanism, which is not trained at all on the downstream task. We investigate whether these mechanisms – learned exclusively from natural language data – can be used for another modality zero-shot.
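For concreteness, here is one way a 32×32 RGB image (as in CIFAR-10) can be turned into such patch tokens; this is a sketch assuming PyTorch, and the function name image_to_patch_tokens is illustrative.

```python
# A sketch of turning a 32x32 RGB image into a sequence of 4x4 patch tokens,
# assuming PyTorch; shapes follow CIFAR-10 but are otherwise illustrative.
import torch
import torch.nn.functional as F


def image_to_patch_tokens(images: torch.Tensor, patch_size: int = 4) -> torch.Tensor:
    """(batch, 3, 32, 32) -> (batch, 64, 48): 64 tokens of 4*4*3 = 48 values each."""
    patches = F.unfold(images, kernel_size=patch_size, stride=patch_size)
    return patches.transpose(1, 2)  # (batch, num_patches, patch_dim)


images = torch.randn(8, 3, 32, 32)        # a dummy batch
tokens = image_to_patch_tokens(images)    # (8, 64, 48)
# Each 48-dimensional token is then projected into the frozen transformer's
# embedding space by the trainable linear input layer described above.
```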

We show test accuracies for a variety of tasks below. We find that FPT can match or improve upon the performance of training a transformer fully from scratch! This indicates that, somehow, the attention mechanisms are general enough that we can feed in relatively arbitrary inputs and still generate useful embeddings for downstream classification.



We also find that, when computing the elementwise XOR of two bitstrings, learning only the input embeddings fed into the frozen self-attention layers is enough to force the attention to attend to the relevant bits, for strings of length up to 256 (a length-5 example is shown in the figure below).
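As a hypothetical illustration of how a Bit XOR example can be presented to such a model (the exact data pipeline in the paper may differ), each bit of the two input strings can be fed as its own one-dimensional token, with the elementwise XOR as the labels:

```python
# A hypothetical sketch of generating Bit XOR examples, assuming each bit of the
# two input strings is fed to the model as its own token; the actual task setup
# in the paper may differ in details.
import torch


def make_xor_batch(batch_size: int, length: int = 5):
    a = torch.randint(0, 2, (batch_size, length))
    b = torch.randint(0, 2, (batch_size, length))
    targets = a ^ b                                  # elementwise XOR labels
    # Concatenate the two strings into one token sequence of 0/1 inputs;
    # each token is a 1-dimensional "bit" fed through the linear input layer.
    tokens = torch.cat([a, b], dim=1).unsqueeze(-1).float()  # (batch, 2*length, 1)
    return tokens, targets


tokens, targets = make_xor_batch(batch_size=32, length=5)
# tokens: (32, 10, 1), targets: (32, 5) with entries in {0, 1}
```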



An open question is then what the benefit of pretraining on language is. Instead of initializing the transformer parameters from a pretrained language model, we can initialize them randomly or by pretraining on the Bit Memory task; these ablations correspond to no supervision and weak memory supervision, respectively. Our results indicate that all three initializations can work well, but language performs the best: for example, on CIFAR-10, the base FPT model achieves an accuracy of 68%, versus 63% from Bit Memory pretraining and 62% from random initialization. Furthermore, we find the language-pretrained frozen transformers converge faster than the randomly initialized frozen transformers, typically by a factor of 1-4x, indicating that language might be a good starting point for other tasks.

We also find the transformer architecture itself to be very important. If we compare a randomly initialized frozen transformer to a randomly initialized frozen LSTM, the transformer significantly outperforms the LSTM: for example, 62% vs 34% on CIFAR-10. Thus, we think attention may already be a naturally good prior for multimodal generalization; we could think of self-attention as applying data-dependent filters.
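To make the "data-dependent filter" view concrete, here is a minimal single-head self-attention sketch in PyTorch (projections written as plain matrices, no masking or multi-head machinery): the mixing weights are recomputed from the input itself via a softmax over similarity scores, unlike a fixed convolutional filter.

```python
# A minimal single-head self-attention sketch, assuming PyTorch, to illustrate
# the "data-dependent filter" view: the mixing weights are a softmax over
# similarities computed from the input itself, rather than fixed weights.
import math
import torch
import torch.nn.functional as F


def self_attention(x: torch.Tensor, w_q, w_k, w_v) -> torch.Tensor:
    # x: (batch, sequence_length, d_model); w_q / w_k / w_v: (d_model, d_model)
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
    weights = F.softmax(scores, dim=-1)  # data-dependent "filter" per position
    return weights @ v


d = 16
x = torch.randn(2, 10, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)   # (2, 10, 16)
```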

We’re very interested in developing a better understanding of the capabilities of language models and hybrid-modality transformers as universal computation engines. We think there are a lot of open questions to be explored in this space, and we are excited to see new work in multimodal training.


This post is based on the following paper: Pretrained Transformers as Universal Computation Engines, Kevin Lu, Aditya Grover, Pieter Abbeel, Igor Mordatch, arXiv:2103.05247, 2021.

This article was initially published on the BAIR blog, and appears here with the authors’ permission.



