AIhub.org
 

An AI image generator for non-English speakers


17 March 2026




Although text-to-image generation is rapidly advancing, these AI models are mostly English-centric. This increases digital inequality for non-English speakers. Researchers at the University of Amsterdam Faculty of Science have created NeoBabel, an AI image generator that can work in six different languages. By making all elements of their research open source, anyone can build on the model and help push inclusive AI research.

When you generate an image with AI, the results are often better when your prompt is in English. This is because many AI models are English at their core: if you use another language, your prompt is translated into English before the image is created. However, most people worldwide are not native English speakers, which puts them at a disadvantage.

Meanwhile, text-to-text generators can speak over 200 languages fluently. That’s why researchers from the UvA Informatics Institute teamed up with Cohere Labs, a company specialising in text generation. The research team integrated an image-generation system into these text generators, creating an advanced multilingual image generator. The result, named NeoBabel, currently supports six languages: English, French, Dutch, Chinese, Hindi, and Persian.

Completely open source

Most image generation models are built by a few large U.S. companies, which rarely reveal all the details of their models. Cees Snoek, full professor in computer science and part of the NeoBabel research team: “Usually, most of the work is closed source, so we cannot see exactly how the model works. We don’t know if there are biases in the data, how the system was created, or how it can be improved. This goes against our academic principles.”

In contrast, alongside a paper publication about NeoBabel, the research team has made all their code and data public. Mohammad Derakhshani, PhD student and first author of the paper: “Personally, I wanted to build a tool for scientific exploration, and for that you need the full research pipeline. We made the entire pipeline public, so anyone interested in this field has all the information they need.”

A table and a bear

NeoBabel performs as well as existing image-generation models on English prompts, but clearly outperforms them in the other five languages. Competing models first translate prompts to English, whereas NeoBabel generates images directly from multiple languages. Snoek explains: “Translations lose the nuances of language and culture, because many words lack good English equivalents.” An example of such a mistranslation can be seen below, where the prompt requested an image of a table and a bear.

The prompt, given in Dutch, requested an image of a table and a bear. The Dutch word for bear is “beer”, which most image generators confuse with the drink.

The researchers also improved the labeling of the data used to train the AI model. They used multilingual language models to translate image labels into multiple languages and made those labels more descriptive. Snoek: “This allows us to train our model in all these languages simultaneously. For each language, it learns the connection between the words and the pixels.”
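The idea of translating each image label into every supported language, so the model learns word-to-pixel connections in all of them at once, can be sketched as follows. This is a minimal illustration, not the NeoBabel pipeline: the `translate` function is a hypothetical stand-in for the multilingual language model the researchers used, and all names are assumptions.

```python
# Hypothetical sketch: expand one English image caption into a training
# record per supported language, as the article describes.
# `translate` is a placeholder for a real multilingual language model.

LANGUAGES = ["en", "fr", "nl", "zh", "hi", "fa"]

def translate(caption: str, target_lang: str) -> str:
    # Placeholder: a real pipeline would call a multilingual LLM here.
    return f"[{target_lang}] {caption}"

def expand_caption(image_id: str, english_caption: str) -> list[dict]:
    """Return one (image, language, caption) training record per language."""
    return [
        {
            "image_id": image_id,
            "lang": lang,
            "caption": english_caption if lang == "en"
                       else translate(english_caption, lang),
        }
        for lang in LANGUAGES
    ]

pairs = expand_caption("img_001", "a wooden table next to a brown bear")
print(len(pairs))  # → 6, one caption per language
```

Training on all six caption variants of the same image is what lets a single model map each language directly onto pixels, rather than routing everything through English first.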

Thanks to the improved data, the AI model is also smaller than competing models: in technical terms, it has fewer parameters. Additionally, the researchers expanded the publicly available dataset of image-label pairs from 40 million to 124 million. Derakhshani: “This amount of data is usually not publicly accessible. We scaled up the dataset massively, even though we had limited computational power.”

Towards video

NeoBabel opens up a wide range of applications, including a multilingual creative canvas. On this digital canvas, multiple users can “paint” on the same image, each using their own language. Derakhshani explains: “If I only speak Persian and you only speak Dutch, we can co-create an image without using English. You might generate the first version in Dutch, and I can then mark a region and describe the changes in Persian. The model adapts the image accordingly.”

According to Snoek, the next step for NeoBabel is creating culturally specific images. However, this requires culture-specific data as well as greater computational power. “We could accomplish much more with a more substantial computational infrastructure,” Snoek says. “These AI models don’t have to come from large industry labs. The creativity is here, but we lack the resources to demonstrate it.”

The researchers are therefore seeking collaboration partners. In the long term, they would like to expand NeoBabel to video creation. Snoek: “My dream would be for it to be able to generate videos as well. There is a large television archive in Hilversum, “Beeld en Geluid”. It would be really great to collaborate with them to generate Dutch cultural videos.”

Find out more




University of Amsterdam
