AIhub.org
 

An AI image generator for non-English speakers


by
17 March 2026




Although text-to-image generation is rapidly advancing, these AI models are mostly English-centric, which deepens digital inequality for non-English speakers. Researchers at the University of Amsterdam Faculty of Science have created NeoBabel, an AI image generator that works in six different languages. Because they have made every element of their research open source, anyone can build on the model and help push inclusive AI research forward.

When you generate an image with AI, the results are often better when your prompt is in English. This is because many AI models are English at their core: if you use another language, your prompt is translated into English before the image is created. However, most people worldwide are not native English speakers, which puts them at a disadvantage.

Meanwhile, text-to-text generators can speak over 200 languages fluently. That’s why researchers from the UvA Informatics Institute teamed up with Cohere Labs, a company specialised in text generation. The research team integrated an image generation system into these text generators, creating an advanced multilingual image generator. The image generator, named NeoBabel, currently supports six languages: English, French, Dutch, Chinese, Hindi, and Persian.

Completely open source

Most image generation models are built by a few large U.S. companies, which rarely reveal the full details of their models. Cees Snoek, full professor in computer science and part of the NeoBabel research team: “Usually, most of the work is closed source, so we cannot see exactly how the model works. We don’t know if there are biases in the data, how the system was created, or how it can be improved. This goes against our academic principles.”

In contrast, alongside a paper publication about NeoBabel, the research team has made all their code and data public. Mohammad Derakhshani, PhD student and first author of the paper: “Personally, I wanted to build a tool for scientific exploration, and for that you need the full research pipeline. We made the entire pipeline public, so anyone interested in this field has all the information they need.”

A table and a bear

NeoBabel performs as well as existing image generation models in English but easily outperforms them in the other five languages. Competing models first translate prompts to English, whereas NeoBabel generates images directly from multiple languages. Snoek explains: “Translations lose the nuances of language and culture, because many words lack good English equivalents.” An example of such a mistranslation can be seen below, where the prompt requested an image of a table and a bear.

The prompt, written in Dutch, requested an image of a table and a bear. The Dutch word for bear is “beer”, which confuses most image generators: they read it as the English word for the drink.

The researchers also improved the labeling of the data used to train the AI model. They used multilingual language models to translate image labels into multiple languages and made those labels more descriptive. Snoek: “This allows us to train our model in all these languages simultaneously. For each language, it learns the connection between the words and the pixels.”
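The paragraph above describes translating English image labels into all supported languages so one model can be trained on every language at once. As a minimal sketch of that idea, the toy Python below expands each (image, English caption) pair into one training pair per language; the lookup-table `translate` function is a hypothetical stand-in for the multilingual language models the team actually used, and all names here are illustrative, not NeoBabel’s real API.

```python
# Toy sketch: expand image-caption pairs into all supported languages
# before training, so the model sees every language simultaneously.
# A dictionary stands in for a real multilingual translation model.

TARGET_LANGUAGES = ["en", "fr", "nl", "zh", "hi", "fa"]

# Hypothetical stand-in for a multilingual LLM translation call.
TOY_TRANSLATIONS = {
    ("a bear at a table", "nl"): "een beer aan een tafel",
    ("a bear at a table", "fr"): "un ours à une table",
}

def translate(caption: str, lang: str) -> str:
    """Return the caption in the target language (toy lookup; falls back
    to the English caption when no translation is available)."""
    if lang == "en":
        return caption
    return TOY_TRANSLATIONS.get((caption, lang), caption)

def expand_pairs(pairs):
    """Turn each (image_id, English caption) pair into one training
    example per supported language."""
    expanded = []
    for image_id, caption in pairs:
        for lang in TARGET_LANGUAGES:
            expanded.append({
                "image": image_id,
                "lang": lang,
                "caption": translate(caption, lang),
            })
    return expanded

dataset = expand_pairs([("img_001.png", "a bear at a table")])
print(len(dataset))  # → 6, one example per language
```

Training on such an expanded set is what lets the model learn the word-to-pixel connection for each language directly, rather than routing every prompt through English first.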

Thanks to this improved data, the AI model is also smaller than competing models – in technical terms, it has fewer parameters. Additionally, the researchers expanded the publicly available dataset of image-label pairs from 40 million to 124 million. Derakhshani: “This amount of data is usually not publicly accessible. We scaled up the dataset massively, even though we had limited computational power.”

Towards video

NeoBabel opens up a wide range of applications, including a multilingual creative canvas. On this digital canvas, multiple users can “paint” on the same image, each using their own language. Derakhshani explains: “If I only speak Persian and you only speak Dutch, we can co-create an image without using English. You might generate the first version in Dutch, and I can then mark a region and describe the changes in Persian. The model adapts the image accordingly.”

According to Snoek, the next step for NeoBabel is creating culturally specific images. However, this requires culture-specific data as well as greater computational power. “We could accomplish much more with a more substantial computational infrastructure,” Snoek says. “These AI models don’t have to come from large industry labs. The creativity is here, but we lack the resources to demonstrate it.”

The researchers are therefore seeking collaboration partners. In the long term, they would like to expand NeoBabel to video creation. Snoek: “My dream would be for it to be able to generate videos as well. There is a large television archive in Hilversum, “Beeld en Geluid”. It would be really great to collaborate with them to generate Dutch cultural videos.”

Find out more




University of Amsterdam

            AIhub is supported by:



Subscribe to AIhub newsletter on substack










 















©2026.02 - Association for the Understanding of Artificial Intelligence