AIhub.org
 

GPT-4 + Stable-Diffusion = ?: Enhancing prompt understanding of text-to-image diffusion models with large language models


26 June 2023



[Image: comparison of LMD and Stable Diffusion on two examples, one with bananas on a table and the other with cats on the grass]

By Long Lian, Boyi Li, Adam Yala and Trevor Darrell.

TL;DR: Text Prompt -> LLM -> Intermediate Representation (such as an image layout) -> Stable Diffusion -> Image.

Recent advances in text-to-image generation with diffusion models have yielded remarkable results, synthesizing highly realistic and diverse images. Despite these impressive capabilities, however, diffusion models such as Stable Diffusion often struggle to accurately follow prompts that require spatial or common-sense reasoning.

The following figure lists four scenarios in which Stable Diffusion falls short in generating images that accurately correspond to the given prompts: negation, numeracy, attribute assignment, and spatial relationships. In contrast, our method, LLM-grounded Diffusion (LMD), delivers much better prompt understanding in text-to-image generation in these scenarios.

Figure 1: LLM-grounded Diffusion enhances the prompt understanding ability of text-to-image diffusion models.

One possible solution is, of course, to gather a vast multi-modal dataset of intricate captions and train a large diffusion model with a large language encoder. This approach comes with significant costs: it is time-consuming and expensive to train both large language models (LLMs) and diffusion models.

Our solution

To efficiently solve this problem with minimal cost (i.e., no training costs), we instead equip diffusion models with enhanced spatial and common sense reasoning by using off-the-shelf frozen LLMs in a novel two-stage generation process.

First, we adapt an LLM to be a text-guided layout generator through in-context learning. When provided with an image prompt, an LLM outputs a scene layout in the form of bounding boxes along with corresponding individual descriptions. Second, we steer a diffusion model with a novel controller to generate images conditioned on the layout. Both stages utilize frozen pretrained models without any LLM or diffusion model parameter optimization. We invite readers to read the paper on arXiv for additional details.
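As a concrete sketch, the first stage amounts to prompting the LLM with a few caption-to-layout examples and parsing its reply into a set of boxes plus a background description. The reply format below (a Python-style list of `(description, [x, y, w, h])` boxes followed by a background line) is an illustrative assumption, not necessarily the paper's exact template:

```python
import ast

def parse_layout(llm_response: str):
    """Parse an LLM layout reply into (boxes, background_prompt).

    Assumed (hypothetical) reply format:
        [('a banana', [80, 200, 120, 90]), ...]
        Background prompt: a realistic photo of a wooden table
    """
    lines = [ln.strip() for ln in llm_response.strip().splitlines() if ln.strip()]
    boxes = ast.literal_eval(lines[0])          # [(description, [x, y, w, h]), ...]
    background = lines[1].split(":", 1)[1].strip()
    return boxes, background

# Example reply the LLM might produce for "two bananas on a wooden table":
reply = """
[('a banana', [80, 200, 120, 90]), ('a banana', [220, 200, 120, 90])]
Background prompt: a realistic photo of a wooden table
"""
boxes, background = parse_layout(reply)
print(len(boxes), background)  # 2 a realistic photo of a wooden table
```

The parsed boxes and background prompt are then handed to the layout-guided diffusion stage, which conditions generation on each box's region and description.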

Figure 2: LMD is a text-to-image generative model with a novel two-stage generation process: a text-to-layout generator with an LLM + in-context learning and a novel layout-guided stable diffusion. Both stages are training-free.

LMD’s additional capabilities

Additionally, LMD naturally allows dialog-based multi-round scene specification, enabling additional clarifications and subsequent modifications for each prompt. Furthermore, LMD is able to handle prompts in a language that is not well-supported by the underlying diffusion model.

Figure 3: Incorporating an LLM for prompt understanding, our method is able to perform dialog-based scene specification and generation from prompts in a language (Chinese in the example above) that the underlying diffusion model does not support.

Given an LLM that supports multi-round dialog (e.g., GPT-3.5 or GPT-4), LMD lets the user provide additional information or clarifications after the first layout has been generated, and then generates images with the updated layout from the LLM's subsequent response. For example, a user could request to add an object to the scene, or to change the location or description of existing objects (the left half of Figure 3).
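This dialog loop can be sketched minimally as follows: the conversation history (the in-context examples plus the previously generated layout) is retained, and the user's clarification is appended before re-querying the LLM. Here `query_llm` is a hypothetical placeholder for a real multi-round chat-completion call (e.g., to GPT-3.5/GPT-4):

```python
def refine_layout(history, clarification, query_llm):
    """Append a user's follow-up to the dialog and re-query for a new layout.

    `history` is a list of {"role", "content"} messages; `query_llm` is a
    stand-in for an actual multi-round chat LLM call.
    """
    history = history + [{"role": "user", "content": clarification}]
    new_layout = query_llm(history)  # LLM sees the full dialog, returns an updated layout
    history = history + [{"role": "assistant", "content": new_layout}]
    return new_layout, history

# Toy stand-in LLM for demonstration: echoes the last request.
def fake_llm(messages):
    return f"updated layout for: {messages[-1]['content']}"

history = [
    {"role": "user", "content": "A photo of two cats on the grass"},
    {"role": "assistant",
     "content": "[('a cat', [60, 180, 130, 110]), ('a cat', [250, 180, 130, 110])]"},
]
layout, history = refine_layout(history, "Add a dog on the right", fake_llm)
print(layout)        # updated layout for: Add a dog on the right
print(len(history))  # 4
```

Each updated layout is then passed to the same layout-guided diffusion stage, so the image is regenerated to reflect the clarified scene.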

Furthermore, by including one in-context example that pairs a non-English prompt with a layout and background description in English, LMD accepts non-English prompts and generates layouts with box and background descriptions in English for the subsequent layout-to-image stage. As shown in the right half of Figure 3, this enables generation from prompts in a language that the underlying diffusion model does not support.
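The idea can be sketched as assembling the in-context prompt so that one example maps a non-English caption to an English layout, cueing the LLM to emit English layouts regardless of the prompt language. The prompt template and example contents below are illustrative assumptions, not the paper's verbatim prompt:

```python
def build_prompt(examples, user_prompt):
    """Assemble a few-shot caption-to-layout prompt (hypothetical template)."""
    parts = []
    for caption, layout in examples:
        parts.append(f"Caption: {caption}\nLayout:\n{layout}")
    parts.append(f"Caption: {user_prompt}\nLayout:")
    return "\n\n".join(parts)

examples = [
    ("A realistic photo of two bananas on a table",
     "[('a banana', [80, 200, 120, 90]), ('a banana', [220, 200, 120, 90])]\n"
     "Background prompt: a realistic photo of a wooden table"),
    # A non-English caption ("a realistic photo of a raccoon standing in a
    # forest" in Chinese), paired with an English layout:
    ("一只站在森林里的浣熊的写实照片",
     "[('a raccoon', [120, 150, 180, 200])]\n"
     "Background prompt: a realistic photo of a forest"),
]

# Query with a Chinese prompt ("a photo of three cats on the grass"):
prompt = build_prompt(examples, "草地上有三只猫的照片")
print(prompt.endswith("Layout:"))  # True
```

The LLM completes the trailing `Layout:` in English, so the diffusion stage only ever sees English box and background descriptions.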

Visualizations

We validate the superiority of our design by comparing it with the base diffusion model (SD 2.1) that LMD uses under the hood. We invite readers to read our paper for more evaluations and comparisons.

Figure 4: LMD outperforms the base diffusion model in accurately generating images according to prompts that necessitate both language and spatial reasoning. LMD also enables counterfactual text-to-image generation that the base diffusion model is not able to generate (the last row).

For more details about LLM-grounded Diffusion (LMD), visit our website and read the paper on arXiv.

BibTeX

If LLM-grounded Diffusion inspires your work, please cite it with:

@article{lian2023llmgrounded,
title={LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models},
author={Lian, Long and Li, Boyi and Yala, Adam and Darrell, Trevor},
journal={arXiv preprint arXiv:2305.13655},
year={2023}
}


This article was initially published on the BAIR blog, and appears here with the authors’ permission.


