GPT-4 + Stable-Diffusion = ?: Enhancing prompt understanding of text-to-image diffusion models with large language models


26 June 2023




By Long Lian, Boyi Li, Adam Yala and Trevor Darrell.

TL;DR: Text Prompt -> LLM -> Intermediate Representation (such as an image layout) -> Stable Diffusion -> Image.

Recent advancements in text-to-image generation with diffusion models have yielded remarkable results in synthesizing highly realistic and diverse images. However, despite their impressive capabilities, diffusion models such as Stable Diffusion often struggle to follow prompts accurately when spatial or common-sense reasoning is required.

The following figure lists four scenarios in which Stable Diffusion falls short in generating images that accurately correspond to the given prompts, namely negation, numeracy, attribute assignment, and spatial relationships. In contrast, our method, LLM-grounded Diffusion (LMD), delivers much better prompt understanding in text-to-image generation in those scenarios.

Figure 1: LLM-grounded Diffusion enhances the prompt understanding ability of text-to-image diffusion models.

One possible solution to this issue is, of course, to gather a vast multi-modal dataset of intricate captions and train a large diffusion model with a large language encoder. This approach comes with significant costs: it is time-consuming and expensive to train both large language models (LLMs) and diffusion models.

Our solution

To efficiently solve this problem with minimal cost (i.e., no training costs), we instead equip diffusion models with enhanced spatial and common sense reasoning by using off-the-shelf frozen LLMs in a novel two-stage generation process.

First, we adapt an LLM to be a text-guided layout generator through in-context learning. When provided with an image prompt, an LLM outputs a scene layout in the form of bounding boxes along with corresponding individual descriptions. Second, we steer a diffusion model with a novel controller to generate images conditioned on the layout. Both stages utilize frozen pretrained models without any LLM or diffusion model parameter optimization. We invite readers to read the paper on arXiv for additional details.
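For illustration, here is a minimal sketch of the two-stage pipeline in Python. The in-context prompt, the JSON layout format, and the helper functions query_llm and layout_guided_diffusion are hypothetical placeholders rather than the actual implementation; the real prompts and the layout-guided controller are described in the paper.

# A minimal sketch of LMD's two-stage pipeline (not the authors' exact code).
# The in-context example, the JSON layout format, and both helper functions
# below (query_llm, layout_guided_diffusion) are hypothetical placeholders.
import json

IN_CONTEXT_PROMPT = """You are a layout generator. Given a caption, output a JSON object
with "background" (a short scene description) and "objects", a list of
[description, [x, y, width, height]] pairs on a 512x512 canvas.

Caption: A realistic photo of a gray cat and an orange dog on the grass.
Layout: {"background": "a grass field", "objects": [["a gray cat", [60, 200, 180, 220]], ["an orange dog", [290, 190, 190, 230]]]}
"""

def query_llm(prompt: str) -> str:
    """Send the prompt to a frozen LLM (e.g. GPT-3.5/GPT-4) and return its completion.
    Plug in your preferred chat/completions client here."""
    raise NotImplementedError

def text_to_layout(caption: str) -> dict:
    """Stage 1: text-guided layout generation via in-context learning."""
    completion = query_llm(IN_CONTEXT_PROMPT + f"\nCaption: {caption}\nLayout:")
    return json.loads(completion)  # {"background": ..., "objects": [...]}

def layout_guided_diffusion(layout: dict):
    """Stage 2: steer a frozen Stable Diffusion model so each box region is
    generated according to its own description (LMD's layout-guided controller).
    Placeholder for the layout-conditioned sampling loop."""
    raise NotImplementedError

def lmd_generate(caption: str):
    layout = text_to_layout(caption)        # text prompt -> intermediate layout
    return layout_guided_diffusion(layout)  # layout -> image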

Figure 2: LMD is a text-to-image generative model with a novel two-stage generation process: a text-to-layout generator with an LLM + in-context learning and a novel layout-guided stable diffusion. Both stages are training-free.

LMD’s additional capabilities

Additionally, LMD naturally allows dialog-based multi-round scene specification, enabling additional clarifications and subsequent modifications for each prompt. Furthermore, LMD is able to handle prompts in a language that is not well-supported by the underlying diffusion model.

Figure 3: Incorporating an LLM for prompt understanding, our method is able to perform dialog-based scene specification and generation from prompts in a language (Chinese in the example above) that the underlying diffusion model does not support.

Given an LLM that supports multi-round dialog (e.g., GPT-3.5 or GPT-4), LMD allows the user to provide additional information or clarifications after the first layout generation by querying the LLM within the same dialog, and then to generate images with the updated layout from the LLM's subsequent response. For example, a user could request to add an object to the scene, or change the location or description of existing objects (the left half of Figure 3).
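The following sketch shows how such a multi-round interaction might look, assuming a chat-style LLM interface; the chat helper and the message contents are hypothetical and only meant to illustrate that the dialog updates the layout, not the pixels.

# A sketch of dialog-based scene specification, assuming a chat-style LLM
# interface. `chat` is a hypothetical helper that sends the full message
# history to a multi-round LLM (e.g. GPT-4) and returns the reply as a JSON
# layout string; the system prompt would contain the in-context examples.
import json

def chat(messages: list) -> str:
    raise NotImplementedError  # plug in a multi-round chat LLM client here

messages = [
    {"role": "system", "content": "You are a layout generator. ..."},
    {"role": "user", "content": "A realistic photo of 4 bananas on a wooden table."},
]
first_layout = json.loads(chat(messages))    # initial layout -> image (stage 2)

# The user clarifies the scene; the LLM revises the layout rather than the image.
messages.append({"role": "assistant", "content": json.dumps(first_layout)})
messages.append({"role": "user", "content": "Add a red apple to the left of the bananas."})
updated_layout = json.loads(chat(messages))  # updated layout -> regenerated image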

Furthermore, by including an in-context example that pairs a non-English prompt with a layout and background description written in English, LMD accepts non-English prompts and generates layouts whose box and background descriptions are in English, ready for the subsequent layout-to-image stage. As shown in the right half of Figure 3, this allows generation from prompts in a language that the underlying diffusion model does not support.
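The sketch below illustrates this idea, reusing the hypothetical query_llm helper from the earlier sketch; the Chinese caption and the boxes in the in-context example are illustrative only, not taken from the paper.

# A sketch of cross-lingual layout generation: the in-context example pairs a
# non-English caption with an English layout, so the LLM emits English box and
# background descriptions regardless of the prompt language.
MULTILINGUAL_PROMPT = """Given a caption in any language, output the layout in English.

Caption: 一个木桌上放着四根香蕉 ("four bananas on a wooden table")
Layout: {"background": "a wooden table", "objects": [["a banana", [80, 220, 90, 60]], ["a banana", [180, 220, 90, 60]], ["a banana", [280, 220, 90, 60]], ["a banana", [380, 220, 90, 60]]]}
"""

def layout_from_any_language(caption: str) -> str:
    # The returned layout is in English, ready for the English-only diffusion backbone.
    return query_llm(MULTILINGUAL_PROMPT + f"\nCaption: {caption}\nLayout:")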

Visualizations

We validate the superiority of our design by comparing it with the base diffusion model (SD 2.1) that LMD uses under the hood. We invite readers to refer to our paper for more evaluations and comparisons.

Figure 4: LMD outperforms the base diffusion model in accurately generating images according to prompts that require both language and spatial reasoning. LMD also enables counterfactual text-to-image generation that the base diffusion model is not able to handle (the last row).

For more details about LLM-grounded Diffusion (LMD), visit our website and read the paper on arXiv.

BibTeX

If LLM-grounded Diffusion inspires your work, please cite it with:

@article{lian2023llmgrounded,
  title={LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models},
  author={Lian, Long and Li, Boyi and Yala, Adam and Darrell, Trevor},
  journal={arXiv preprint arXiv:2305.13655},
  year={2023}
}


This article was initially published on the BAIR blog, and appears here with the authors’ permission.


