AIhub.org
 

Diffusion model approach tackles aspect ratio problem in generative AI images


20 September 2024




The picture on the left was generated by a standard method while the picture on the right was generated by ElasticDiffusion. The prompt for both images was, “Photo of an athlete cat explaining its latest scandal at a press conference to journalists.” (Image courtesy of Moayed Haji Ali/Rice University.)

By John Bogna

Generative artificial intelligence (AI) has notoriously struggled to create consistent images, often getting details like fingers and facial symmetry wrong. Moreover, these models can fail completely when prompted to generate images at different sizes and resolutions.

Rice University computer scientists’ new method of generating images with pre-trained diffusion models (a class of generative AI models that “learn” by adding layer after layer of random noise to the images they are trained on, then generate new images by removing that noise) could help correct such issues.
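The noise-adding half of that training process has a simple closed form. The sketch below is a minimal, illustrative version (not the paper's code): it blends a toy "image" with Gaussian noise according to a standard linear noise schedule, so that early timesteps are mostly signal and late timesteps are mostly noise; the schedule parameters are conventional defaults, not values from ElasticDiffusion.

```python
import numpy as np

def add_noise(x0, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Forward diffusion step: blend a clean image x0 with Gaussian noise.

    Uses the closed form
        x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_i) up to step t.
    A denoising model is then trained to reverse this process step by step.
    """
    rng = rng or np.random.default_rng(0)
    betas = np.linspace(beta_start, beta_end, T)   # linear noise schedule
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

x0 = np.ones((8, 8))           # toy "image" of constant pixels
x_early = add_noise(x0, t=10)  # early step: still close to x0
x_late = add_noise(x0, t=990)  # late step: almost pure noise
```

Generating an image then amounts to running this process in reverse: starting from pure noise and repeatedly subtracting the noise the model predicts.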

Moayed Haji Ali, a Rice University computer science doctoral student, described the new approach, called ElasticDiffusion, in a peer-reviewed paper presented at the Institute of Electrical and Electronics Engineers (IEEE) 2024 Conference on Computer Vision and Pattern Recognition (CVPR) in Seattle.

Moayed Haji Ali is a Rice University computer science doctoral student. (Photo by Vicente Ordóñez-Román/Rice University.)

“Diffusion models like Stable Diffusion, Midjourney, and DALL-E create impressive results, generating fairly lifelike and photorealistic images,” Haji Ali said. “But they have a weakness: They can only generate square images. So, in cases where you have different aspect ratios, like on a monitor or a smartwatch … that’s where these models become problematic.”

If you tell a model like Stable Diffusion to create a non-square image, say one with a 16:9 aspect ratio, the elements used to build the generated image get repetitive. That repetition shows up as strange-looking deformities in the image or its subjects, like people with six fingers or a strangely elongated car.

The way these models are trained also contributes to the issue.

“If you train the model on only images that are a certain resolution, they can only generate images with that resolution,” said Vicente Ordóñez-Román, an associate professor of computer science who advised Haji Ali on his work alongside Guha Balakrishnan, assistant professor of electrical and computer engineering.

Ordóñez-Román explained that this is a problem endemic to AI known as overfitting, where an AI model becomes excessively good at generating data similar to what it was trained on, but cannot deviate far outside those parameters.

“You could solve that by training the model on a wider variety of images, but it’s expensive and requires massive amounts of computing power: hundreds, maybe even thousands, of graphics processing units,” Ordóñez-Román said.

Moayed Haji Ali, a Rice University computer science doctoral student, presents his poster at CVPR. (Photo by Vicente Ordóñez-Román/Rice University.)

According to Haji Ali, the digital noise used by diffusion models can be translated into a signal with two data types: local and global. The local signal contains pixel-level detail information like the shape of an eye or the texture of a dog’s fur. The global signal contains more of an overall outline of the image.

“One reason diffusion models need help with non-square aspect ratios is that they usually package local and global information together,” said Haji Ali, who worked on synthesizing motion in AI-generated videos before joining Ordóñez-Román’s research group at Rice for his Ph.D. studies. “When the model tries to duplicate that data to account for the extra space in a non-square image, it results in visual imperfections.”

The ElasticDiffusion method in Haji Ali’s paper takes a different approach to creating an image. Instead of packaging both signals together, ElasticDiffusion separates the local and global signals into conditional and unconditional generation paths. It subtracts the conditional model from the unconditional model, obtaining a score that contains global image information.
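In classifier-free-guidance terms, this separation can be sketched as follows. This is an illustrative toy, not the paper's implementation: the variable names are hypothetical, the toy predictions stand in for two forward passes of a real denoiser (with and without the text prompt), and the sign convention follows standard classifier-free guidance, where the difference between the two predictions carries the prompt-driven global signal.

```python
import numpy as np

# Toy stand-ins for the model's two noise predictions for one image.
rng = np.random.default_rng(0)
eps_uncond = rng.standard_normal((8, 8))       # unconditional: local detail
eps_cond = eps_uncond + 0.1 * np.ones((8, 8))  # the prompt shifts the prediction

def split_signals(eps_cond, eps_uncond, guidance_scale=7.5):
    """Separate a local component (the unconditional prediction) from a
    global component (the difference between the conditional and
    unconditional predictions), then recombine them as in standard
    classifier-free guidance."""
    local = eps_uncond
    global_part = eps_cond - eps_uncond
    guided = local + guidance_scale * global_part
    return local, global_part, guided

local, global_part, guided = split_signals(eps_cond, eps_uncond)
```

The point of the decomposition is that the two parts can now be handled separately: the local part patch by patch, the global part once for the whole canvas.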

After that, the unconditional path with the local pixel-level detail is applied to the image in quadrants, filling in the details one square at a time. The global information, such as the intended aspect ratio and what the image depicts (a dog, a person running, etc.), remains separate, so there is no chance of the AI confusing the signals and repeating data. The result is a cleaner image at any aspect ratio, with no additional training required.
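The quadrant-by-quadrant idea can be sketched in a few lines. This is a simplified illustration, not ElasticDiffusion's actual decoding loop (the function name and the toy update are hypothetical): a local, pixel-level update is applied to each square region of a non-square canvas independently, while any global decision (here, simply the canvas shape) is fixed once up front.

```python
import numpy as np

def apply_in_quadrants(image, local_update):
    """Apply a pixel-level update to each quadrant of an image
    independently, filling in detail one square region at a time."""
    h, w = image.shape
    out = np.empty_like(image)
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            out[rows, cols] = local_update(image[rows, cols])
    return out

# A non-square (4 x 6) toy canvas: the four quadrants tile it exactly.
image = np.arange(24, dtype=float).reshape(4, 6)
result = apply_in_quadrants(image, lambda patch: patch + 1.0)
```

Because each quadrant only ever sees local detail, the local signal is never stretched or duplicated to cover the wider canvas, which is what produces the repetition artifacts in standard generation.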

The picture on the left was generated by a standard method while the picture on the right was generated by ElasticDiffusion. The prompt for both images was, “Envision a portrait of a cute scientist owl in blue and gray outfit announcing their latest breakthrough discovery. His eyes are light brown. His attire is simple yet dignified.” (Image courtesy of Moayed Haji Ali/Rice University.)

“This approach is a successful attempt to leverage the intermediate representations of the model to scale them up so that you get global consistency,” Ordóñez-Román said.

The only drawback to ElasticDiffusion relative to other diffusion models is time. Currently, Haji Ali’s method takes six to nine times as long to make an image. The goal is to reduce that to the same inference time as other models like Stable Diffusion or DALL-E.

“Where I’m hoping that this research is going is to define…why diffusion models generate these more repetitive parts and can’t adapt to these changing aspect ratios and come up with a framework that can adapt to exactly any aspect ratio regardless of the training, at the same inference time,” said Haji Ali.

Find out more: Rice University
