Diffusion model approach tackles aspect ratio problem in generative AI images


20 September 2024




The picture on the left was generated by a standard method while the picture on the right was generated by ElasticDiffusion. The prompt for both images was, “Photo of an athlete cat explaining its latest scandal at a press conference to journalists.” (Image courtesy of Moayed Haji Ali/Rice University.)

By John Bogna

Generative artificial intelligence (AI) has notoriously struggled to create consistent images, often getting details like fingers and facial symmetry wrong. Moreover, these models can fail completely when prompted to generate images at sizes and resolutions different from those they were trained on.

Rice University computer scientists’ new method of generating images with pre-trained diffusion models (a class of generative AI models that “learn” by adding layer after layer of random noise to the images they are trained on and then generate new images by removing the added noise) could help correct such issues.

Moayed Haji Ali, a Rice University computer science doctoral student, described the new approach, called ElasticDiffusion, in a peer-reviewed paper presented at the Institute of Electrical and Electronics Engineers (IEEE) 2024 Conference on Computer Vision and Pattern Recognition (CVPR) in Seattle.

Moayed Haji Ali is a Rice University computer science doctoral student. (Photo by Vicente Ordóñez-Román/Rice University.)

“Diffusion models like Stable Diffusion, Midjourney, and DALL-E create impressive results, generating fairly lifelike and photorealistic images,” Haji Ali said. “But they have a weakness: They can only generate square images. So, in cases where you have different aspect ratios, like on a monitor or a smartwatch … that’s where these models become problematic.”

If you tell a model like Stable Diffusion to create a non-square image, say one with a 16:9 aspect ratio, the elements used to build the generated image get repetitive. That repetition shows up as strange-looking deformities in the image or its subjects, like people with six fingers or a strangely elongated car.

The way these models are trained also contributes to the issue.

“If you train the model on only images that are a certain resolution, they can only generate images with that resolution,” said Vicente Ordóñez-Román, an associate professor of computer science who advised Haji Ali on his work alongside Guha Balakrishnan, assistant professor of electrical and computer engineering.

Ordóñez-Román explained that this is a problem endemic to AI known as overfitting, where an AI model becomes excessively good at generating data similar to what it was trained on, but cannot deviate far outside those parameters.

“You could solve that by training the model on a wider variety of images, but it’s expensive and requires massive amounts of computing power: hundreds, maybe even thousands, of graphics processing units,” Ordóñez-Román said.

Moayed Haji Ali, a Rice University computer science doctoral student, presents his poster at CVPR. (Photo by Vicente Ordóñez-Román/Rice University.)

According to Haji Ali, the digital noise used by diffusion models can be separated into two kinds of signal: local and global. The local signal carries pixel-level detail, like the shape of an eye or the texture of a dog’s fur, while the global signal carries the overall outline of the image.

“One reason diffusion models need help with non-square aspect ratios is that they usually package local and global information together,” said Haji Ali, who worked on synthesizing motion in AI-generated videos before joining Ordóñez-Román’s research group at Rice for his Ph.D. studies. “When the model tries to duplicate that data to account for the extra space in a non-square image, it results in visual imperfections.”

The ElasticDiffusion method in Haji Ali’s paper takes a different approach to creating an image. Instead of packaging both signals together, ElasticDiffusion separates the local and global signals into conditional and unconditional generation paths. Subtracting the unconditional model’s output from the conditional model’s yields a score that contains the global image information.
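In broad strokes, this separation resembles the classifier-free guidance difference that text-to-image diffusion models already compute at each denoising step. Below is a minimal sketch of that idea, not the authors’ code; it assumes a diffusers-style UNet that predicts noise from a latent, a timestep, and a text embedding, and the names `unet`, `text_emb`, and `null_emb` are illustrative.

```python
# Minimal sketch of the global/local signal separation, assuming a
# diffusers-style UNet. Illustrative only; not the authors' code.
import torch

def split_signals(unet, latent, t, text_emb, null_emb):
    # Unconditional prediction: mostly local, pixel-level detail.
    eps_uncond = unet(latent, t, encoder_hidden_states=null_emb).sample
    # Conditional prediction: detail plus prompt-driven content.
    eps_cond = unet(latent, t, encoder_hidden_states=text_emb).sample
    # Their difference isolates the global, prompt-driven signal.
    global_signal = eps_cond - eps_uncond
    local_signal = eps_uncond
    return global_signal, local_signal
```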

After that, the unconditional path with the local pixel-level detail is applied to the image in quadrants, filling in the details one square at a time. The global information (what the image aspect ratio should be and what the image is: a dog, a person running, etc.) remains separate, so there is no chance of the AI confusing the signals and repeating data. The result is a cleaner image at any aspect ratio, achieved without any additional training.
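Put together, one denoising step might look like the sketch below, reusing `split_signals` from the sketch above. This too is illustrative: the tile size, the guidance scale of 7.5, and the bilinear resizing are assumptions, the target latent is assumed to be an exact multiple of the tile size, and the patch overlapping and blending a real implementation needs are omitted.

```python
# Hypothetical single denoising step for a non-square latent.
import torch
import torch.nn.functional as F

def elastic_step(unet, latent, t, text_emb, null_emb, tile=64, scale=7.5):
    # `latent` can be non-square; assume H and W are multiples of `tile`.
    _, _, H, W = latent.shape

    # Global signal: computed once at the model's native square size,
    # then stretched to the target aspect ratio.
    square = F.interpolate(latent, size=(tile, tile), mode="bilinear")
    g_small, _ = split_signals(unet, square, t, text_emb, null_emb)
    global_signal = F.interpolate(g_small, size=(H, W), mode="bilinear")

    # Local signal: estimated on square crops one at a time, so the
    # model only ever sees inputs shaped like its training data.
    local_signal = torch.zeros_like(latent)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            crop = latent[:, :, y:y + tile, x:x + tile]
            eps = unet(crop, t, encoder_hidden_states=null_emb).sample
            local_signal[:, :, y:y + tile, x:x + tile] = eps

    # Recombine local detail with the guidance-scaled global direction.
    return local_signal + scale * global_signal
```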

The picture on the left was generated by a standard method while the picture on the right was generated by ElasticDiffusion. The prompt for both images was, “Envision a portrait of a cute scientist owl in blue and gray outfit announcing their latest breakthrough discovery. His eyes are light brown. His attire is simple yet dignified.” (Image courtesy of Moayed Haji Ali/Rice University.)

“This approach is a successful attempt to leverage the intermediate representations of the model to scale them up so that you get global consistency,” Ordóñez-Román said.

The only drawback to ElasticDiffusion relative to other diffusion models is time: currently, Haji Ali’s method takes six to nine times as long to make an image. The goal is to reduce that to the same inference time as other models like Stable Diffusion or DALL-E.

“Where I’m hoping that this research is going is to define…why diffusion models generate these more repetitive parts and can’t adapt to these changing aspect ratios and come up with a framework that can adapt to exactly any aspect ratio regardless of the training, at the same inference time,” said Haji Ali.

Find out more




Rice University






