What is it about?

Given an input image and a mask, Blended Latent Diffusion modifies the masked area according to a guiding text prompt, without affecting the unmasked regions. It aims to solve the task of local text-driven editing of generic images that was introduced in the Blended Diffusion paper. Blended Diffusion suffered from slow inference (obtaining a good result took about 25 minutes on a single GPU) and from pixel-level artifacts.

Why is it important?

The tremendous progress in neural image generation, coupled with the emergence of seemingly omnipotent vision-language models, has finally enabled text-based interfaces for creating and editing images. Handling generic images requires a diverse underlying generative model, hence the latest works utilize diffusion models, which have been shown to surpass GANs in terms of diversity. One major drawback of diffusion models, however, is their relatively slow inference time. In this paper, we present an accelerated solution to the task of local text-driven editing of generic images, where the desired edits are confined to a user-provided mask. Our solution leverages a recent text-to-image Latent Diffusion Model (LDM), which speeds up diffusion by operating in a lower-dimensional latent space. We first convert the LDM into a local image editor by incorporating Blended Diffusion into it. Next, we propose an optimization-based solution for the inherent inability of this LDM to accurately reconstruct images. Finally, we address the scenario of performing local edits using thin masks. We evaluate our method against the available baselines both qualitatively and quantitatively, and demonstrate that in addition to being faster, our method achieves better precision than the baselines while mitigating some of their artifacts.
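
To make the blending idea concrete, below is a minimal sketch of the per-step latent blending loop: at every denoising step, the text-guided (edited) latents and a noised copy of the source-image latents are composited with the mask downsampled to the latent resolution, so only the masked region follows the prompt. It assumes Hugging Face diffusers-style `vae`, `unet`, and `scheduler` objects and a precomputed text embedding `text_emb` (illustrative names, not the authors' released code), and it omits classifier-free guidance, the background reconstruction optimization, and the thin-mask handling described above.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def blended_latent_edit(vae, unet, scheduler, text_emb, image, mask, steps=50):
    """Sketch: blend edited and source latents under a mask at every step."""
    # Encode the source image into latent space (0.18215 is the usual LDM scaling factor).
    z_init = vae.encode(image).latent_dist.sample() * 0.18215

    # Downsample the binary mask to the latent resolution.
    m = F.interpolate(mask, size=z_init.shape[-2:], mode="nearest")

    scheduler.set_timesteps(steps)
    z = torch.randn_like(z_init)  # the edited region starts from pure noise

    for t in scheduler.timesteps:
        # Predict noise for the current latents, conditioned on the text prompt.
        eps = unet(z, t, encoder_hidden_states=text_emb).sample
        z_fg = scheduler.step(eps, t, z).prev_sample

        # Noise the original latents to the same timestep and blend under the mask.
        z_bg = scheduler.add_noise(z_init, torch.randn_like(z_init), t)
        z = z_fg * m + z_bg * (1 - m)

    # Decode the blended latents back to pixel space.
    return vae.decode(z / 0.18215).sample
```

Because every operation in this loop runs at the latent resolution rather than on full-resolution pixels, the edit is much faster than pixel-space blended diffusion, which is the speedup the paper reports.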

Read the Original

This page is a summary of: Blended Latent Diffusion, ACM Transactions on Graphics, July 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3592450.
You can read the full text via the DOI above.

Contributors

The following have contributed to this page: