Differential Diffusion: Giving Each Pixel Its Strength

Eran Levin¹, Ohad Fried² (¹Tel Aviv University, ²Reichman University)

Differential Diffusion modifies an image according to a text prompt, and according to a map that specifies the amount of change in each region.

[Figure — Input / Map / Output: “whimsical illustration of a rainbow...”]

Abstract

Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control on the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting—the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with the current open state-of-the-art models, and validate it via both quantitative and qualitative comparisons, and a user study.
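As we read the abstract, the core mechanism can be sketched as a thresholded blend during sampling: each denoising step re-injects the (noised) input image wherever the change map falls below a threshold that decays over the course of sampling, so low-strength regions track the original longer. The sketch below is conceptual only; `add_noise` and `denoise_step` are toy stand-ins, not the paper's actual model or schedule.

```python
import numpy as np

def add_noise(x0, t, T, rng):
    """Toy forward-noising schedule: linearly blend the clean image with
    Gaussian noise (a stand-in for a real diffusion schedule)."""
    a = 1.0 - t / T
    return a * x0 + (1.0 - a) * rng.standard_normal(x0.shape)

def denoise_step(x, t, T):
    """Placeholder for one reverse step of a real diffusion model."""
    return 0.9 * x

def differential_edit(x0, change_map, T=50, seed=0):
    """Per-pixel-strength sketch: at every step, pixels whose change
    amount lies below the current (decaying) threshold are re-injected
    from the noised input. Strength 0 keeps the input pixel; strength 1
    lets the model regenerate it fully."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(x0.shape)        # start from pure noise
    for t in range(T, 0, -1):
        x = denoise_step(x, t, T)            # latent now at noise level t-1
        frozen = change_map < t / T          # threshold decays from 1 toward 0
        x = np.where(frozen, add_noise(x0, t - 1, T, rng), x)
    return x
```

Note that the thresholding happens purely at inference time, which is why the framework needs no training or fine-tuning and can wrap any diffusion sampler.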

[Figure — Input / Map / Output: “tree of life under the sea...”]

[Figure — Input / Map / Output: “palace above the clouds...”]

[Figure — Input / Map / Output: “3d depth outer space nebulae background...”]

[Figure — Input / Map / Output: “fantasy art...”]

Discrete Editing

Use Differential Diffusion to edit a picture with a different amount of change for each region.

[Figure — Input / Map / Output: “race car video game”]
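A discrete map assigns one constant strength per region. A minimal construction (the region layout and strength values below are illustrative, not taken from the paper's examples):

```python
import numpy as np

# A discrete change map: one constant strength per region.
# 0 = keep the input unchanged, 1 = regenerate fully.
h, w = 64, 64
change_map = np.zeros((h, w), dtype=np.float32)
change_map[:, : w // 3] = 0.2              # left third: mild edit
change_map[:, w // 3 : 2 * w // 3] = 0.6   # middle third: moderate edit
change_map[:, 2 * w // 3 :] = 1.0          # right third: full regeneration
```

In practice such a map would come from a user-painted mask or a segmentation, with one strength value chosen per segment.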

Continuous Editing

Apply Differential Diffusion to edit a picture with a continuous range of change amounts.

[Figure — Input / Map / Output: “painting of dream worlds...”]
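Continuous editing just means the map varies smoothly instead of piecewise. A horizontal ramp is the simplest example (again illustrative; any smooth field works):

```python
import numpy as np

# A continuous change map: strength ramps smoothly from 0 (keep the
# input) on the left edge to 1 (fully regenerate) on the right edge.
h, w = 64, 64
change_map = np.tile(np.linspace(0.0, 1.0, w, dtype=np.float32), (h, 1))
```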

Other Diffusion Models

Our framework can be ported to other diffusion models.

[Figure — Input / Map / Output: Stable Diffusion XL]

[Figure — Input / Map / Output: DeepFloyd IF]

[Figure — Input / Map / Output: Kandinsky]

The prompts are: “cow”, “feathers”, “sheepskin”.

[Figure — Input / Map / Output]

The prompt is: “very beautiful insanely detailed image of tsunami in golden spring...”. Edited with Stable Diffusion XL.

[Figure — Input / Map / Output]

The prompt is: “autumn best quality, ink painting, acrylic...”. Edited with Stable Diffusion XL.

Soft Inpaint

Our framework can be used to implement soft inpainting — the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Note the regions adjacent to the mask border.

[Figure — Input / Inpaint Mask / Output]

[Figure — Input / Inpaint Mask / Output]

[Figure — Input / Inpaint Mask / Output]

The prompts are “peacock, realism”, “Gustave Courbet”, “Camille Monet”; the softening radius is 64px.
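One plausible way to realize soft inpainting in this framework is to convert the binary inpaint mask into a continuous change map that is 1 inside the hole and decays to 0 over the softening radius (64px on this page). The linear falloff and brute-force distance computation below are our assumptions for illustration; the paper's exact falloff may differ.

```python
import numpy as np

def soften_mask(mask, radius=64):
    """Turn a binary inpaint mask into a continuous change map:
    1 inside the hole, decaying linearly to 0 over `radius` pixels
    outside it. Brute-force nearest-distance for clarity; for real
    image sizes, scipy.ndimage.distance_transform_edt is the usual
    tool. Assumes the mask has at least one nonzero pixel."""
    ys, xs = np.nonzero(mask)
    yy, xx = np.indices(mask.shape)
    # Squared distance from every pixel to every mask pixel, then min.
    d2 = (yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2
    dist = np.sqrt(d2.min(axis=-1))
    return np.clip(1.0 - dist / radius, 0.0, 1.0)
```

Feeding the softened map to the editing loop lets the model adjust a thin band around the hole, which is what makes the completion blend in seamlessly.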

Comparison

[Figure — Input / Inpaint Mask / No Softening / α-compose / Poisson / Laplace / Standard Softening / Ours]

The prompts are “Gustave Courbet”, “peacock, realism”, “Camille Monet”; the softening radius is 64px.

Strength Fans

Use Differential Diffusion to tune the exact strength of an edit by observing and comparing the effects of different strengths side by side.

[Figure — strength fans for the prompts “post-apocalyptic” and “Enigmatic abstract patterns”]
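A strength fan can be built as a single change map whose value steps up from one angular wedge to the next, so one generated image previews a whole range of edit strengths at once. The wedge layout below is our illustrative construction; the page's tool may arrange the fan differently.

```python
import numpy as np

def strength_fan_map(h, w, n_wedges=8):
    """Change map for a 'strength fan': the frame is split into angular
    wedges around the center, each assigned a successively higher
    strength from 0 to 1."""
    yy, xx = np.indices((h, w))
    ang = np.arctan2(yy - h / 2, xx - w / 2)            # [-pi, pi]
    wedge = ((ang + np.pi) / (2 * np.pi) * n_wedges).astype(int)
    wedge = np.clip(wedge, 0, n_wedges - 1)             # guard ang == pi
    return wedge / (n_wedges - 1)                       # strengths 0..1
```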

BibTeX

@misc{levin2023differential,
      title={Differential Diffusion: Giving Each Pixel Its Strength},
      author={Eran Levin and Ohad Fried},
      year={2023},
      eprint={2306.00950},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}