[Teaser gallery: Input / Map / Output triples for the prompts “tree of life under the sea...”, “palace above the clouds...”, “3d depth outer space nebulae background...”, “fantasy art...”, and “whimsical illustration of a rainbow...”]
Diffusion models have revolutionized image generation and editing, producing state-of-the-art results in conditioned and unconditioned image synthesis. While current techniques enable user control over the degree of change in an image edit, the controllability is limited to global changes over an entire edited region. This paper introduces a novel framework that enables customization of the amount of change per pixel or per image region. Our framework can be integrated into any existing diffusion model, enhancing it with this capability. Such granular control over the quantity of change opens up a diverse array of new editing capabilities, such as control of the extent to which individual objects are modified, or the ability to introduce gradual spatial changes. Furthermore, we showcase the framework's effectiveness in soft-inpainting: the completion of portions of an image while subtly adjusting the surrounding areas to ensure seamless integration. Additionally, we introduce a new tool for exploring the effects of different change quantities. Our framework operates solely during inference, requiring no model training or fine-tuning. We demonstrate our method with current open state-of-the-art models, and validate it via quantitative and qualitative comparisons as well as a user study.
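The core ingredient is a per-pixel change map fed in at inference time. The sketch below shows one plausible way to realize this (it is our reading of the abstract, not the authors' reference implementation; the `unet`, `scheduler`, and threshold rule follow diffusers-style conventions and are assumptions): denoise from pure noise, and at every step overwrite the pixels whose strength has not yet "kicked in" with the forward-noised input.

```python
import torch

def differential_denoise(unet, scheduler, x0_latent, change_map, prompt_emb,
                         num_steps=50):
    """Sketch of per-pixel strength denoising (not the reference code).

    change_map: tensor broadcastable to the latent, values in [0, 1];
    1.0 = regenerate the pixel fully, 0.0 = keep the input pixel as is.
    """
    scheduler.set_timesteps(num_steps)
    latents = torch.randn_like(x0_latent)  # start as in full-strength img2img
    for i, t in enumerate(scheduler.timesteps):
        remaining = 1.0 - i / num_steps  # fraction of the process still ahead
        # A pixel with strength s participates only in the last s-fraction of
        # the steps; until then, re-inject the noised original at this level.
        frozen = change_map < remaining
        noised_input = scheduler.add_noise(
            x0_latent, torch.randn_like(x0_latent), t)
        latents = torch.where(frozen, noised_input, latents)
        noise_pred = unet(latents, t, encoder_hidden_states=prompt_emb).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```

Because the map only gates which latents get re-injected at each step, the loop works with any denoiser and scheduler pair, which matches the claim that the framework plugs into existing diffusion models without training.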
Use Differential Diffusion to edit a picture with a different amount of change for each region.
[Regional editing example: Input / Map / Output. Prompt: “race car video game”]
Apply Differential Diffusion to edit a picture with a continuous range of change amounts.
[Continuous-map example: Input / Map / Output. Prompt: “painting of dream worlds...”]
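Such a continuous map is simply a grayscale image whose pixel values encode per-pixel strength. As an illustration (hypothetical sizes), a plain horizontal gradient leaves the left edge untouched and fully regenerates the right edge:

```python
import numpy as np

# A 512x512 map running from "keep" (0.0) at the left edge
# to "fully regenerate" (1.0) at the right edge.
height, width = 512, 512
change_map = np.tile(np.linspace(0.0, 1.0, width, dtype=np.float32),
                     (height, 1))
```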
[Editing with different backbones: Input / Map / Output triples for Stable Diffusion XL, DeepFloyd IF, and Kandinsky. The prompts are: “cow”, “feathers”, “sheepskin”.]
[Input / Map / Output. The prompt is: “very beautiful insanely detailed image of tsunami in golden spring...”. Edited with Stable Diffusion XL.]
[Input / Map / Output. The prompt is: “autumn best quality, ink painting, acrylic...”. Edited with Stable Diffusion XL.]
[Soft-inpainting results: three Input / Inpaint Mask / Output triples. The prompts are “peacock, realism”, “Gustave Courbet”, “Camille Monet”; the softening radius is 64px.]
[Softening comparison: Input / Inpaint Mask / No Softening / α-compose / Poisson / Laplace / Standard Softening / Ours. The prompts are “Gustave Courbet”, “peacock, realism”, “Camille Monet”; the softening radius is 64px.]
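For soft-inpainting, the binary inpaint mask must first become a graded change map. One simple way to do this (an illustrative sketch; the softening used on this page beyond the stated 64px radius is not specified) is a distance-based ramp that decays from full strength inside the hole to zero outside it:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def soften_mask(binary_mask: np.ndarray, radius: int = 64) -> np.ndarray:
    """Turn a {0, 1} inpaint mask into a [0, 1] change map.

    Inside the hole the strength is 1.0; outside, it decays linearly
    to 0.0 over `radius` pixels, so the surroundings are adjusted
    subtly rather than left untouched.
    """
    # Distance (in pixels) from each outside pixel to the nearest hole pixel.
    dist_outside = distance_transform_edt(binary_mask == 0)
    return 1.0 - np.clip(dist_outside / radius, 0.0, 1.0)
```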
Use Differential Diffusion to tune the exact strength you want to apply in an edit by observing and comparing the effects of different strengths.
[Strength-comparison examples for the prompts “post-apocalyptic” and “Enigmatic abstract patterns”.]
@misc{levin2023differential,
title={Differential Diffusion: Giving Each Pixel Its Strength},
author={Eran Levin and Ohad Fried},
year={2023},
eprint={2306.00950},
archivePrefix={arXiv},
primaryClass={cs.CV}
}