Adversarial diffusion distillation (ADD) is a novel training approach that efficiently distills a pretrained diffusion model into a fast image generator that can produce high-fidelity samples in just 1-4 steps.
It combines two training objectives: an adversarial loss, in which a discriminator pushes the student's samples toward the manifold of real images, and a score distillation loss, which uses a frozen pretrained diffusion model as a teacher.
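The combination of the two objectives can be sketched as a weighted sum of an adversarial term and a teacher-matching term. Below is a minimal toy illustration on flattened vectors; the module sizes, loss forms, and the weight `lam` are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
DIM = 16  # toy flattened "image" dimension

student = torch.nn.Linear(DIM, DIM)       # fast one-step generator (trainable)
teacher = torch.nn.Linear(DIM, DIM)       # stand-in for the frozen diffusion teacher
discriminator = torch.nn.Linear(DIM, 1)   # adversarial critic
for p in teacher.parameters():
    p.requires_grad_(False)               # the teacher is never updated

def add_student_loss(noise, lam=2.5):
    """Hypothetical combined ADD-style objective: adversarial + distillation."""
    x_hat = student(noise)                # student's one-step sample
    # Adversarial term: student tries to make the critic score its samples as real.
    adv = F.softplus(-discriminator(x_hat)).mean()
    # Distillation term: match the frozen teacher's output (a stand-in for
    # matching the teacher's denoising prediction).
    with torch.no_grad():
        x_teacher = teacher(noise)
    distill = F.mse_loss(x_hat, x_teacher)
    return adv + lam * distill

loss = add_student_loss(torch.randn(4, DIM))
loss.backward()  # gradients reach the student (and critic), not the frozen teacher
```

In practice the discriminator is trained with its own opposing objective in alternation; the sketch shows only the student's update.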
Enables real-time high-resolution image generation, unlocking 1024x1024 images in a fraction of a second on the latest GPU hardware.
Significantly outperforms GANs and other distillation techniques in the low-step regime of 1-2 sampling steps.
Matches or exceeds the sample quality of state-of-the-art diffusion models such as SD and SDXL with only 4 sampling steps.
Retains the ability to iteratively refine samples by taking more sampling steps.
Brings high-fidelity foundation model capabilities to real-time applications.
Opens up new creative possibilities requiring quick iteration.
Could expand access to capable generative models by reducing sampling compute requirements.
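The iterative refinement property means the same student can be run for one step or several: each extra step re-noises the current sample and denoises it again. A hypothetical sketch of that loop, with an invented stand-in generator and illustrative noise scales:

```python
import torch

torch.manual_seed(0)
DIM = 16
student = torch.nn.Linear(DIM, DIM)  # stand-in for the distilled one-step generator

@torch.no_grad()
def sample(num_steps, noise_scales=(1.0, 0.6, 0.3, 0.1)):
    """Hypothetical few-step sampler: denoise, re-noise, repeat."""
    x = torch.randn(1, DIM)          # start from pure noise
    for i in range(num_steps):
        x = student(x)               # one denoising pass through the student
        if i < num_steps - 1:        # re-noise before every pass except the last
            x = x + noise_scales[i + 1] * torch.randn_like(x)
    return x

one_step = sample(1)   # fastest: a single forward pass
four_step = sample(4)  # refined: three extra re-noise/denoise rounds
```

The choice of per-step noise scales is a free design parameter here; the key point is that more steps trade latency for quality without retraining.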
In summary, adversarial diffusion distillation enables efficient distillation of large diffusion models into extremely fast yet high-quality few-step image generators, supporting real-time synthesis while retaining iterative refinability.