This article walks through setting up a local environment on Apple Silicon (M1/M2/M3) Macs to run SDXL Turbo for near real-time text-to-image and image-to-image generation.
SDXL Turbo uses adversarial diffusion distillation to generate images in a single denoising step, which makes it fast enough for interactive creative workflows and applications. By following the steps here, you can get SDXL Turbo running locally in just a few minutes.
With the environment set up correctly, you'll be ready to start experimenting with lightning-fast image generation using SDXL Turbo right on your own Mac!
To set up an environment for running SDXL Turbo locally, we will need to go through a few configuration steps:
First, create a virtual environment to install the required libraries isolated from the base system:
python3 -m venv env
Activate the virtual env:
source env/bin/activate
Install/upgrade pip and then use it to install libraries:
pip install --upgrade pip
pip install jupyter notebook
pip install torch diffusers transformers accelerate --upgrade
With the libraries in place, launch a notebook environment such as Jupyter to run the code.
You can launch Jupyter from the terminal/command line:
jupyter notebook
Or via an IDE like VSCode.
The notebook will serve as your runtime environment for executing the SDXL Turbo scripts.
Once running, you can begin writing and running code cells to try out text-to-image or image-to-image generation powered by this cutting-edge AI system!
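Before loading any models, it can help to run a quick sanity check (an optional extra, not part of the original steps) to confirm that your PyTorch build can see the Apple Silicon GPU through the MPS backend:
import torch
# Both should print True on an M1/M2/M3 Mac with a recent PyTorch build
print(torch.backends.mps.is_built())      # PyTorch was compiled with MPS support
print(torch.backends.mps.is_available())  # the MPS device is usable on this machine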
With the environment set up and notebook running, we can now execute SDXL Turbo scripts.
Let's start with a text-to-image generation example:
from diffusers import AutoPipelineForText2Image
import torch

# Load the SDXL Turbo pipeline; float32 is the safe default on Apple Silicon,
# and the default (non-variant) weights are already fp32, so no variant is needed
pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float32)
pipe.to("mps")

prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."

# SDXL Turbo is distilled for single-step sampling, and guidance is disabled
image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
The key aspects are:
- pipe.to("mps") runs the pipeline on the Apple Silicon GPU via PyTorch's Metal Performance Shaders (MPS) backend.
- num_inference_steps=1 uses SDXL Turbo's distilled single-step sampling.
- guidance_scale=0.0 disables classifier-free guidance, which the model was trained without.
In a single denoising step, this generates a 512x512 image matching the text description, typically within a few seconds on Apple Silicon!
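The result is a standard PIL image, so you can render it inline in the notebook or write it to disk (the filename below is just an example):
# Render the image inline by making it the last expression in a cell, e.g. just `image`
image.save("raccoon_priest.png")  # or save it to disk for later use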
We can also run image-to-image workflows:
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

# Load the image-to-image pipeline and move it to the Apple Silicon GPU
pipe = AutoPipelineForImage2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float32)
pipe.to("mps")

# Download a sample image and resize it to 512x512
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png").resize((512, 512))

prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"

# num_inference_steps * strength must be >= 1, so strength=0.5 needs at least 2 steps
image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5, guidance_scale=0.0).images[0]
Here we:
- load a source image from a URL and resize it to 512x512,
- pass both the prompt and the source image to the pipeline,
- set strength=0.5 with 2 inference steps, keeping num_inference_steps * strength at or above 1.
The model modifies the input image to match the prompt in just two denoising steps, again running in a matter of seconds on Apple Silicon!
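If you want reproducible results while iterating on prompts, both pipelines accept a generator argument. A minimal sketch, reusing the image-to-image pipeline from above (the seed value is arbitrary):
import torch

# A seeded generator makes runs repeatable; the same seed reproduces the same image
generator = torch.Generator().manual_seed(42)
image = pipe(prompt, image=init_image, num_inference_steps=2, strength=0.5,
             guidance_scale=0.0, generator=generator).images[0]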
You now have the basis for building all kinds of creative workflows and applications with SDXL Turbo!
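As a starting point, here is a minimal sketch that reuses the text-to-image pipeline from above to generate a small batch of variations; the prompts and output filenames are only examples:
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float32)
pipe.to("mps")

# Example prompts -- swap in your own ideas
prompts = [
    "a watercolor painting of a lighthouse at dawn",
    "a low-poly 3d render of a mountain village",
    "a macro photo of a dewdrop on a leaf",
]

for i, prompt in enumerate(prompts):
    # Single-step generation with guidance disabled, as in the earlier example
    image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
    image.save(f"sdxl_turbo_{i}.png")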
In this article, we covered how to set up and run SDXL Turbo locally on a Mac for ultra-fast image generation, all on your own hardware without the need for cloud resources.
Reference links:
https://github.com/Stability-AI/generative-models
https://huggingface.co/stabilityai/sdxl-turbo
https://stability.ai/research/adversarial-diffusion-distillation