Image generation with Stable Diffusion XL and OpenVINO

This Jupyter notebook can be launched after a local installation only.


Stable Diffusion XL (SDXL) is an image generation model tailored towards more photorealistic outputs, with more detailed imagery and composition than previous Stable Diffusion models, including Stable Diffusion 2.1.

With Stable Diffusion XL you can now make more realistic images with improved face generation, produce legible text within images, and create more aesthetically pleasing art using shorter prompts.


SDXL consists of an ensemble-of-experts pipeline for latent diffusion: in the first step, the base model generates (noisy) latents, which are then further processed by a refinement model specialized for the final denoising steps. Note that the base model can be used as a standalone module or in a two-stage pipeline as follows: first, the base model generates latents of the desired output size; in the second step, a specialized high-resolution model applies a technique called SDEdit (also known as "image-to-image") to the latents generated in the first step, using the same prompt.
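
As an illustration, here is a minimal sketch of this two-stage flow using the Diffusers API directly (assuming the original PyTorch checkpoints and sufficient memory; the rest of this tutorial uses the OpenVINO counterparts instead):

from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Stage 1: the base model produces latents instead of a decoded image
base = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
latents = base(prompt="cute cat", output_type="latent").images

# Stage 2: the refiner applies SDEdit ("image-to-image") to those latents, using the same prompt
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0")
image = refiner(prompt="cute cat", image=latents).images[0]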

Compared to previous versions of Stable Diffusion, SDXL uses a three times larger UNet backbone: the increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. The authors designed multiple novel conditioning schemes, trained SDXL on multiple aspect ratios, and introduced a refinement model that improves the visual fidelity of SDXL samples using a post-hoc image-to-image technique. Testing shows drastically improved performance over previous versions of Stable Diffusion, with results competitive with black-box state-of-the-art image generators.

In this tutorial, we consider how to run the SDXL model using OpenVINO.

We will use a pre-trained model from the Hugging Face Diffusers library. To simplify the user experience, the Hugging Face Optimum Intel library is used to convert the models to OpenVINO™ IR format.

The tutorial consists of the following steps:

- Install prerequisites
- Convert and run the SDXL Base model
- Run the Text2Image generation pipeline, including an interactive demo
- Run the Image2Image generation pipeline
- Convert the SDXL Refiner model and run Text2Image generation with refinement

Note: Some demonstrated models can require at least 64GB RAM for conversion and running.


Install prerequisites

%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu "torch>=2.1" "diffusers>=0.18.0" "invisible-watermark>=0.2.0" "transformers>=4.33.0" "accelerate" "onnx" "peft==0.6.2"
%pip install -q "git+https://github.com/huggingface/optimum-intel.git"
%pip install -q "openvino>=2023.1.0" "gradio>=4.19" "nncf>=2.9.0"

SDXL Base model

We will start with the base model, which is responsible for generating images of the desired output size. stable-diffusion-xl-base-1.0 is available for download from the Hugging Face Hub and already provides a ready-to-use model in OpenVINO format compatible with Optimum Intel.

To load an OpenVINO model and run inference with OpenVINO Runtime, replace the diffusers StableDiffusionXLPipeline with the Optimum OVStableDiffusionXLPipeline. If you want to load a PyTorch model and convert it to the OpenVINO format on the fly, set export=True.

You can save the model on disk using the save_pretrained method.
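
For example, on-the-fly conversion and saving could look like the following (a minimal sketch mirroring the pattern used later in this tutorial; the directory name is illustrative):

from optimum.intel.openvino import OVStableDiffusionXLPipeline

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
pipe = OVStableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", export=True, compile=False
)
pipe.save_pretrained("openvino-sd-xl-base-1.0")  # reload later without re-exporting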

from pathlib import Path
from optimum.intel.openvino import OVStableDiffusionXLPipeline
import gc

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
model_dir = Path("openvino-sd-xl-base-1.0")

Select inference device for the SDXL Base model

Select the device from the dropdown list for running inference using OpenVINO.

import ipywidgets as widgets
import openvino as ov

core = ov.Core()

device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value="AUTO",
    description="Device:",
    disabled=False,
)

device
Dropdown(description='Device:', index=4, options=('CPU', 'GPU.0', 'GPU.1', 'GPU.2', 'AUTO'), value='AUTO')

Please select below whether you would like to use weight compression to reduce the memory footprint. Optimum Intel supports weight compression via NNCF out of the box. For 8-bit compression, pass the quantization_config=OVWeightQuantizationConfig(bits=8, ...) argument to the from_pretrained() method; the config specifies the number of bits and other compression parameters.

compress_weights = widgets.Checkbox(
    description="Apply weight compression",
    value=True,
)

compress_weights
Checkbox(value=True, description='Apply weight compression')
def get_quantization_config(compress_weights):
    quantization_config = None
    if compress_weights.value:
        from optimum.intel import OVWeightQuantizationConfig

        quantization_config = OVWeightQuantizationConfig(bits=8)
    return quantization_config


quantization_config = get_quantization_config(compress_weights)
if not model_dir.exists():
    # download the OpenVINO model (optionally compressing weights) and save it locally
    text2image_pipe = OVStableDiffusionXLPipeline.from_pretrained(model_id, compile=False, device=device.value, quantization_config=quantization_config)
    text2image_pipe.half()  # keep weights in FP16 to reduce the size on disk
    text2image_pipe.save_pretrained(model_dir)
    text2image_pipe.compile()
else:
    # reuse the previously saved model
    text2image_pipe = OVStableDiffusionXLPipeline.from_pretrained(model_dir, device=device.value)
INFO:nncf:Statistics of the bitwidth distribution:
+--------------+---------------------------+-----------------------------------+
| Num bits (N) | % all parameters (layers) |    % ratio-defining parameters    |
|              |                           |             (layers)              |
+==============+===========================+===================================+
| 8            | 100% (794 / 794)          | 100% (794 / 794)                  |
+--------------+---------------------------+-----------------------------------+
INFO:nncf:Statistics of the bitwidth distribution:
+--------------+---------------------------+-----------------------------------+
| Num bits (N) | % all parameters (layers) |    % ratio-defining parameters    |
|              |                           |             (layers)              |
+==============+===========================+===================================+
| 8            | 100% (32 / 32)            | 100% (32 / 32)                    |
+--------------+---------------------------+-----------------------------------+
INFO:nncf:Statistics of the bitwidth distribution:
+--------------+---------------------------+-----------------------------------+
| Num bits (N) | % all parameters (layers) |    % ratio-defining parameters    |
|              |                           |             (layers)              |
+==============+===========================+===================================+
| 8            | 100% (40 / 40)            | 100% (40 / 40)                    |
+--------------+---------------------------+-----------------------------------+
INFO:nncf:Statistics of the bitwidth distribution:
+--------------+---------------------------+-----------------------------------+
| Num bits (N) | % all parameters (layers) |    % ratio-defining parameters    |
|              |                           |             (layers)              |
+==============+===========================+===================================+
| 8            | 100% (74 / 74)            | 100% (74 / 74)                    |
+--------------+---------------------------+-----------------------------------+
INFO:nncf:Statistics of the bitwidth distribution:
+--------------+---------------------------+-----------------------------------+
| Num bits (N) | % all parameters (layers) |    % ratio-defining parameters    |
|              |                           |             (layers)              |
+==============+===========================+===================================+
| 8            | 100% (195 / 195)          | 100% (195 / 195)                  |
+--------------+---------------------------+-----------------------------------+
Compiling the vae_decoder to AUTO ...
Compiling the unet to AUTO ...
Compiling the vae_encoder to AUTO ...
Compiling the text_encoder to AUTO ...
Compiling the text_encoder_2 to AUTO ...

Run Text2Image generation pipeline

Now, we can run the model to generate images from text prompts. To speed up evaluation and reduce the required memory, we decrease num_inference_steps and the image size (using height and width). You can modify them to suit your needs, depending on the target hardware. We also specify a generator parameter based on a NumPy random state with a specific seed for reproducibility.

import numpy as np

prompt = "cute cat 4k, high-res, masterpiece, best quality, soft lighting, dynamic angle"
image = text2image_pipe(
    prompt,
    num_inference_steps=15,
    height=512,
    width=512,
    generator=np.random.RandomState(314),
).images[0]
image.save("cat.png")
image
0%|          | 0/15 [00:00<?, ?it/s]
../_images/stable-diffusion-xl-with-output_13_1.png

Text2Image Generation Interactive Demo

import gradio as gr

if text2image_pipe is None:
    text2image_pipe = OVStableDiffusionXLPipeline.from_pretrained(model_dir, device=device.value)

prompt = "cute cat 4k, high-res, masterpiece, best quality, soft lighting, dynamic angle"


def generate_from_text(text, seed, num_steps):
    result = text2image_pipe(
        text,
        num_inference_steps=num_steps,
        generator=np.random.RandomState(seed),
        height=512,
        width=512,
    ).images[0]
    return result


with gr.Blocks() as demo:
    with gr.Column():
        positive_input = gr.Textbox(label="Text prompt")
        with gr.Row():
            seed_input = gr.Number(precision=0, label="Seed", value=42, minimum=0)
            steps_input = gr.Slider(label="Steps", value=10)
            btn = gr.Button()
        out = gr.Image(label="Result", type="pil", width=512)
        btn.click(generate_from_text, [positive_input, seed_input, steps_input], out)
        gr.Examples(
            [
                [prompt, 999, 20],
                [
                    "underwater world coral reef, colorful jellyfish, 35mm, cinematic lighting, shallow depth of field,  ultra quality, masterpiece, realistic",
                    89,
                    20,
                ],
                [
                    "a photo realistic happy white poodle dog ​​playing in the grass, extremely detailed, high res, 8k, masterpiece, dynamic angle",
                    1569,
                    15,
                ],
                [
                    "Astronaut on Mars watching sunset, best quality, cinematic effects,",
                    65245,
                    12,
                ],
                [
                    "Black and white street photography of a rainy night in New York, reflections on wet pavement",
                    48199,
                    10,
                ],
            ],
            [positive_input, seed_input, steps_input],
        )

# if you are launching remotely, specify server_name and server_port
# demo.launch(server_name='your server name', server_port='server port in int')
# Read more in the docs: https://gradio.app/docs/
# if you want to create a public link for sharing the demo, add share=True
demo.launch()
demo.close()
text2image_pipe = None
gc.collect();

Run Image2Image generation pipeline

We can reuse the already converted model to run the Image2Image generation pipeline. For that, we replace OVStableDiffusionXLPipeline with OVStableDiffusionXLImg2ImgPipeline.

Select inference device for the SDXL Image2Image pipeline

Select the device from the dropdown list for running inference using OpenVINO.

device
Dropdown(description='Device:', index=4, options=('CPU', 'GPU.0', 'GPU.1', 'GPU.2', 'AUTO'), value='AUTO')
from optimum.intel import OVStableDiffusionXLImg2ImgPipeline

image2image_pipe = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(model_dir, device=device.value)
Compiling the vae_decoder to AUTO ...
Compiling the unet to AUTO ...
Compiling the vae_encoder to AUTO ...
Compiling the text_encoder_2 to AUTO ...
Compiling the text_encoder to AUTO ...
photo_prompt = "professional photo of a cat, extremely detailed, hyper realistic, best quality, full hd"
photo_image = image2image_pipe(
    photo_prompt,
    image=image,
    num_inference_steps=25,
    generator=np.random.RandomState(356),
).images[0]
photo_image.save("photo_cat.png")
photo_image
0%|          | 0/7 [00:00<?, ?it/s]
../_images/stable-diffusion-xl-with-output_21_1.png
import gradio as gr
from diffusers.utils import load_image
import numpy as np


load_image("https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/sd_xl/castle_friedrich.png").resize((512, 512)).save(
    "castle_friedrich.png"
)


if image2image_pipe is None:
    image2image_pipe = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(model_dir)


def generate_from_image(text, image, seed, num_steps):
    result = image2image_pipe(
        text,
        image=image,
        num_inference_steps=num_steps,
        generator=np.random.RandomState(seed),
    ).images[0]
    return result


with gr.Blocks() as demo:
    with gr.Column():
        positive_input = gr.Textbox(label="Text prompt")
        with gr.Row():
            seed_input = gr.Number(precision=0, label="Seed", value=42, minimum=0)
            steps_input = gr.Slider(label="Steps", value=10)
            btn = gr.Button()
        with gr.Row():
            i2i_input = gr.Image(label="Input image", type="pil")
            out = gr.Image(label="Result", type="pil", width=512)
        btn.click(
            generate_from_image,
            [positive_input, i2i_input, seed_input, steps_input],
            out,
        )
        gr.Examples(
            [
                ["amazing landscape from legends", "castle_friedrich.png", 971, 60],
                [
                    "Masterpiece of watercolor painting in Van Gogh style",
                    "cat.png",
                    37890,
                    40,
                ],
            ],
            [positive_input, i2i_input, seed_input, steps_input],
        )

# if you are launching remotely, specify server_name and server_port
# demo.launch(server_name='your server name', server_port='server port in int')
# Read more in the docs: https://gradio.app/docs/
# if you want to create a public link for sharing the demo, add share=True
demo.launch()
demo.close()
del image2image_pipe
gc.collect()

SDXL Refiner model

As discussed above, Stable Diffusion XL can be used in a two-stage approach: first, the base model generates latents of the desired output size; in the second step, a specialized high-resolution model refines the latents generated in the first step, using the same prompt. The Stable Diffusion XL Refiner model is designed to transform regular images into stunning masterpieces with the help of a user-specified text prompt. It can be used to improve the quality of images generated by the Stable Diffusion XL Base model: the refiner accepts the latents produced by the base model together with a text prompt and improves the generated image.

Select whether you would like to use weight compression to reduce the memory footprint.

compress_weights
quantization_config = get_quantization_config(compress_weights)
from optimum.intel import (
    OVStableDiffusionXLImg2ImgPipeline,
    OVStableDiffusionXLPipeline,
)
from pathlib import Path

refiner_model_id = "stabilityai/stable-diffusion-xl-refiner-1.0"
refiner_model_dir = Path("openvino-sd-xl-refiner-1.0")


if not refiner_model_dir.exists():
    refiner = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(refiner_model_id, export=True, compile=False, quantization_config=quantization_config)
    refiner.half()
    refiner.save_pretrained(refiner_model_dir)
    del refiner
    gc.collect()

Select inference device

Select the device from the dropdown list for running inference using OpenVINO.

device
Dropdown(description='Device:', index=4, options=('CPU', 'GPU.0', 'GPU.1', 'GPU.2', 'AUTO'), value='AUTO')

Run Text2Image generation with Refinement

import numpy as np
import gc

model_dir = Path("openvino-sd-xl-base-1.0")
base = OVStableDiffusionXLPipeline.from_pretrained(model_dir, device=device.value)
prompt = "cute cat 4k, high-res, masterpiece, best quality, soft lighting, dynamic angle"
latents = base(
    prompt,
    num_inference_steps=15,
    height=512,
    width=512,
    generator=np.random.RandomState(314),
    output_type="latent",
).images[0]

del base
gc.collect()
Compiling the vae_decoder to AUTO ...
Compiling the unet to AUTO ...
Compiling the text_encoder to AUTO ...
Compiling the text_encoder_2 to AUTO ...
Compiling the vae_encoder to AUTO ...
0%|          | 0/15 [00:00<?, ?it/s]
294
refiner = OVStableDiffusionXLImg2ImgPipeline.from_pretrained(refiner_model_dir, device=device.value)
Compiling the vae_decoder to AUTO ...
Compiling the unet to AUTO ...
Compiling the text_encoder_2 to AUTO ...
Compiling the vae_encoder to AUTO ...
image = refiner(
    prompt=prompt,
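    # latents from the base model are in NCHW layout; transpose to NHWC, which the pipeline expects for image input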
    image=np.transpose(latents[None, :], (0, 2, 3, 1)),
    num_inference_steps=15,
    generator=np.random.RandomState(314),
).images[0]
image.save("cat_refined.png")

image
0%|          | 0/4 [00:00<?, ?it/s]
../_images/stable-diffusion-xl-with-output_35_1.png