Crates.io | pyke-diffusers |
lib.rs | pyke-diffusers |
version | 0.2.0 |
source | src |
created_at | 2022-12-16 03:24:15.148116 |
updated_at | 2023-01-10 17:33:18.285482 |
description | modular Rust library for optimized Stable Diffusion inference 🔮 |
homepage | |
repository | https://github.com/pykeio/diffusers |
max_upload_size | |
id | 738434 |
size | 255,367 |
pyke Diffusers is a modular Rust library for pretrained diffusion model inference to generate images, videos, or audio, using ONNX Runtime as a backend for extremely optimized generation on both CPU & GPU.
Prompt weighting is supported in text prompts, e.g.: a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).
You'll need Rust v1.62.1+ to use pyke Diffusers.
Only generic CPU, CUDA, and TensorRT have prebuilt binaries available. Other execution providers will require you to manually build them; see the ONNX Runtime docs for more info. Additionally, you'll need to make ort link to your custom-built binaries.
Note: By default, the LMS scheduler is not enabled, and this section can simply be skipped.
If you plan to enable the all-schedulers or scheduler-lms feature, you will need to install binaries for the GNU Scientific Library. See the installation instructions for rust-GSL to set up GSL.
[dependencies]
pyke-diffusers = "0.1"
# if you'd like to use CUDA:
pyke-diffusers = { version = "0.1", features = [ "ort-cuda" ] }
The default features enable some commonly used schedulers and pipelines.
use std::sync::Arc;

use pyke_diffusers::{
	Environment, EulerDiscreteScheduler, SchedulerOptimizedDefaults, StableDiffusionOptions, StableDiffusionPipeline,
	StableDiffusionTxt2ImgOptions
};

// Create the ONNX Runtime environment shared by all sessions.
let environment = Arc::new(Environment::builder().build()?);
// Euler scheduler with defaults tuned for Stable Diffusion v1.
let mut scheduler = EulerDiscreteScheduler::stable_diffusion_v1_optimized_default()?;
let pipeline = StableDiffusionPipeline::new(&environment, "./stable-diffusion-v1-5", &StableDiffusionOptions::default())?;

// Generate an image from a text prompt and save the first result.
let imgs = pipeline.txt2img("photo of a red fox", &mut scheduler, &StableDiffusionTxt2ImgOptions::default())?;
imgs[0].clone().into_rgb8().save("result.png")?;
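The weighted prompt syntax shown in the overview can be used the same way; this is just a sketch reusing the pipeline and scheduler from the example above, and it assumes the weighting syntax is parsed directly from the prompt string:

// Hypothetical usage: the emphasis/weighting syntax goes inside the prompt text itself.
let imgs = pipeline.txt2img(
	"a (((house:1.3)) [on] a (hill:0.5), sun, (((sky)))",
	&mut scheduler,
	&StableDiffusionTxt2ImgOptions::default()
)?;
imgs[0].clone().into_rgb8().save("weighted.png")?;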
pyke-diffusers includes an interactive Stable Diffusion demo. Run it with:
$ cargo run --example stable-diffusion-interactive --features ort-cuda -- ~/path/to/stable-diffusion/
See examples/ for more examples and the docs for more detailed information.
pyke Diffusers currently supports Stable Diffusion v1 and v2, as well as their derivatives.
To convert a model from the HuggingFace diffusers format:
python3 -m pip install torch --extra-index-url https://download.pytorch.org/whl/cu116
python3 -m pip install -r requirements.txt
huggingface-cli login
- this can be skipped if you have the model on disk

Then, use scripts/hf2pyke.py to convert the model:
python3 scripts/hf2pyke.py runwayml/stable-diffusion-v1-5 ~/pyke-diffusers-sd15/
python3 scripts/hf2pyke.py ~/stable-diffusion-v1-5/ ~/pyke-diffusers-sd15/
python3 scripts/hf2pyke.py --fp16 runwayml/stable-diffusion-v1-5@fp16 ~/pyke-diffusers-sd15-fp16/
python3 scripts/hf2pyke.py --fp16 ~/stable-diffusion-v1-5-fp16/ ~/pyke-diffusers-sd15-fp16/
float16 models are faster on some GPUs and use less memory. However, if you are using float16 models for GPU inference, they must be converted on the hardware they will be run on, due to an ONNX Runtime bug. CPUs running float16 models should not have this issue.
hf2pyke supports a few options to improve performance or ORT execution provider compatibility. See python3 scripts/hf2pyke.py --help.
When running the examples in this repo on Windows, you'll need to copy the onnxruntime* dylibs from target/debug/ to target/debug/examples/ on first run. You'll also need to copy the dylibs to target/debug/deps/ if your project uses pyke Diffusers in a Cargo test.
CUDA is the only alternative execution provider available with no setup required. Simply enable pyke Diffusers' ort-cuda feature and use DiffusionDevice::CUDA; see the docs or the stable-diffusion example for more info. You may need to rebuild your project for ort to copy the libraries again.
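As a rough sketch of what selecting CUDA looks like in code (the devices field, DiffusionDeviceControl, and the exact shape of the DiffusionDevice::CUDA variant are assumptions here; check the crate docs for the authoritative API):

use std::sync::Arc;
use pyke_diffusers::{
	DiffusionDevice, DiffusionDeviceControl, Environment, StableDiffusionOptions, StableDiffusionPipeline
};

let environment = Arc::new(Environment::builder().build()?);
// Assumed layout: place the UNet (the heaviest model) on the first CUDA device and
// leave the remaining models on their default devices.
let pipeline = StableDiffusionPipeline::new(&environment, "./stable-diffusion-v1-5", &StableDiffusionOptions {
	devices: DiffusionDeviceControl {
		unet: DiffusionDevice::CUDA(0, None),
		..Default::default()
	},
	..Default::default()
})?;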
For other EPs like DirectML or oneDNN, you'll need to build ONNX Runtime from source. See ort's notes on execution providers.
Lower-resolution generations use less memory.
A StableDiffusionMemoryOptimizedPipeline exists for environments with low memory. This pipeline removes the safety checker, loads models only when they are required, and unloads them immediately afterwards. This will heavily impact performance and should only be used in extreme cases.
In extremely constrained environments (e.g. <= 4 GB of RAM), it is also possible to produce a quantized int8 model. The int8 model's quality is heavily impacted, but it is faster and less memory-intensive on CPUs.
To convert an int8 model:
$ python3 scripts/hf2pyke.py --quantize=ut ~/stable-diffusion-v1-5/ ~/pyke-diffusers-sd15-quantized/
--quantize=ut will quantize only the UNet and text encoder, using uint8 mode for the best quality and performance. You can choose to convert the other models using the following format: u for UNet, v for VAE, and t for text encoder. Typically, uint8 is higher quality and faster, but you can play around with the settings to see if quality or speed improves.
A combination of 256x256 image generation via StableDiffusionMemoryOptimizedPipeline with a uint8 UNet requires only 1.3 GB of memory.
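A sketch of that combination, assuming StableDiffusionMemoryOptimizedPipeline shares StableDiffusionPipeline's constructor and txt2img signature, and that StableDiffusionTxt2ImgOptions exposes width/height fields (the model path and field names below are illustrative; check the docs for the exact API):

use std::sync::Arc;
use pyke_diffusers::{
	Environment, EulerDiscreteScheduler, SchedulerOptimizedDefaults, StableDiffusionMemoryOptimizedPipeline,
	StableDiffusionOptions, StableDiffusionTxt2ImgOptions
};

let environment = Arc::new(Environment::builder().build()?);
// Point the pipeline at a model converted with a uint8-quantized UNet; the path is illustrative.
let pipeline = StableDiffusionMemoryOptimizedPipeline::new(&environment, "./pyke-diffusers-sd15-quantized", &StableDiffusionOptions::default())?;
let mut scheduler = EulerDiscreteScheduler::stable_diffusion_v1_optimized_default()?;

// Generating at 256x256 keeps memory usage to a minimum; width/height field names are assumptions.
let imgs = pipeline.txt2img("photo of a red fox", &mut scheduler, &StableDiffusionTxt2ImgOptions {
	width: 256,
	height: 256,
	..Default::default()
})?;
imgs[0].clone().into_rgb8().save("result.png")?;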