bevy_ort 🪨

a bevy plugin for the ort library

> person mask: modnet inference example

capabilities

  • load ONNX models as ORT session assets
  • initialize ORT with default execution providers
  • modnet bevy image <-> ort tensor IO (with feature modnet; see the sketch after this list)
  • batched modnet preprocessing
  • compute task pool inference scheduling
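
a minimal sketch of the modnet image <-> tensor IO, assuming the `modnet` feature exposes a `modnet_inference`-style helper (the name, signature, and the `ort` import here are assumptions; check the `models::modnet` module for the exact API):

use bevy::prelude::*;
use bevy_ort::models::modnet::modnet_inference;  // assumed helper name/signature
use ort::Session;  // assumes ort is available as a direct dependency

// illustrative only: one photograph in, one single-channel matte out.
// the helper is assumed to convert the bevy Image into a batched input
// tensor, run the session, and convert the output back to a luma Image.
fn person_mask(session: &Session, photo: &Image) -> Option<Image> {
    // the optional (width, height) caps the inference resolution
    modnet_inference(session, &[photo], Some((512, 512))).pop()
}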

models

  • lightglue (feature matching)
  • modnet (photographic portrait matting)
  • yolo_v8 (object detection)
  • flame (parametric head model)

library usage

use bevy::prelude::*;

use bevy_ort::{
    BevyOrtPlugin,
    models::flame::{
        FlameInput,
        FlameOutput,
        Flame,
        FlamePlugin,
    },
};


fn main() {
    App::new()
        .add_plugins((
            DefaultPlugins,
            BevyOrtPlugin,
            FlamePlugin,
        ))
        .add_systems(Startup, load_flame)
        .add_systems(Startup, setup)
        .add_systems(Update, on_flame_output)
        .run();
}


// queue the flame onnx model; the plugin loads it as an ORT session asset
fn load_flame(
    asset_server: Res<AssetServer>,
    mut flame: ResMut<Flame>,
) {
    flame.onnx = asset_server.load("models/flame.onnx");
}


fn setup(
    mut commands: Commands,
) {
    commands.spawn(FlameInput::default());
    commands.spawn(Camera3dBundle::default());
}


// marker component so each FlameOutput entity is only handled once
#[derive(Debug, Component, Reflect)]
struct HandledFlameOutput;

fn on_flame_output(
    mut commands: Commands,
    flame_outputs: Query<
        (
            Entity,
            &FlameOutput,
        ),
        Without<HandledFlameOutput>,
    >,
) {
    for (entity, flame_output) in flame_outputs.iter() {
        commands.entity(entity)
            .insert(HandledFlameOutput);

        println!("{:?}", flame_output);
    }
}

run the person segmentation (modnet) example

cargo run

use an accelerated execution provider (a manual ort initialization sketch follows this list):

  • windows - cargo run --features ort/cuda or cargo run --features ort/openvino
  • macos - cargo run --features ort/coreml
  • linux - cargo run --features ort/tensorrt or cargo run --features ort/openvino
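
bevy_ort initializes ORT with its default execution providers, so enabling one of the cargo features above is usually all that's needed. if you want to register providers manually through the ort crate itself, the upstream pattern looks roughly like the sketch below (written against the ort 2.0 API; module paths and builder methods may differ in the version bevy_ort pins):

use ort::CUDAExecutionProvider;

fn init_ort() -> ort::Result<()> {
    // register preferred providers before any session is created;
    // ort falls back to the CPU provider if none are available
    ort::init()
        .with_execution_providers([CUDAExecutionProvider::default().build()])
        .commit()?;

    Ok(())
}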

see the complete list of ort features here: https://github.com/pykeio/ort/blob/0aec4030a5f3470e4ee6c6f4e7e52d4e495ec27a/Cargo.toml#L54

note: if onnxruntime is installed via pip, you may need to run ORT_STRATEGY=system cargo run; see: https://docs.rs/ort/latest/ort/#how-to-get-binaries

compatible bevy versions

bevy_ort    bevy
0.1.0       0.13

credits

license

This software is dual-licensed under the MIT License and the GNU General Public License version 3 (GPL-3.0).

You may choose to use this software under the terms of the MIT License OR the GNU General Public License version 3 (GPL-3.0), except as stipulated below:

The use of the yolo_v8 feature within this software is specifically governed by the GNU General Public License version 3 (GPL-3.0). By using the yolo_v8 feature, you agree to comply with the terms and conditions of the GPL-3.0.

For more details on the licenses, please refer to the LICENSE.MIT and LICENSE.GPL-3.0 files included with this software.
