| field | value |
|---|---|
| Crates.io | bevy_llm |
| lib.rs | bevy_llm |
| version | 0.2.0 |
| created_at | 2025-08-31 04:07:15.2772+00 |
| updated_at | 2025-09-01 02:19:13.34563+00 |
| description | bevy llm plugin (native + wasm) |
| homepage | https://github.com/mosure/bevy_llm |
| repository | https://github.com/mosure/bevy_llm |
| max_upload_size | |
| id | 1818146 |
| size | 228,922 |
bevy llm plugin (native + wasm). minimal wrapper over the llm crate that:

- re-exports llm chat/types, so you don’t duplicate models
- streams assistant deltas and tool-calls as Bevy events
- keeps history inside the provider (optional sliding-window memory)
- never blocks the main thread (tiny Tokio RT on native; async pool on wasm)
```bash
cargo add bevy_llm
# or in Cargo.toml: bevy_llm = "0.2"
```
- Bevy plugin with non-blocking async chat
- Structured streaming with coalesced deltas (~60 Hz or >= 64 chars)
- Fallback to one-shot chat when streaming is unsupported
- Tool-calls surfaced via ChatToolCallsEvt (see the sketch below)
- Provider-managed memory with sliding_window_memory (see the follow-up sketch below)
- Multiple providers via Providers + optional ChatSession.key
- Native + wasm (wasm uses gloo-net)
- Helper send_user_text() API
- Built-in UI widgets
- Persisted conversation storage
- Additional backends convenience builders
```rust
use bevy::prelude::*;
use bevy_llm::{
    BevyLlmPlugin, Providers, ChatSession, ChatDeltaEvt, ChatCompletedEvt,
    LLMBackend, LLMBuilder, send_user_text,
};

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(BevyLlmPlugin)
        .add_systems(Startup, setup)
        // read chat events after the plugin drains its inbox
        .add_systems(Update, on_events.after(bevy_llm::LlmSet::Drain))
        .run();
}

fn setup(mut commands: Commands) {
    // OpenAI-compatible backend (point base_url at your server)
    let provider = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .base_url("https://api.openai.com/v1/responses")
        .model("gpt-5")
        .sliding_window_memory(16)
        .build()
        .expect("provider");

    commands.insert_resource(Providers::new(provider.into()));

    // start a streaming chat session and send a message
    let session = commands.spawn(ChatSession { key: None, stream: true }).id();
    send_user_text(&mut commands, session, "hello from bevy_llm!");
}

fn on_events(
    mut deltas: EventReader<ChatDeltaEvt>,
    mut done: EventReader<ChatCompletedEvt>,
) {
    for e in deltas.read() { println!("delta: {}", e.text); }
    for e in done.read() { println!("final: {:?}", e.final_text); }
}
```
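since history lives in the provider (the sliding_window_memory(16) above), a follow-up turn is just another send_user_text call on the same session entity. the sketch below shows one way to wire that up; the Session resource is a local helper introduced here (insert it in setup, e.g. commands.insert_resource(Session(session))), and ChatCompletedEvt is used only as a "reply finished" signal — nothing beyond the API shown above is assumed.

```rust
use bevy::prelude::*;
use bevy_llm::{ChatCompletedEvt, send_user_text};

// local helper (not part of bevy_llm): remembers the session entity from `setup`
#[derive(Resource)]
struct Session(Entity);

// after the first completed reply, ask one follow-up that relies on the
// provider-side history instead of resending earlier messages
fn follow_up(
    mut done: EventReader<ChatCompletedEvt>,
    session: Res<Session>,
    mut commands: Commands,
    mut asked: Local<bool>,
) {
    if !*asked && done.read().next().is_some() {
        *asked = true;
        send_user_text(&mut commands, session.0, "can you say that more concisely?");
    }
}
```

register it like on_events, e.g. .add_systems(Update, follow_up.after(bevy_llm::LlmSet::Drain)).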
- chat: simple text streaming UI with base url / key / model fields
- tool: demonstrates parsing JSON-as-text and handling ChatToolCallsEvt

run (native):
```bash
# optional env for examples
export OPENAI_API_KEY=sk-...
export LLM_BASE_URL=https://api.openai.com
export LLM_MODEL=gpt-5
cargo run --example chat
cargo run --example tool
```
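the tool example demonstrates the full flow; as a rough orientation, a reader for ChatToolCallsEvt could look like the sketch below. only the event name comes from this README — the calls field and the per-call name / arguments fields are assumptions, so check the crate docs (or the tool example source) for the actual event shape.

```rust
use bevy::prelude::*;
use bevy_llm::ChatToolCallsEvt;

// assumed shape: `evt.calls`, each with `name` / `arguments` — verify against the
// real ChatToolCallsEvt definition before relying on this
fn on_tool_calls(mut tool_calls: EventReader<ChatToolCallsEvt>) {
    for evt in tool_calls.read() {
        for call in &evt.calls {
            // tool arguments usually arrive as a JSON string for your code to parse
            info!("tool call `{}` args: {}", call.name, call.arguments);
        }
    }
}
```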
wasm is supported; integrate with your preferred bundler and target wasm32-unknown-unknown.
Configuration is handled by the upstream llm crate. This plugin works with OpenAI-compatible servers
(set base_url to your /v1/responses endpoint). Additional convenience builders may land later.
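as an illustration, the env vars read by the examples could also drive provider setup in your own app. this is a sketch, not a tour of the official API: .api_key(...) is assumed to exist on the upstream llm crate's builder (it does not appear in the quickstart above); the other calls mirror the setup system.

```rust
use bevy::prelude::*;
use bevy_llm::{LLMBackend, LLMBuilder, Providers};

// sketch: build a provider from the env vars used by the examples.
// `.api_key(...)` is assumed from the upstream llm builder; everything else
// mirrors the quickstart's `setup` system.
fn setup_from_env(mut commands: Commands) {
    let base_url = std::env::var("LLM_BASE_URL")
        .unwrap_or_else(|_| "https://api.openai.com".into());
    let model = std::env::var("LLM_MODEL").unwrap_or_else(|_| "gpt-5".into());
    let api_key = std::env::var("OPENAI_API_KEY").unwrap_or_default();

    let provider = LLMBuilder::new()
        .backend(LLMBackend::OpenAI)
        .base_url(base_url)
        .model(model)
        .api_key(api_key) // assumption: the key is passed through the builder
        .sliding_window_memory(16)
        .build()
        .expect("provider");

    commands.insert_resource(Providers::new(provider.into()));
}
```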
| bevy_llm | bevy |
|---|---|
| 0.2 | 0.16 |
licensed under either of

- Apache License, Version 2.0
- MIT license

at your option.
unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.