| Crates.io | bevy-sensor |
| lib.rs | bevy-sensor |
| version | 0.4.6 |
| created_at | 2025-12-13 21:30:46.71474+00 |
| updated_at | 2025-12-20 16:52:41.719329+00 |
| description | Bevy library for capturing multi-view images of 3D OBJ models (YCB dataset) for sensor simulation |
| homepage | https://github.com/killerapp/bevy-sensor |
| repository | https://github.com/killerapp/bevy-sensor |
| max_upload_size | |
| id | 1983432 |
| size | 406,360 |
A Rust library and CLI for capturing multi-view images (RGBA + Depth) of 3D objects, specifically designed for the Thousand Brains Project sensor simulation.
This crate serves as the visual sensor module for the neocortx project, providing TBP-compatible sensor data (64x64 resolution, specific camera intrinsics) from YCB dataset models.
Install just (optional but recommended command runner):
cargo install just
Run a Test Render:
just render-single 003_cracker_box
# Models will be automatically downloaded to /tmp/ycb if missing.
# To use a custom location: cargo run --bin prerender -- --data-dir ./my_models ...
# Output saved to test_fixtures/renders/
Render the standard TBP benchmark set (10 objects):
just render-tbp-benchmark
Render specific objects:
just render-batch "003_cracker_box,005_tomato_soup_can"
Add to your Cargo.toml:
[dependencies]
bevy-sensor = "0.4"
Use in your code:
use bevy_sensor::{render_to_buffer, RenderConfig, ViewpointConfig, ObjectRotation};
use std::path::Path;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 1. Configure
    let config = RenderConfig::tbp_default(); // 64x64, TBP intrinsics
    let viewpoint = bevy_sensor::generate_viewpoints(&ViewpointConfig::default())[0];
    let rotation = ObjectRotation::identity();
    let object_path = Path::new("/tmp/ycb/003_cracker_box");

    // 2. Render to memory (RGBA + Depth)
    let output = render_to_buffer(object_path, &viewpoint, &rotation, &config)?;
    println!("Captured {}x{} image", output.width, output.height);
    Ok(())
}
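To persist captured frames, you can hand the RGBA bytes to the image crate. Below is a minimal sketch that loops over every generated viewpoint and writes each frame as a PNG. It assumes image is in your Cargo.toml and that the render output exposes its pixels as a flat RGBA8 byte vector; the field name output.rgba is an assumption for illustration, not confirmed crate API.
use bevy_sensor::{render_to_buffer, RenderConfig, ViewpointConfig, ObjectRotation};
use std::path::Path;
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let config = RenderConfig::tbp_default();
    let rotation = ObjectRotation::identity();
    let object_path = Path::new("/tmp/ycb/003_cracker_box");
    // Render every generated viewpoint and save each RGBA frame as a PNG.
    for (i, viewpoint) in bevy_sensor::generate_viewpoints(&ViewpointConfig::default()).iter().enumerate() {
        let output = render_to_buffer(object_path, viewpoint, &rotation, &config)?;
        // `output.rgba` is an ASSUMED field holding the raw RGBA8 bytes.
        let img = image::RgbaImage::from_raw(output.width, output.height, output.rgba)
            .ok_or("buffer size did not match width * height * 4")?;
        img.save(format!("view_{i:02}.png"))?;
    }
    Ok(())
}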
WSL2 does not support native Vulkan window surfaces well. This project defaults to the WebGPU backend on WSL2, which works reliably for headless rendering.
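If you want to force a specific backend yourself, wgpu (which Bevy renders through) honors the WGPU_BACKEND environment variable; this is a general wgpu/Bevy mechanism rather than a flag of this crate. For example, to request Vulkan explicitly:
WGPU_BACKEND=vulkan cargo run --release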
If no GPU is available at all, you can fall back to software rendering (slow, with potential artifacts):
LIBGL_ALWAYS_SOFTWARE=1 GALLIUM_DRIVER=llvmpipe cargo run --release
MIT