| Crates.io | oxgpu |
| lib.rs | oxgpu |
| version | 0.1.0 |
| created_at | 2025-12-20 16:15:20.926347+00 |
| updated_at | 2025-12-20 16:15:20.926347+00 |
| description | A lightweight GPU compute library built on wgpu |
| homepage | |
| repository | https://github.com/vulkanic-labs/oxgpu |
| max_upload_size | |
| id | 1996669 |
| size | 73,694 |
A lightweight GPU compute library built on wgpu, providing a simple and ergonomic API for GPU-accelerated computing.
Add this to your `Cargo.toml`:

```toml
[dependencies]
oxgpu = "0.1.0"
```
Basic usage:

```rust
use oxgpu::{Context, Buffer};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a GPU context
    let ctx = Context::new().await?;

    // Create a buffer from host data
    let data = vec![1.0f32, 2.0, 3.0, 4.0, 5.0];
    let buffer = Buffer::from_slice(&ctx, &data).await;

    // Read the data back from the GPU
    let result = buffer.read(&ctx).await?;
    println!("Result: {:?}", result);

    Ok(())
}
```
Running a compute kernel:

```rust
use oxgpu::{Context, Buffer, ComputeKernel, BindingType, KernelBinding};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = Context::new().await?;

    // Create input/output buffers
    let x = Buffer::from_slice(&ctx, &[1.0f32, 2.0, 3.0]).await;
    let y = Buffer::from_slice(&ctx, &[2.0f32, 4.0, 6.0]).await;

    // WGSL shader: y[i] += x[i]
    let shader = r#"
        @group(0) @binding(0) var<storage, read> x: array<f32>;
        @group(0) @binding(1) var<storage, read_write> y: array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            // Guard against out-of-bounds invocations: the workgroup size (64)
            // is larger than the number of elements (3).
            if (id.x >= arrayLength(&y)) {
                return;
            }
            y[id.x] = x[id.x] + y[id.x];
        }
    "#;

    // Build and run the kernel
    let kernel = ComputeKernel::builder()
        .source(shader)
        .entry_point("main")
        .bind(KernelBinding::new(0, BindingType::Storage { read_only: true }))
        .bind(KernelBinding::new(1, BindingType::Storage { read_only: false }))
        .build(&ctx)
        .await?;
    kernel.run(&ctx, (1, 1, 1), &[&x, &y]);

    let result = y.read(&ctx).await?;
    println!("Result: {:?}", result); // [3.0, 6.0, 9.0]

    Ok(())
}
```
You can find more examples in the examples/ directory.

To run an example:

```bash
cargo run --example vector_add
```
Core types:

- `Context`: GPU context managing the device and queue
- `Buffer<T>`: Typed GPU buffer for data storage
- `BufferUsage`: Flags for buffer usage (storage, uniform, etc.)
- `ComputeKernel`: Compiled compute shader
- `ComputeKernelBuilder`: Builder for creating compute kernels

Buffer methods (see the sketch at the end of this README):

- `Buffer::new()` - Create a buffer with custom usage flags
- `Buffer::from_slice()` - Create a buffer from a slice
- `Buffer::zeros()` - Create a zero-initialized buffer
- `buffer.read()` - Read data back from the GPU
- `buffer.write()` - Write data to the GPU

Contributions are welcome! Please feel free to submit a Pull Request.
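As a rough illustration of the buffer methods listed in the API overview, here is a minimal sketch. The signatures of `Buffer::zeros`, `Buffer::new`, and `buffer.write`, and the `BufferUsage::STORAGE` flag name, are assumptions extrapolated from the `from_slice`/`read` calls shown in the examples above; check the crate documentation for the actual API.

```rust
use oxgpu::{Buffer, BufferUsage, Context};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let ctx = Context::new().await?;

    // Assumed signature: zero-initialized buffer with a given element count.
    let zeros: Buffer<f32> = Buffer::zeros(&ctx, 4).await;

    // Assumed signature: explicit construction with custom usage flags.
    // `BufferUsage::STORAGE` is a guess at the flag name.
    let scratch: Buffer<f32> = Buffer::new(&ctx, 4, BufferUsage::STORAGE).await;

    // Assumed signature: upload host data into an existing buffer.
    scratch.write(&ctx, &[1.0f32, 2.0, 3.0, 4.0]).await?;

    // `read` matches the usage shown in the examples above.
    println!("zeros:   {:?}", zeros.read(&ctx).await?);
    println!("scratch: {:?}", scratch.read(&ctx).await?);

    Ok(())
}
```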