| Crates.io | images_and_words |
| lib.rs | images_and_words |
| version | 0.1.0 |
| created_at | 2025-09-05 12:50:58.774782+00 |
| updated_at | 2025-09-05 12:50:58.774782+00 |
| description | GPU middleware and abstraction layer for high-performance graphics applications and games |
| homepage | https://sealedabstract.com/code/images_and_words |
| repository | https://github.com/drewcrawford/images_and_words |
| max_upload_size | |
| id | 1825472 |
| size | 18,626,596 |
GPU middleware and abstraction layer for high-performance graphics applications and games.
images_and_words provides a practical middle ground between low-level GPU APIs and full game engines, offering higher-order GPU resource types optimized for common patterns while maintaining the flexibility to bring your own physics, sound, and game logic.


The examples are cross-compiled to WebAssembly and run in the browser.
Suppose you want to write a game or graphics application. You may consider a full game engine, a low-level GPU API, a layered implementation, or a constructed API such as WebGPU.
Wouldn't it be nice to have a middle ground? Here's how images_and_words compares:
| Strategy | Examples | API style | API concepts | Synchronization concerns | Shaders | Runtime size | Platform support | Development speed | Runtime speed |
|---|---|---|---|---|---|---|---|---|---|
| Game engine | Unity, Unreal, Godot | Scene-based | Scene, nodes, camera, materials | Low | Mostly built-in; programmability varies | Massive | Excellent | Very high | Depends on how close your use case is to the engine's optimized paths |
| Low-level APIs | DX, Vulkan, Metal | Pass-based | Passes, shaders, buffers, textures | High | BYO, extremely customizable | None | Poor; write once, run once | Very low | Extreme |
| Layered implementations | MoltenVK, Proton, wgpu | Pass-based | Passes, shaders, buffers, textures | High | BYO, customizable in theory; translation causes issues | Some | Good in theory; varies in practice | Very low | Excellent on native platforms; varies on translated platforms |
| Constructed APIs | WebGPU | Pass-based | Passes, shaders, buffers, textures | Medium-high | BYO, customizable, though many features stuck in committee | It's complicated | Some browser support, some translation support | Medium-low | Good |
| GPU middleware | images_and_words | Pass-based | Passes, shaders, camera, higher-order buffers and textures, multibuffering, common patterns | Medium-low | BYO, inherit from backends | Some | Good in theory; varies in practice | Medium | Good |
GPU middleware occupies a unique and overlooked niche in the ecosystem. It provides a cross-platform abstraction over GPU hardware, while also allowing you to bring your own sound, physics, accessibility, and your entire existing codebase to the table. These are the main advantages of the middleware category as a whole.
Beyond the pros and cons of GPU middleware as a category, images_and_words is specifically the dream GPU API I wanted over a career as a high-performance graphics application developer. Often, the motivation for GPU acceleration is that we have some existing CPU code we think is too slow, and GPU acceleration is one of several ways we might improve it, but prototyping it on even one platform might take a week. The #1 goal of IW is to let you prototype GPU acceleration across platforms in a day or two at most.
A second major design goal: eventually, you are likely to hit a second performance wall. When that happens, it should be easy to reason about the performance of IW applications and to optimize IW's primitives to meet your needs. IW is designed to be a practical and performant target for my own career of applications, and I hope it can be for yours as well.
The main innovation of images_and_words is providing a family of higher-order buffer and texture types. These types are layered atop traditional GPU resources but are optimized for specific use cases, with built-in multibuffering and synchronization to prevent pipeline stalls.
images_and_words organizes GPU resources along three orthogonal axes (data-flow direction, static vs. dynamic mutability, and buffer vs. texture), allowing you to select the precise abstraction for your use case:
Buffers provide:
- C-compatible element layout via the CRepr trait

Textures provide:
Static resources:
Dynamic resources:
| Direction | Flow | Use Cases | Status |
|---|---|---|---|
| Forward | CPU→GPU | Rendering data, textures, uniforms | ✅ Implemented |
| Reverse | GPU→CPU | Screenshots, compute results, queries | ⏳ Planned |
| Sideways | GPU→GPU | Render-to-texture, compute chains | ⏳ Planned |
| Omnidirectional | CPU↔GPU | Interactive simulations, feedback | ⏳ Planned |
To select the appropriate binding type:
| Your Use Case | Recommended Type |
|---|---|
| Mesh geometry that never changes | bindings::forward::static::Buffer |
| Textures loaded from disk | bindings::forward::static::Texture |
| Camera matrices updated per frame | bindings::forward::dynamic::Buffer |
| Render-to-texture targets | bindings::forward::dynamic::FrameTexture |
| Particle positions (CPU generated) | bindings::forward::dynamic::Buffer |
| Lookup tables for shaders | bindings::forward::static::Buffer or Texture |
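As a quick illustration of how the table's paths look in code, here is a minimal import sketch. Note that static is a Rust keyword, so the module is spelled r#static in paths (as in the examples below); the dynamic Buffer path is an assumption that it mirrors the static module layout, and the texture types follow the same pattern per the table.

```rust
// Immutable mesh geometry: a static forward buffer (path used in the examples below).
use images_and_words::bindings::forward::r#static::buffer::Buffer as StaticBuffer;

// Per-frame camera matrices or CPU-generated particle data: a dynamic forward
// buffer. The exact type path is assumed to mirror the static module layout.
use images_and_words::bindings::forward::dynamic::buffer::Buffer as DynamicBuffer;
```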
Currently implemented:
Examples include:
| Class | Use case | Potential optimizations | Multibuffering | Synchronization |
|---|---|---|---|---|
| Static | Sprites, etc. | Convert to a private, GPU-native format | Not needed | Not needed |
| Forward | CPU→GPU writes | Unified vs. discrete memory | Built-in | Built-in |
| Reverse | GPU→CPU writes | Unified vs. discrete memory | Built-in | Built-in |
| Sideways | GPU→GPU writes | Private, GPU-native format | Built-in | TBD |
images_and_words uses a backend abstraction that allows different GPU API implementations. Currently, two backends are available:
- nop backend: A no-operation stub implementation, useful for testing and as a template for new backends
- wgpu backend: The main production backend, built on wgpu, providing broad platform support

The wgpu backend inherits support for:
The codebase is organized into several key modules:
The images module provides the main rendering infrastructure:
- Engine: Main entry point for GPU operations
- render_pass: Render pass configuration and draw commands
- shader: Vertex and fragment shader management
- view: Display surface abstraction
- port: Viewport and camera management
- projection: Coordinate systems and transformations

The bindings module provides higher-order GPU resource types:
- forward: CPU→GPU data transfer types
  - static: Immutable resources
  - dynamic: Mutable resources with multibuffering
- software: CPU-side texture operations
- sampler: Texture sampling configuration

The pixel_formats module provides type-safe pixel format definitions.
I have intentionally designed images_and_words to support multiple backends. Currently the crate uses wgpu for its broad platform support; however, I also have direct Vulkan and Metal backends in various stages of development.
Ultimately my goals are:
Currently this translates into these tiers:
Legend:
For the time being, we need to support the WebGL backend in demos and via the wgpu_webgl feature because:
But you should expect this backend to be cut because:
Create a rendering engine and access the main rendering port:
```rust
use images_and_words::images::Engine;

// Create a rendering engine for testing
let engine = Engine::for_testing().await
    .expect("Failed to create engine");

// Access the main rendering port
let mut port = engine.main_port_mut();
// Port is now ready for rendering operations
```
Create a static vertex buffer with three vertices:

```rust
use images_and_words::{
    images::Engine,
    bindings::{forward::r#static::buffer::Buffer, visible_to::GPUBufferUsage},
};

// Define a vertex type with C-compatible layout
#[repr(C)]
#[derive(Copy, Clone, Debug)]
struct Vertex {
    position: [f32; 3],
    color: [f32; 4],
}
unsafe impl images_and_words::bindings::forward::dynamic::buffer::CRepr for Vertex {}

let engine = Engine::for_testing().await.unwrap();
let device = engine.bound_device();

// Create a static buffer with 3 vertices
let vertex_buffer = Buffer::new(
    device.clone(),
    3, // count of vertices
    GPUBufferUsage::VertexBuffer,
    "triangle_vertices",
    |index| match index {
        0 => Vertex { position: [-0.5, -0.5, 0.0], color: [1.0, 0.0, 0.0, 1.0] },
        1 => Vertex { position: [ 0.5, -0.5, 0.0], color: [0.0, 1.0, 0.0, 1.0] },
        2 => Vertex { position: [ 0.0,  0.5, 0.0], color: [0.0, 0.0, 1.0, 1.0] },
        _ => unreachable!()
    }
).await.expect("Failed to create buffer");
```
Dynamic buffers support automatic multibuffering to prevent GPU pipeline stalls when updating data:
```rust
use images_and_words::{
    bindings::{forward::dynamic::buffer, visible_to::GPUBufferUsage},
};

// Define a uniform data structure
#[repr(C)]
struct UniformData {
    time: f32,
    _padding: [f32; 3],
}
unsafe impl buffer::CRepr for UniformData {}

let uniform_data = UniformData {
    time: 1.0,
    _padding: [0.0; 3]
};

// This creates a buffer with automatic multibuffering:
// let buffer = Buffer::new(device, 1, GPUBufferUsage::FragmentShaderRead,
//                          "uniforms", |_| uniform_data).await?;
```
Create an engine with a camera and upload vertex data:

```rust
use images_and_words::{
    images::{Engine, projection::WorldCoord, view::View},
    bindings::{forward::r#static::buffer::Buffer, visible_to::GPUBufferUsage},
};

// Create engine with camera position
let engine = Engine::rendering_to(
    View::for_testing(),
    WorldCoord::new(0.0, 0.0, 5.0)
).await.expect("Failed to create engine");

let device = engine.bound_device();

// Define vertex data
#[repr(C)]
struct Vertex {
    position: [f32; 3],
}
unsafe impl images_and_words::bindings::forward::dynamic::buffer::CRepr for Vertex {}

// Create GPU buffer with vertex data
let vertex_buffer = Buffer::new(
    device.clone(),
    3,
    GPUBufferUsage::VertexBuffer,
    "triangle",
    |index| match index {
        0 => Vertex { position: [-1.0, -1.0, 0.0] },
        1 => Vertex { position: [ 1.0, -1.0, 0.0] },
        2 => Vertex { position: [ 0.0,  1.0, 0.0] },
        _ => unreachable!()
    }
).await.expect("Failed to create vertex buffer");

// Access the rendering port
let mut port = engine.main_port_mut();
// Ready to issue draw commands
```
Dynamic resources automatically use multibuffering to prevent GPU pipeline stalls. Static resources are automatically placed in optimal GPU memory when possible, while dynamic resources use CPU-accessible memory for frequent updates.
This project uses custom async executors (not tokio):
- test_executors for test code
- some_executor for production code

Graphics operations typically require main thread execution, especially on platforms like macOS.
On macOS, set the deployment target: export MACOSX_DEPLOYMENT_TARGET=15
Build: cargo build --features=backend_wgpu
Build with app window support: cargo build --features=backend_wgpu,app_window
Build for WASM: ./build/wasm_example.sh simple_scene
Run all tests: cargo test --features=backend_wgpu,testing
Run single test: cargo test --features=backend_wgpu,testing test_name
Run specific test file:
- cargo test --features=backend_wgpu,testing --test buffer_performance
- cargo test --features=backend_wgpu,testing --test sendable_futures
- cargo test --features=backend_wgpu,testing --test texture_alignment
- cargo test --features=backend_wgpu,testing --test wgpu_cell_threading_error

Run clippy: cargo clippy --features=backend_wgpu
Format check: cargo fmt --check
Quick check script (runs all validations): ./quickcheck.sh
Build and open docs: cargo doc --features=backend_wgpu --no-deps --open
Run simple scene: cargo run --example simple_scene --features=backend_wgpu,app_window
Run animated scene: cargo run --example animated_scene --features=backend_wgpu,app_window
- backend_wgpu - Enables the wgpu GPU backend (required for most development)
- app_window - Enables window surface creation for applications
- testing - Enables testing utilities
- wgpu_webgl - Enables WebGL backend for wgpu (for web targets)
- logwise_internal - Internal logging features

The project supports WebAssembly targets with special configuration:
- Target: wasm32-unknown-unknown
- Build script: ./build/wasm_example.sh [example_name]
- Rustc flag: -C target-feature=+atomics

Uses logwise for logging. Example syntax:
```rust
logwise::info_sync!("Here is foo: {foo}", foo=3);
```
Complex types require coercion through logwise::privacy:
```rust
logwise::warn_sync!("Here is foo: {foo}", foo=logwise::privacy::LogIt(example));
```
If you are motivated enough to consider writing your own solution, I would love to have your help here instead.