| Crates.io | awsm-renderer-editor |
| lib.rs | awsm-renderer-editor |
| version | 0.1.3 |
| created_at | 2026-01-14 15:10:33.293464+00 |
| updated_at | 2026-01-14 16:32:32.282968+00 |
| description | awsm-renderer-editor |
| homepage | https://dakom.github.io/awsm-renderer/ |
| repository | https://github.com/dakom/awsm-renderer |
| max_upload_size | |
| id | 2042999 |
| size | 73,956 |
Rust/WASM/WebGPU renderer for the web.
It's specifically for the web in that it uses the WebGPU API directly via the web-sys bindings, as opposed to wgpu. While this is somewhat unconventional in the Rust ecosystem, it allows for a more direct mapping to the WebGPU API, for precise control and understanding of how things work under the hood in a web context.
See ROADMAP for details.
Should be pretty fast! Detailed performance annotations can be seen in the browser's devtools performance tab after recording a session.
There's a lot to unpack here: render passes, buffers, shaders, pipelines... for the sake of brevity, here's the high-level overview of some of the key tradeoffs and design decisions:
The core rendering is done in these main passes:
Geometry Pass: Renders all opaque geometry into a few targets with a minimal set of data needed for the next step. It's a draw call per mesh, but still extremely fast: no texture lookups, no shading, no material-specific logic. Just geometry transformations (including morphs/skins) and writing out a few values per-pixel. Depth testing/writing is enabled here so it benefits from occlusion culling too. One of the key mechanics is that we pass barycentric coordinates per-texel, which allows us to reconstruct interpolated values in the next pass (see the sketch after this overview).
Opaque Pass: Uses the data from the geometry pass, along with all the other available data (texture bindings, material info, etc.) to shade all the pixels in one draw call. Since this only shades visible pixels, it's much faster than traditional forward rendering. And because the "g-buffer" only contains geometry info, it's also much more flexible than traditional deferred rendering, supporting any number of materials in a single draw call.
Transparent Pass: Renders all transparent geometry via traditional forward rendering, on top of the opaque pass result. This is necessary since the opaque pipeline needs to know the exact identifier of a given pixel, and alpha blending breaks that. However, the transparent pass can still take advantage of early-z testing by reusing the depth buffer from the geometry pass, pipeline sorting to minimize state changes, etc. Also, the majority of renderables are typically opaque, so this is still a minor tradeoff overall.
Effects: Applies several post-processing effects (bloom, dof, etc.) to the final image.
Display Pass: Applies tone-mapping to the final image before presenting it to the screen.
There are a few more implementation details around MSAA, hooks, and HUD rendering as well, but those are the main passes.
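To make the barycentric mechanic from the geometry pass concrete, here's a minimal sketch, with hypothetical names and types (not the renderer's actual code), of how per-pixel barycentric coordinates let the shading pass reconstruct any interpolated per-vertex attribute:

```rust
// Given the three vertex attribute values of the triangle covering a pixel, and the
// barycentric coordinates the geometry pass wrote out for that pixel, any per-vertex
// attribute (UVs, normals, colors, etc.) can be reconstructed as a weighted sum.
fn interpolate_attribute(v0: [f32; 3], v1: [f32; 3], v2: [f32; 3], bary: [f32; 3]) -> [f32; 3] {
    let mut out = [0.0f32; 3];
    for i in 0..3 {
        out[i] = v0[i] * bary[0] + v1[i] * bary[1] + v2[i] * bary[2];
    }
    out
}

fn main() {
    // e.g. interpolate a normal at a pixel whose barycentric weights are (0.2, 0.3, 0.5)
    let n = interpolate_attribute(
        [0.0, 0.0, 1.0],
        [0.0, 1.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.2, 0.3, 0.5],
    );
    println!("interpolated normal: {n:?}");
}
```

In the real shaders this happens on the GPU, but the math is the same weighted sum.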
The overall idea is that we load all the data we need for rendering into GPU buffers ahead of time, and then reference that data via offsets when issuing draw calls.
Updating the data is easy, using "keys" (TransformKey, MeshKey, MaterialKey, etc.) and Rust-friendly structs. Under the hood, updates mark the GPU buffers as "dirty" so they get re-uploaded at the start of the next frame via one big memcpy per-buffer. This makes it very efficient to update data many times per-frame if needed (e.g. for physics).
Nearly all the data goes through one of two mechanisms:
As the data grows, an occasional re-allocation is needed, but this is infrequent and handled automatically.
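As a rough illustration of the "mark dirty, flush once per frame" flow described above, here's a hypothetical sketch; the real renderer's key types, buffer layouts, and GPU upload code all differ:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct TransformKey(u32);

struct TransformBuffer {
    // CPU-side staging data, laid out exactly as the GPU expects (one 4x4 matrix per slot)
    staging: Vec<[f32; 16]>,
    // each key maps to a fixed slot/offset in the staging data
    offsets: HashMap<TransformKey, usize>,
    dirty: bool,
}

impl TransformBuffer {
    fn set_transform(&mut self, key: TransformKey, matrix: [f32; 16]) {
        // updates only touch CPU memory and flip the dirty flag...
        if let Some(&offset) = self.offsets.get(&key) {
            self.staging[offset] = matrix;
            self.dirty = true;
        }
    }

    fn flush(&mut self) {
        // ...so at the start of the next frame a single upload (e.g. one
        // queue.writeBuffer call in WebGPU) pushes the whole staging area to the GPU,
        // no matter how many keys changed during the frame.
        if self.dirty {
            // single GPU upload goes here
            self.dirty = false;
        }
    }
}

fn main() {
    let key = TransformKey(0);
    let mut buf = TransformBuffer {
        staging: vec![[0.0; 16]; 1],
        offsets: HashMap::from([(key, 0)]),
        dirty: false,
    };
    // update as many times as you like per frame; only the latest data gets uploaded
    buf.set_transform(key, [1.0; 16]);
    buf.flush();
}
```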
This is a bit involved since we explode the triangle vertices in the geometry pass and need to access the original per-vertex attributes in different ways throughout the renderer. For more info on how vertex attributes are handled and split into different buffers, see Vertex Attributes.
Textures are managed in texture pools, which are essentially arrays of textures of the same size and format. This allows for easy binding and staying under limits in shaders.
The pool can grow as needed, but growing requires signaling the change to shader generation, so it's typically done infrequently, e.g. right after all images are downloaded.
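Here's a hedged sketch of the pooling idea, with hypothetical names; the real pools manage actual GPU textures and their views, not just counters:

```rust
use std::collections::HashMap;

// Textures that share a size and format land in the same pool, so a shader can bind
// the whole pool at once and address an individual texture by index.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum TextureFormat {
    Rgba8Unorm,
    Rgba16Float,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct PoolKey {
    width: u32,
    height: u32,
    format: TextureFormat,
}

#[derive(Default)]
struct TexturePools {
    // number of textures currently allocated in each pool
    pools: HashMap<PoolKey, u32>,
}

impl TexturePools {
    // Returns the index of the newly allocated texture within its pool. Growing a pool
    // is the step that would require regenerating shaders/bind groups in the real renderer.
    fn allocate(&mut self, key: PoolKey) -> u32 {
        let count = self.pools.entry(key).or_insert(0);
        let index = *count;
        *count += 1;
        index
    }
}

fn main() {
    let mut pools = TexturePools::default();
    let key = PoolKey { width: 1024, height: 1024, format: TextureFormat::Rgba8Unorm };
    let index = pools.allocate(key);
    println!("allocated texture index {index} in pool {key:?}");
}
```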
Many things can cause a bind group to need to be re-created: resized buffers, new render views, texture pool changes, etc.
Instead of wiring all that logic directly, we broadcast various "events" that indicate what changed, and the relevant systems listen for those events and update their bind groups as needed at the start of the next frame.
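A sketch of that event-driven invalidation pattern, with made-up event names (the real renderer's events and listeners differ):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum RendererEvent {
    BufferResized,
    RenderViewChanged,
    TexturePoolGrew,
}

#[derive(Default)]
struct BindGroupCache {
    needs_rebuild: bool,
}

impl BindGroupCache {
    // each system only reacts to the events that can invalidate its own bind groups...
    fn on_event(&mut self, event: RendererEvent) {
        if matches!(
            event,
            RendererEvent::BufferResized | RendererEvent::TexturePoolGrew
        ) {
            self.needs_rebuild = true;
        }
    }

    // ...and re-creates them once, at the start of the next frame, instead of eagerly
    // on every change.
    fn begin_frame(&mut self) {
        if self.needs_rebuild {
            // device.createBindGroup(...) would happen here in WebGPU
            self.needs_rebuild = false;
        }
    }
}

fn main() {
    let mut cache = BindGroupCache::default();
    cache.on_event(RendererEvent::BufferResized);
    cache.on_event(RendererEvent::RenderViewChanged); // ignored by this particular cache
    cache.begin_frame();
}
```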
Shaders are written with Askama templates, allowing for code reuse and easy-to-reason-about caching based on different variables exposed to the template.
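As a minimal, hypothetical example (not one of the renderer's actual templates), here's how an Askama template can emit a WGSL variant from a boolean flag; since the same inputs always render the same source, caching shaders by their template variables is straightforward:

```rust
use askama::Template;

// A tiny inline template: the `has_normal_map` flag decides which WGSL gets emitted.
#[derive(Template)]
#[template(
    source = "\
fn shade() {
{%- if has_normal_map %}
    // normal-mapping code would be emitted here
{%- endif %}
}",
    ext = "txt"
)]
struct OpaqueShaderTemplate {
    has_normal_map: bool,
}

fn main() {
    let wgsl = OpaqueShaderTemplate { has_normal_map: true }
        .render()
        .expect("template rendering failed");
    println!("{wgsl}");
}
```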
Speaking of caching, many things are cached to avoid redundant work and state changes, including pipelines, layouts, shaders, etc.
GLTF is supported as a first-class citizen, with support for PBR materials, skins, morphs, animations, and more.
It's the de-facto format for AwsmRenderer assets, and extensions are used where appropriate to support features not in the core spec (e.g. texture transforms, unlit materials, etc.)
Because the geometry pass writes out unique identifiers per-mesh, picking opaque meshes is as simple as reading back the pixel under the mouse cursor from that target, and mapping it back to the corresponding mesh. This makes picking opaque meshes extremely fast and efficient, even with complex scenes, without significant overhead during rendering.
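A hypothetical sketch of that lookup; in WebGPU the readback itself would just be a one-texel texture-to-buffer copy followed by a buffer map, which is far cheaper than re-rendering or ray-casting against the scene:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct MeshKey(u32);

struct PickMap {
    // id written by the geometry pass -> the mesh it belongs to
    id_to_mesh: HashMap<u32, MeshKey>,
}

impl PickMap {
    // `id_under_cursor` is the value read back from the id target at the mouse position
    fn pick(&self, id_under_cursor: u32) -> Option<MeshKey> {
        self.id_to_mesh.get(&id_under_cursor).copied()
    }
}

fn main() {
    let map = PickMap {
        id_to_mesh: HashMap::from([(42, MeshKey(7))]),
    };
    assert_eq!(map.pick(42), Some(MeshKey(7)));
    assert_eq!(map.pick(0), None); // e.g. background / no opaque mesh under the cursor
}
```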
See docs/DEVELOPMENT.md for details on setting up the development environment, building, and running the examples.
This is a renderer, not a full game engine or framework. There is no entity-component-system (ECS) or any other opinionated way to organize game objects.
However, there is a transform-based scene graph, and all the data structures are designed to be very easy and efficient to manipulate and integrate with an ECS or other game framework by way of "keys" (TransformKey, MeshKey, MaterialKey, etc.)
Feel free to think of these keys as components and assign them to some EntityId of your choice.
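For example (hypothetical types; the real keys come from the renderer crate), a "renderable" component in a host ECS could be nothing more than a bundle of these keys:

```rust
// stand-ins for the renderer's actual key types, which are small copyable handles
#[derive(Clone, Copy, Debug)]
struct TransformKey(u32);
#[derive(Clone, Copy, Debug)]
struct MeshKey(u32);
#[derive(Clone, Copy, Debug)]
struct MaterialKey(u32);

// a component the host application attaches to its own EntityId; it owns nothing
// but handles, so it's trivially Copy and cheap to store in any ECS
#[derive(Clone, Copy, Debug)]
struct Renderable {
    transform: TransformKey,
    mesh: MeshKey,
    material: MaterialKey,
}

fn main() {
    // e.g. world.insert(entity_id, renderable) in the ECS of your choice
    let renderable = Renderable {
        transform: TransformKey(0),
        mesh: MeshKey(0),
        material: MaterialKey(0),
    };
    println!("{renderable:?}");
}
```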
The renderer does include transformation, morphs, skins, and animation support, but does not include any physics engine or collision detection.
It's expected that another subsystem using this renderer would handle physics/collision detection separately, and provide the resulting transforms/animations to the renderer.
This really depends on the specific needs of the project. Some examples:
However, due to the visibility buffer optimization, the cost of rendering unnecessary geometry never reaches the shading stage. Also, frustum culling will eliminate other game world objects... so the only optimization left would really be to reduce the frustum culling tests, which are already very cheap.
I've taken some stabs at some variation of this sorta thing before, got a few battle scars along the way. Some projects got further than others: