| Crates.io | framealloc |
| lib.rs | framealloc |
| version | 0.11.1 |
| created_at | 2025-12-20 10:56:02.7964+00 |
| updated_at | 2025-12-26 00:09:49.875777+00 |
| description | Intent-aware, thread-smart memory allocation for Rust game engines |
| homepage | https://github.com/YelenaTor/framealloc |
| repository | https://github.com/YelenaTor/framealloc |
| max_upload_size | |
| id | 1996344 |
| size | 942,027 |
⚠️ Deprecation Notice
framealloc is being deprecated in favor of memkit.
As of v0.11.1, all public types emit deprecation warnings. The library remains fully functional, but will not receive new features. memkit is a complete rewrite with a cleaner architecture, better modularity, and expanded capabilities.
I sincerely apologize if this transition causes disruption to your projects. Maintaining multiple versions long-term isn't feasible, and memkit represents the better path forward. A migration guide will be provided with memkit's release.
Thank you for using framealloc.
– Yelena
Intent-driven memory allocation for high-performance Rust applications
Why • Docs • Quick Start • Features • GPU Support • Static Analysis
framealloc is a deterministic, frame-based memory allocation library for Rust game engines and real-time applications. It provides predictable performance through explicit lifetimes and scales seamlessly from single-threaded to multi-threaded workloads.
Not a general-purpose allocator replacement. Purpose-built for game engines, renderers, simulations, and real-time systems.
| Capability | Description |
|---|---|
| Frame Arenas | Lock-free bump allocation, reset per frame |
| Object Pools | O(1) reuse for small, frequent allocations |
| Thread Coordination | Explicit transfers, barriers, per-thread budgets |
| Static Analysis | cargo fa catches memory mistakes at build time |
| Runtime Diagnostics | Behavior filter detects pattern violations |
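For an at-a-glance feel for these capabilities, here is a minimal sketch that compresses the frame-arena, pool, and tagging APIs shown in more detail later in this README; the `Contact` and `EntityData` types are illustrative stand-ins.

```rust
use framealloc::{AllocConfig, SmartAlloc};

// Illustrative payload types for the sketch.
struct Contact;
struct EntityData;

fn main() {
    let alloc = SmartAlloc::new(AllocConfig::default());

    alloc.begin_frame();

    // Frame arena: lock-free bump allocation, reset when the frame ends.
    let _contacts = alloc.frame_vec::<Contact>();

    // Object pool: O(1) reuse for small, frequent allocations.
    let _entity = alloc.pool_alloc::<EntityData>();

    // Tags: attribute allocations to a subsystem for diagnostics and budgets.
    alloc.with_tag("physics", |a| {
        let _scratch = a.frame_vec::<Contact>();
    });

    alloc.end_frame(); // all frame-scoped memory released in O(1)
}
```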
Traditional allocators (malloc, jemalloc) optimize for general-case throughput. Game engines have different needs:
The Problem:
for frame in 0..1000000 {
let contacts: Vec<Contact> = physics.detect_collisions();
// 1000+ malloc calls per frame
// Memory scattered across heap
// Fragmentation builds up
// Unpredictable frame times
}
The framealloc Solution:
let alloc = SmartAlloc::new(Default::default());
for frame in 0..1000000 {
alloc.begin_frame();
let contacts = alloc.frame_vec::<Contact>();
// Single bump pointer, contiguous memory, cache-friendly
alloc.end_frame();
// Everything freed in O(1), zero fragmentation
}
Getting Started Guide – Install, write your first allocation, understand core concepts.
Start here if: You're evaluating framealloc or just installed it.
Patterns Guide – Frame loops, threading, organization, common pitfalls.
Start here if: You've used framealloc basics and want to structure real applications.
| Domain | Guide | Description |
|---|---|---|
| Game Development | Game Dev Guide | ECS, rendering, audio, level streaming |
| Physics | Rapier Integration | Contact generation, queries, performance |
| Async | Async Guide | Safe patterns, TaskAlloc, avoiding frame violations |
| Performance | Performance Guide | Batch allocation, profiling, benchmarks |
Advanced Guide – Custom allocators, internals, NUMA awareness, instrumentation.
Start here if: You're extending framealloc or need maximum performance.
| Resource | Description |
|---|---|
| API Documentation | Complete API reference |
| Cookbook | Copy-paste recipes for common tasks |
| Migration Guide | Coming from other allocators |
| Troubleshooting | Common issues and solutions |
| TECHNICAL.md | Architecture and implementation details |
| CHANGELOG.md | Version history |
# Beginner (0-2 hours)
cargo run --example 01_hello_framealloc # Simplest: begin_frame, alloc, end_frame
cargo run --example 02_frame_loop # Typical game loop with frame allocations
cargo run --example 03_pools_and_heaps # When to use frame vs pool vs heap
# Intermediate (2-20 hours)
cargo run --example 04_threading # TransferHandle and FrameBarrier
cargo run --example 05_tags_and_budgets # Organizing allocations, enforcing limits
# Advanced (20+ hours)
cargo run --example 06_custom_allocator # Implementing AllocatorBackend
cargo run --example 07_batch_optimization # Using frame_alloc_batch for particles
Default Rust (Vec, Box):
// Before:
let scratch = vec![0u8; 1024];

// After:
let scratch = alloc.frame_slice::<u8>(1024);
bumpalo:
// bumpalo:
let bump = Bump::new();
let x = bump.alloc(42);
bump.reset();

// framealloc:
alloc.begin_frame();
let x = alloc.frame_alloc::<i32>();
alloc.end_frame();
C++ game allocators: Frame allocators → frame_alloc() | Object pools → pool_alloc() | Custom → AllocatorBackend trait
See Migration Guide for detailed conversion steps.
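As a concrete illustration of the typical conversion, here is a hedged sketch of moving a per-frame heap Vec into the frame arena; `Physics`, `Contact`, and `detect_collisions` are stand-ins, and pushing into the frame-backed vector is assumed to behave like a standard Vec.

```rust
use framealloc::SmartAlloc;

// Stand-in types for this sketch only.
struct Contact;
struct Physics;
impl Physics {
    fn detect_collisions(&self) -> Vec<Contact> { Vec::new() }
}

// Before: a fresh heap-backed Vec every frame.
fn contacts_before(physics: &Physics) -> Vec<Contact> {
    let mut contacts = Vec::new();
    for c in physics.detect_collisions() {
        contacts.push(c);
    }
    contacts
}

// After: the same work inside a frame scope; nothing escapes end_frame().
fn contacts_after(alloc: &SmartAlloc, physics: &Physics) {
    alloc.begin_frame();
    let mut contacts = alloc.frame_vec::<Contact>(); // bump-allocated, contiguous
    for c in physics.detect_collisions() {
        contacts.push(c); // assumed Vec-like push; see the Migration Guide
    }
    // ...consume `contacts` before the frame ends...
    alloc.end_frame(); // contacts released here in O(1)
}
```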
use framealloc::{SmartAlloc, AllocConfig};
fn main() {
let alloc = SmartAlloc::new(AllocConfig::default());
loop {
alloc.begin_frame();
let temp = alloc.frame_alloc::<TempData>();
alloc.end_frame();
}
}
use bevy::prelude::*;
use framealloc::bevy::SmartAllocPlugin;
fn main() {
App::new()
.add_plugins(DefaultPlugins)
.add_plugins(SmartAllocPlugin::default())
.run();
}
fn physics_system(alloc: Res<framealloc::bevy::AllocResource>) {
let contacts = alloc.frame_vec::<Contact>();
}
use framealloc::{SmartAlloc, AllocConfig};
let alloc = SmartAlloc::new(AllocConfig::default());
loop {
alloc.begin_frame();
// Frame allocation – bump pointer, no locks
let scratch = alloc.frame_alloc::<[f32; 1024]>();
// Pool allocation – O(1) from free list
let entity = alloc.pool_alloc::<EntityData>();
// Tagged allocation – attribute to subsystem
alloc.with_tag("physics", |a| {
let contacts = a.frame_vec::<Contact>();
});
alloc.end_frame();
}
// Explicit cross-thread transfers
let handle = alloc.frame_box_for_transfer(data);
worker_channel.send(handle);
// Frame barriers for deterministic sync
let barrier = FrameBarrier::new(3);
barrier.signal_frame_complete();
barrier.wait_all();
// Per-thread budgets
alloc.set_thread_frame_budget(megabytes(8));
fa-insight – VS Code extension for framealloc-aware development:
fn physics_update(alloc: &SmartAlloc) { // 2.1 MB
// CodeLens shows: current usage, trend, sparkline
alloc.with_tag("physics", |a| {
let contacts = a.frame_vec::<Contact>();
});
}
Features: CodeLens memory display, trend graphs, budget alerts at 80%+ usage.
Install: Search "FA Insight" in VS Code Marketplace
use framealloc::tokio::{TaskAlloc, AsyncPoolGuard};
// Main thread: frame allocations OK
alloc.begin_frame();
let scratch = alloc.frame_vec::<f32>();
// Async tasks: use TaskAlloc (pool-backed, auto-cleanup)
tokio::spawn(async move {
let mut task = TaskAlloc::new(&alloc_clone);
let data = task.alloc_box(load_asset().await);
});
alloc.end_frame();
Key principle: Frame allocations stay on main thread, async tasks use pool/heap.
Enable: framealloc = { version = "0.10", features = ["tokio"] }
See the Async Guide for full async-safety details.
⚠️ SAFETY FIRST: Batch APIs use raw pointers
139x faster than individual allocations, but requires unsafe:
let items = alloc.frame_alloc_batch::<Item>(1000);
// SAFETY REQUIREMENTS:
// 1. Indices must be within 0..count
// 2. Must initialize with std::ptr::write before reading
// 3. Pointers invalid after end_frame()
// 4. Not Send/Sync - don't pass to other threads
unsafe {
for i in 0..1000 {
let item = items.add(i);
std::ptr::write(item, Item::new(i));
}
}
Specialized sizes (zero overhead, no unsafe):
let [a, b] = alloc.frame_alloc_2::<Vec2>(); // Pairs
let [a, b, c, d] = alloc.frame_alloc_4::<Vertex>(); // Quads
let items = alloc.frame_alloc_8::<u64>(); // Cache line
Frame-aware wrappers for the Rapier physics engine (v0.31):
use framealloc::{SmartAlloc, rapier::PhysicsWorld2D};
let mut physics = PhysicsWorld2D::new();
alloc.begin_frame();
let events = physics.step_with_events(&alloc);
for contact in events.contacts {
println!("Contact: {:?}", contact);
}
alloc.end_frame();
Why Rapier v0.31 matters: Rapier v0.31 refactored broad-phase and query APIs. If you're using Rapier ≤ 0.30, use framealloc v0.9.0 instead.
Enable: framealloc = { version = "0.10", features = ["rapier"] }
See Rapier Integration Guide for full documentation.
framealloc now supports unified CPU-GPU memory management with clean separation and optional GPU backends.
GPU functionality is split across Cargo features: gpu (backend-agnostic traits, no backend), gpu-vulkan (Vulkan backend), and coordinator (unified CPU-GPU coordination).

# Enable GPU support (no backend yet)
framealloc = { version = "0.11", features = ["gpu"] }
# Enable Vulkan backend
framealloc = { version = "0.11", features = ["gpu-vulkan"] }
# Enable unified CPU-GPU coordination
framealloc = { version = "0.11", features = ["gpu-vulkan", "coordinator"] }
#[cfg(feature = "coordinator")]
use framealloc::coordinator::UnifiedAllocator;
use framealloc::gpu::traits::{BufferUsage, MemoryType};
// Create unified allocator
let mut unified = UnifiedAllocator::new(cpu_alloc, gpu_alloc);
// Begin frame
unified.begin_frame();
// Create staging buffer for CPU-GPU transfer
let mut staging = unified.create_staging_buffer(2048)?;
if let Some(slice) = staging.cpu_slice_mut() {
slice.copy_from_slice(&vertex_data);
}
// Transfer to GPU
unified.transfer_to_gpu(&mut staging)?;
// Check usage
let (cpu_bytes, gpu_bytes) = unified.get_usage();
println!("CPU: {} MB, GPU: {} MB", cpu_bytes / 1024 / 1024, gpu_bytes / 1024 / 1024);
unified.end_frame();
Why Vulkan First? Vulkan provides the most explicit control over memory allocation, making it ideal for demonstrating framealloc's intent-driven approach. Its low-level nature exposes all the memory concepts we abstract (device-local, host-visible, staging buffers), serving as the perfect reference implementation.
Planned Backend Support
| Platform | Status | Notes |
|---|---|---|
| Vulkan | Available | Low-level, explicit memory control |
| Direct3D 11/12 | Planned | Windows gaming platforms |
| Metal | Planned | Apple ecosystem (iOS/macOS) |
| WebGPU | Future | Browser-based applications |
Generic GPU Usage
You can use framealloc's GPU traits without committing to a specific backend:
use framealloc::gpu::{GpuMemoryIntent, GpuLifetime, GpuAllocRequirements};
// Intent-driven allocation works with any backend
let req = GpuAllocRequirements::new(
size,
GpuMemoryIntent::Staging, // Expresses WHAT, not HOW
GpuLifetime::Frame, // Clear lifetime semantics
);
// Backend-agnostic allocation
let buffer = allocator.allocate(req)?;
The intent-based design ensures your code remains portable as new backends are added. Simply swap the allocator implementation without changing allocation logic.
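The sketch below mocks that portability pattern in self-contained form: allocation logic is written once against a backend trait, and the concrete backend is swapped at construction time. The `GpuBackend` trait, its `allocate` signature, and both backend structs here are mock stand-ins, not framealloc's real GPU API (which lives in framealloc::gpu::traits).

```rust
// Mock types standing in for framealloc::gpu's real ones, for illustration only.
struct GpuAllocRequirements { size: u64 }
struct GpuBuffer { size: u64 }

// A stand-in for a backend-agnostic allocator trait.
trait GpuBackend {
    fn allocate(&mut self, req: GpuAllocRequirements) -> Result<GpuBuffer, String>;
}

struct VulkanBackend;
struct NullBackend;

impl GpuBackend for VulkanBackend {
    fn allocate(&mut self, req: GpuAllocRequirements) -> Result<GpuBuffer, String> {
        Ok(GpuBuffer { size: req.size }) // a real backend would call into Vulkan here
    }
}

impl GpuBackend for NullBackend {
    fn allocate(&mut self, req: GpuAllocRequirements) -> Result<GpuBuffer, String> {
        Ok(GpuBuffer { size: req.size }) // e.g. a headless/testing backend
    }
}

// Allocation logic written once against the trait: swapping the backend
// does not change this function.
fn upload_vertices(backend: &mut dyn GpuBackend, bytes: u64) -> Result<GpuBuffer, String> {
    backend.allocate(GpuAllocRequirements { size: bytes })
}

fn main() -> Result<(), String> {
    let mut vk = VulkanBackend;
    let mut null = NullBackend;
    let _a = upload_vertices(&mut vk, 2048)?;
    let _b = upload_vertices(&mut null, 2048)?;
    Ok(())
}
```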
cargo-fa detects memory intent violations before runtime.
cargo install --path cargo-fa
# Check specific categories
cargo fa --dirtymem # Frame escape, hot loop allocations
cargo fa --async-safety # Async/await boundary issues
cargo fa --threading # Cross-thread frame access
cargo fa --all # Run all checks
# CI integration
cargo fa --all --format sarif # GitHub Actions
| Range | Category | Examples |
|---|---|---|
| FA2xx | Threading | Cross-thread access, barrier mismatch |
| FA6xx | Lifetime | Frame escape, hot loops, missing boundaries |
| FA7xx | Async | Allocation across await, closure capture |
| FA9xx | Rapier | QueryFilter import, step_with_events usage |
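As a rough illustration of what the FA6xx lifetime checks target, the sketch below annotates a frame-escape pattern; the comments are illustrative, not actual cargo fa output, and `Contact` is a stand-in type.

```rust
use framealloc::SmartAlloc;

struct Contact; // stand-in payload type

fn physics_step(alloc: &SmartAlloc) {
    alloc.begin_frame();
    let contacts = alloc.frame_vec::<Contact>();

    // Fine: frame-scoped data is consumed before end_frame().
    let _ = &contacts;

    // Frame escape (FA6xx territory): returning `contacts`, stashing it in a
    // struct, or moving it into a `static` would let it outlive end_frame();
    // that is the pattern `cargo fa --dirtymem` is designed to flag.

    alloc.end_frame(); // frame memory is reclaimed here
}
```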
| Feature | Description |
|---|---|
| bevy | Bevy ECS plugin integration |
| rapier | Rapier physics engine integration |
| tokio | Async/await support with Tokio |
| parking_lot | Faster mutex implementation |
| debug | Memory poisoning, allocation backtraces |
| minimal | Disable statistics for max performance |
| prefetch | Hardware prefetch hints (x86_64) |
Allocation priority minimizes latency: in typical game workloads, more than 90% of allocations hit the fast frame-arena path.
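A rough sketch of how that tiering plays out in practice; heap here means ordinary Box allocation, and the types are illustrative.

```rust
use framealloc::{AllocConfig, SmartAlloc};

// Illustrative types for the sketch.
struct ScratchRow([f32; 64]);
struct Particle { _life: f32 }
struct LevelData { _tiles: Vec<u32> }

fn tiered_allocation() {
    let alloc = SmartAlloc::new(AllocConfig::default());
    alloc.begin_frame();

    // Hot path: per-frame scratch goes to the frame arena (bump pointer).
    let _scratch = alloc.frame_alloc::<ScratchRow>();

    // Small, frequently recycled objects: pool allocation, O(1) from a free list.
    let _particle = alloc.pool_alloc::<Particle>();

    // Long-lived data that outlives many frames: ordinary heap allocation.
    let _level = Box::new(LevelData { _tiles: Vec::new() });

    alloc.end_frame();
}
```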
Licensed under either of:
at your option.