framealloc

Crate: framealloc (crates.io / lib.rs)
Version: 0.11.1
Created: 2025-12-20
Updated: 2025-12-26
Description: Intent-aware, thread-smart memory allocation for Rust game engines
Homepage: https://github.com/YelenaTor/framealloc
Repository: https://github.com/YelenaTor/framealloc
Size: 942,027 bytes
Author: YoruXIII (YelenaTor)

Documentation: https://docs.rs/framealloc

README

⚠ïļ Deprecation Notice

framealloc is being deprecated in favor of memkit.

As of v0.11.1, all public types emit deprecation warnings. The library remains fully functional, but will not receive new features. memkit is a complete rewrite with a cleaner architecture, better modularity, and expanded capabilities.

I sincerely apologize if this transition causes disruption to your projects. Maintaining multiple versions long-term isn't feasible, and memkit represents the better path forward. A migration guide will be provided with memkit's release.

Thank you for using framealloc. 🙏

— Yelena


framealloc

Intent-driven memory allocation for high-performance Rust applications


Why • Docs • Quick Start • Features • GPU Support • Static Analysis


Overview

framealloc is a deterministic, frame-based memory allocation library for Rust game engines and real-time applications. It provides predictable performance through explicit lifetimes and scales seamlessly from single-threaded to multi-threaded workloads.

Not a general-purpose allocator replacement. Purpose-built for game engines, renderers, simulations, and real-time systems.

Key Capabilities

  • Frame Arenas: lock-free bump allocation, reset per frame
  • Object Pools: O(1) reuse for small, frequent allocations
  • Thread Coordination: explicit transfers, barriers, per-thread budgets
  • Static Analysis: cargo fa catches memory mistakes at build time
  • Runtime Diagnostics: behavior filter detects pattern violations

Why framealloc?

Traditional allocators (malloc, jemalloc) optimize for general-case throughput. Game engines have different needs:

The Problem:

for frame in 0..1000000 {
    let contacts: Vec<Contact> = physics.detect_collisions();
    // 1000+ malloc calls per frame
    // Memory scattered across heap
    // Fragmentation builds up
    // Unpredictable frame times
}

The framealloc Solution:

let alloc = SmartAlloc::new(Default::default());

for frame in 0..1000000 {
    alloc.begin_frame();
    let contacts = alloc.frame_vec::<Contact>();
    // Single bump pointer, contiguous memory, cache-friendly
    alloc.end_frame();
    // Everything freed in O(1), zero fragmentation
}

Results:

  • 139x faster than malloc for batch allocations
  • Stable frame times — no GC pauses, no fragmentation
  • Explicit lifetimes — frame/pool/heap explicit in code
  • Observable — know exactly where memory goes
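
The frame-arena path behind these numbers is, at bottom, a bump allocator with an O(1) reset. A minimal single-threaded sketch of the idea (illustrative only, not framealloc's actual implementation):

```rust
// Minimal bump-arena sketch: allocate by advancing an offset,
// "free" everything at once by rewinding it. Illustrative only.
struct FrameArena {
    buf: Vec<u8>,
    offset: usize,
}

impl FrameArena {
    fn new(capacity: usize) -> Self {
        FrameArena { buf: vec![0u8; capacity], offset: 0 }
    }

    /// Bump-allocate `size` bytes at `align` (a power of two).
    /// Returns the start index into the arena, or None if full.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let start = (self.offset + align - 1) & !(align - 1); // round up
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None;
        }
        self.offset = end;
        Some(start)
    }

    /// O(1) reset: everything allocated this frame is gone at once.
    fn reset(&mut self) {
        self.offset = 0;
    }
}

fn main() {
    let mut arena = FrameArena::new(1024);
    assert_eq!(arena.alloc(100, 8), Some(0));
    assert_eq!(arena.alloc(100, 8), Some(104)); // 100 rounded up to 8
    arena.reset(); // end of frame
    assert_eq!(arena.alloc(100, 8), Some(0)); // memory reused
    println!("ok");
}
```

A real frame arena adds thread-local blocks and atomic bumping, but the reset-is-free property is exactly this.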

Documentation & Learning Path

Getting Started (0-2 hours)

Getting Started Guide — Install, write your first allocation, understand core concepts.

Start here if: You're evaluating framealloc or just installed it.

Common Patterns (2-20 hours)

Patterns Guide — Frame loops, threading, organization, common pitfalls.

Start here if: You've used framealloc basics and want to structure real applications.

Domain Guides

  • Game Development (Game Dev Guide): ECS, rendering, audio, level streaming
  • Physics (Rapier Integration): contact generation, queries, performance
  • Async (Async Guide): safe patterns, TaskAlloc, avoiding frame violations
  • Performance (Performance Guide): batch allocation, profiling, benchmarks

Advanced Topics (20-100 hours)

Advanced Guide — Custom allocators, internals, NUMA awareness, instrumentation.

Start here if: You're extending framealloc or need maximum performance.

Reference

  • API Documentation: complete API reference
  • Cookbook: copy-paste recipes for common tasks
  • Migration Guide: coming from other allocators
  • Troubleshooting: common issues and solutions
  • TECHNICAL.md: architecture and implementation details
  • CHANGELOG.md: version history

Examples

# Beginner (0-2 hours)
cargo run --example 01_hello_framealloc    # Simplest: begin_frame, alloc, end_frame
cargo run --example 02_frame_loop          # Typical game loop with frame allocations
cargo run --example 03_pools_and_heaps     # When to use frame vs pool vs heap

# Intermediate (2-20 hours)
cargo run --example 04_threading           # TransferHandle and FrameBarrier
cargo run --example 05_tags_and_budgets    # Organizing allocations, enforcing limits

# Advanced (20+ hours)
cargo run --example 06_custom_allocator    # Implementing AllocatorBackend
cargo run --example 07_batch_optimization  # Using frame_alloc_batch for particles

Coming From...

Default Rust (Vec, Box):

// Before:
let scratch = vec![0u8; 1024];

// After:
let scratch = alloc.frame_slice::<u8>(1024);

bumpalo:

// bumpalo:
let bump = Bump::new();
let x = bump.alloc(42);
bump.reset();

// framealloc:
alloc.begin_frame();
let x = alloc.frame_alloc::<i32>();
alloc.end_frame();

C++ game allocators: Frame allocators → frame_alloc() | Object pools → pool_alloc() | Custom → AllocatorBackend trait

See Migration Guide for detailed conversion steps.


Quick Start

Basic Usage

use framealloc::{SmartAlloc, AllocConfig};

fn main() {
    let alloc = SmartAlloc::new(AllocConfig::default());

    loop {
        alloc.begin_frame();
        let temp = alloc.frame_alloc::<TempData>();
        alloc.end_frame();
    }
}

Bevy Integration

use bevy::prelude::*;
use framealloc::bevy::SmartAllocPlugin;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(SmartAllocPlugin::default())
        .run();
}

fn physics_system(alloc: Res<framealloc::bevy::AllocResource>) {
    let contacts = alloc.frame_vec::<Contact>();
}

Features

Core Allocation

use framealloc::{SmartAlloc, AllocConfig};

let alloc = SmartAlloc::new(AllocConfig::default());

loop {
    alloc.begin_frame();
    
    // Frame allocation — bump pointer, no locks
    let scratch = alloc.frame_alloc::<[f32; 1024]>();
    
    // Pool allocation — O(1) from free list
    let entity = alloc.pool_alloc::<EntityData>();
    
    // Tagged allocation — attribute to subsystem
    alloc.with_tag("physics", |a| {
        let contacts = a.frame_vec::<Contact>();
    });
    
    alloc.end_frame();
}
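
Pool allocation's O(1) claim rests on a free list: freed slot indices are pushed onto a stack and popped on the next allocation. A self-contained sketch of that mechanism (not framealloc's actual pool, which is typed and lock-free):

```rust
// Fixed-capacity object pool sketch: O(1) alloc and free via a
// stack of free slot indices. Illustrative only.
struct Pool<T> {
    slots: Vec<Option<T>>,
    free: Vec<usize>, // stack of available slot indices
}

impl<T> Pool<T> {
    fn new(capacity: usize) -> Self {
        Pool {
            slots: (0..capacity).map(|_| None).collect(),
            free: (0..capacity).rev().collect(), // pop yields 0 first
        }
    }

    /// O(1): pop a free slot index and store the value there.
    fn alloc(&mut self, value: T) -> Option<usize> {
        let idx = self.free.pop()?;
        self.slots[idx] = Some(value);
        Some(idx)
    }

    /// O(1): take the value out and recycle the slot index.
    fn free(&mut self, idx: usize) -> Option<T> {
        let value = self.slots[idx].take()?;
        self.free.push(idx);
        Some(value)
    }
}

fn main() {
    let mut pool: Pool<u32> = Pool::new(2);
    let a = pool.alloc(10).unwrap();
    let _b = pool.alloc(20).unwrap();
    assert!(pool.alloc(30).is_none()); // capacity exhausted
    pool.free(a);
    assert_eq!(pool.alloc(40), Some(a)); // slot recycled in O(1)
    println!("ok");
}
```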

Thread Coordination (v0.6.0)

// Explicit cross-thread transfers
let handle = alloc.frame_box_for_transfer(data);
worker_channel.send(handle);

// Frame barriers for deterministic sync
let barrier = FrameBarrier::new(3);
barrier.signal_frame_complete();
barrier.wait_all();

// Per-thread budgets
alloc.set_thread_frame_budget(megabytes(8));
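
The signal-then-wait rendezvous above has the same shape as std::sync::Barrier: no thread proceeds to the next frame until every worker has finished the current one. A sketch of that pattern using only the standard library:

```rust
use std::sync::{Arc, Barrier};
use std::thread;

// Three workers each finish their share of frame work, then
// rendezvous before anyone proceeds. Same rendezvous shape as
// signal_frame_complete / wait_all, sketched with std only.
fn main() {
    let barrier = Arc::new(Barrier::new(3));
    let handles: Vec<_> = (0..3)
        .map(|worker| {
            let barrier = Arc::clone(&barrier);
            thread::spawn(move || {
                // ... per-thread frame work (physics, AI, audio) ...
                println!("worker {worker} done");
                barrier.wait(); // block until all three arrive
                // safe to start the next frame past this point
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("frame complete");
}
```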

IDE Integration (v0.7.0)

fa-insight — VS Code extension for framealloc-aware development:

fn physics_update(alloc: &SmartAlloc) {  // 💾 2.1 MB ↗ 📊
    // CodeLens shows: current usage, trend, sparkline
    alloc.with_tag("physics", |a| {
        let contacts = a.frame_vec::<Contact>();
    });
}

Features: CodeLens memory display, trend graphs, budget alerts at 80%+ usage.

Install: Search "FA Insight" in VS Code Marketplace

Tokio Integration (v0.8.0)

use framealloc::tokio::{TaskAlloc, AsyncPoolGuard};

// Main thread: frame allocations OK
alloc.begin_frame();
let scratch = alloc.frame_vec::<f32>();

// Async tasks: use TaskAlloc (pool-backed, auto-cleanup)
tokio::spawn(async move {
    let mut task = TaskAlloc::new(&alloc_clone);
    let data = task.alloc_box(load_asset().await);
});

alloc.end_frame();

Key principle: Frame allocations stay on main thread, async tasks use pool/heap.

Enable: framealloc = { version = "0.10", features = ["tokio"] }

See Async Guide for the full async safety guide.

Batch Allocations (v0.9.0)

⚠ïļ SAFETY FIRST: Batch APIs use raw pointers

139x faster than individual allocations, but requires unsafe:

let items = alloc.frame_alloc_batch::<Item>(1000);

// SAFETY REQUIREMENTS:
// 1. Indices must be within 0..count
// 2. Must initialize with std::ptr::write before reading
// 3. Pointers invalid after end_frame()
// 4. Not Send/Sync - don't pass to other threads

unsafe {
    for i in 0..1000 {
        let item = items.add(i);
        std::ptr::write(item, Item::new(i));
    }
}
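
The write-before-read rule applies to any uninitialized storage. Here is the same discipline shown standalone with std::mem::MaybeUninit, so it compiles without framealloc:

```rust
use std::mem::MaybeUninit;

// Batch-init discipline in isolation: every slot is written
// before any slot is read. MaybeUninit stands in for the raw
// pointers a batch allocation would hand back.
fn main() {
    const N: usize = 8;
    let mut items = [MaybeUninit::<u64>::uninit(); N];

    for (i, slot) in items.iter_mut().enumerate() {
        slot.write(i as u64 * 2); // initialize before reading
    }

    // SAFETY: every slot was initialized in the loop above.
    let sum: u64 = items.iter().map(|s| unsafe { s.assume_init() }).sum();
    assert_eq!(sum, 56); // 2 * (0 + 1 + ... + 7)
    println!("sum = {sum}");
}
```

Reading a slot that was never written is undefined behavior, which is exactly why requirement 2 in the list above exists.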

Specialized sizes (zero overhead, no unsafe):

let [a, b] = alloc.frame_alloc_2::<Vec2>();       // Pairs
let [a, b, c, d] = alloc.frame_alloc_4::<Vertex>(); // Quads
let items = alloc.frame_alloc_8::<u64>();         // Cache line

Rapier Physics Integration (v0.10.0)

Frame-aware wrappers for Rapier physics engine v0.31:

use framealloc::{SmartAlloc, rapier::PhysicsWorld2D};

let mut physics = PhysicsWorld2D::new();

alloc.begin_frame();
let events = physics.step_with_events(&alloc);
for contact in events.contacts {
    println!("Contact: {:?}", contact);
}
alloc.end_frame();

Why Rapier v0.31 matters: Rapier v0.31 refactored broad-phase and query APIs. If you're using Rapier ≤ v0.30, use framealloc v0.9.0 instead.

Enable: framealloc = { version = "0.10", features = ["rapier"] }

See Rapier Integration Guide for full documentation.


GPU Support (v0.11.0)

framealloc now supports unified CPU-GPU memory management with clean separation and optional GPU backends.

Architecture

  • CPU Module: Always available, zero GPU dependencies
  • GPU Module: Feature-gated (gpu), backend-agnostic traits
  • Coordinator Module: Bridges CPU and GPU (coordinator feature)

Feature Flags

# Enable GPU support (no backend yet)
framealloc = { version = "0.11", features = ["gpu"] }

# Enable Vulkan backend
framealloc = { version = "0.11", features = ["gpu-vulkan"] }

# Enable unified CPU-GPU coordination
framealloc = { version = "0.11", features = ["gpu-vulkan", "coordinator"] }

Quick Example

#[cfg(feature = "coordinator")]
use framealloc::coordinator::UnifiedAllocator;
use framealloc::gpu::traits::{BufferUsage, MemoryType};

// Create unified allocator
let mut unified = UnifiedAllocator::new(cpu_alloc, gpu_alloc);

// Begin frame
unified.begin_frame();

// Create staging buffer for CPU-GPU transfer
let mut staging = unified.create_staging_buffer(2048)?;
if let Some(slice) = staging.cpu_slice_mut() {
    slice.copy_from_slice(&vertex_data);
}

// Transfer to GPU
unified.transfer_to_gpu(&mut staging)?;

// Check usage
let (cpu_bytes, gpu_bytes) = unified.get_usage();
println!("CPU: {} MB, GPU: {} MB", cpu_bytes / 1024 / 1024, gpu_bytes / 1024 / 1024);

unified.end_frame();

Key Benefits

  • Zero overhead for CPU-only users (no new deps)
  • Backend-agnostic GPU traits (Vulkan today, more tomorrow)
  • Unified budgeting across CPU and GPU memory
  • Explicit transfers - no hidden synchronization costs

GPU Backend Roadmap

Why Vulkan First? Vulkan provides the most explicit control over memory allocation, making it ideal for demonstrating framealloc's intent-driven approach. Its low-level nature exposes all the memory concepts we abstract (device-local, host-visible, staging buffers), serving as the perfect reference implementation.

Planned Backend Support

  • Vulkan: ✅ Available (low-level, explicit memory control)
  • Direct3D 11/12: 🔄 Planned (Windows gaming platforms)
  • Metal: 🔄 Planned (Apple ecosystem: iOS/macOS)
  • WebGPU: 🔄 Future (browser-based applications)

Generic GPU Usage

You can use framealloc's GPU traits without committing to a specific backend:

use framealloc::gpu::{GpuMemoryIntent, GpuLifetime, GpuAllocRequirements};

// Intent-driven allocation works with any backend
let req = GpuAllocRequirements::new(
    size,
    GpuMemoryIntent::Staging,  // Expresses WHAT, not HOW
    GpuLifetime::Frame,        // Clear lifetime semantics
);

// Backend-agnostic allocation
let buffer = allocator.allocate(req)?;

The intent-based design ensures your code remains portable as new backends are added. Simply swap the allocator implementation without changing allocation logic.
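
That portability claim can be pictured as a trait boundary: allocation sites depend only on the requirements struct plus an allocator trait, and each backend implements the trait. A hypothetical sketch (the names mirror the README, but these definitions are invented stand-ins, not the crate's real internals):

```rust
// Hypothetical model of backend-agnostic, intent-driven GPU
// allocation. Names mirror the README; definitions are invented.
#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum GpuMemoryIntent { DeviceLocal, Staging }

#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
enum GpuLifetime { Frame, Persistent }

#[allow(dead_code)]
struct GpuAllocRequirements {
    size: usize,
    intent: GpuMemoryIntent, // WHAT the memory is for
    lifetime: GpuLifetime,   // how long it lives
}

trait GpuAllocator {
    fn allocate(&mut self, req: GpuAllocRequirements) -> Result<u64, String>;
}

// Trivial backend handing out offsets. A Vulkan backend would map
// Staging to host-visible memory and DeviceLocal to device memory.
struct MockBackend { next: u64 }

impl GpuAllocator for MockBackend {
    fn allocate(&mut self, req: GpuAllocRequirements) -> Result<u64, String> {
        if req.size == 0 {
            return Err("zero-sized allocation".into());
        }
        let handle = self.next;
        self.next += req.size as u64;
        Ok(handle)
    }
}

fn main() {
    let mut backend = MockBackend { next: 0 };
    let req = GpuAllocRequirements {
        size: 2048,
        intent: GpuMemoryIntent::Staging,
        lifetime: GpuLifetime::Frame,
    };
    let buffer = backend.allocate(req).unwrap();
    assert_eq!(buffer, 0);
    println!("allocated handle {buffer}");
}
```

Swapping MockBackend for a real backend changes no allocation-site code, which is the point of the intent/requirements split.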


Static Analysis

cargo-fa detects memory intent violations before runtime.

cargo install --path cargo-fa

# Check specific categories
cargo fa --dirtymem       # Frame escape, hot loop allocations
cargo fa --async-safety   # Async/await boundary issues
cargo fa --threading      # Cross-thread frame access
cargo fa --all            # Run all checks

# CI integration
cargo fa --all --format sarif  # GitHub Actions

Diagnostic code ranges:

  • FA2xx (Threading): cross-thread access, barrier mismatch
  • FA6xx (Lifetime): frame escape, hot loops, missing boundaries
  • FA7xx (Async): allocation across await, closure capture
  • FA9xx (Rapier): QueryFilter import, step_with_events usage

Cargo Features

  • bevy: Bevy ECS plugin integration
  • rapier: Rapier physics engine integration
  • tokio: async/await support with Tokio
  • parking_lot: faster mutex implementation
  • debug: memory poisoning, allocation backtraces
  • minimal: disable statistics for max performance
  • prefetch: hardware prefetch hints (x86_64)

Performance

Allocation priority minimizes latency:

  1. Frame arena — Bump pointer increment, no synchronization
  2. Thread-local pools — Free list pop, no contention
  3. Global pool refill — Mutex-protected, batched
  4. System heap — Fallback for oversized allocations

In typical game workloads, 90%+ of allocations hit the frame arena path.
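
The four-tier fallback can be pictured as a simple dispatcher: try the cheapest tier that fits, otherwise fall through. A toy model (the 4 KiB global-pool cutoff is an assumed value for illustration, not framealloc's actual threshold):

```rust
// Toy model of the allocation priority chain: each tier either
// serves the request or defers to the next, cheapest first.
#[derive(Debug, PartialEq)]
enum Tier { FrameArena, ThreadPool, GlobalRefill, SystemHeap }

fn choose_tier(size: usize, arena_free: usize, pool_free: usize) -> Tier {
    if size <= arena_free {
        Tier::FrameArena   // bump pointer, no synchronization
    } else if size <= pool_free {
        Tier::ThreadPool   // thread-local free list, no contention
    } else if size <= 4096 {
        Tier::GlobalRefill // batched, mutex-protected (assumed cutoff)
    } else {
        Tier::SystemHeap   // fallback for oversized allocations
    }
}

fn main() {
    assert_eq!(choose_tier(64, 1024, 4096), Tier::FrameArena);
    assert_eq!(choose_tier(2048, 1024, 4096), Tier::ThreadPool);
    assert_eq!(choose_tier(100_000, 1024, 4096), Tier::SystemHeap);
    println!("ok");
}
```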


License

Licensed under either of:

at your option.
