| Crates.io | fastalloc |
| lib.rs | fastalloc |
| version | 1.5.0 |
| created_at | 2025-10-15 20:57:34.942076+00 |
| updated_at | 2025-10-30 06:47:59.928221+00 |
| description | High-performance memory pooling library with type-safe handles, predictable latency, and zero fragmentation. Perfect for game engines, real-time systems, and high-churn workloads. |
| homepage | https://github.com/TIVerse/fastalloc |
| repository | https://github.com/TIVerse/fastalloc |
| max_upload_size | |
| id | 1884949 |
| size | 339,720 |
A high-performance memory pooling library for Rust with type-safe handles and zero-cost abstractions
🚀 Up to 1.4x faster allocation with predictable latency and zero fragmentation
🛠 Perfect for: Game engines, real-time systems, embedded applications, and high-churn workloads
fastalloc is a memory pooling library that provides efficient, type-safe memory management with minimal overhead. It's designed for performance-critical applications where allocation speed and memory locality matter.
Multiple Pool Types:
Advanced Allocation Strategies:
Performance Optimizations:
Developer Experience:
A memory pooling library for Rust with type-safe handles and RAII-based memory management. Provides 1.3-1.4x faster allocation than standard heap with the key benefits of predictable latency, zero fragmentation, and excellent cache locality.
Version 1.5.0 - Production-ready release with performance optimizations and comprehensive documentation. Repository: TIVerse/fastalloc.
🚀 Perfect for: Real-time systems, game engines, embedded devices, and high-churn workloads
💡 Key Benefits: Predictable latency, zero fragmentation, improved cache locality, deterministic behavior
Documentation:
Add this to your Cargo.toml:
[dependencies]
fastalloc = "1.0"
use fastalloc::FixedPool;
fn main() {
// Create a pool that can hold up to 1000 integers
let pool = FixedPool::<i32>::new(1000).expect("Failed to create pool");
// Allocate an integer from the pool
let mut handle = pool.allocate(42).expect("Failed to allocate");
// Use the allocated value
*handle += 1;
println!("Value: {}", *handle);
// The handle is automatically returned to the pool when dropped
}
use std::sync::Arc;
use fastalloc::ThreadSafePool;
use std::thread;
fn main() {
// Create a thread-safe pool
let pool = Arc::new(ThreadSafePool::<u64>::new(100).unwrap());
let mut handles = vec![];
for i in 0..10 {
let pool = Arc::clone(&pool);
handles.push(thread::spawn(move || {
let mut value = pool.allocate(i).unwrap();
*value *= 2;
*value
}));
}
for handle in handles {
println!("Thread result: {}", handle.join().unwrap());
}
}
let mut handle = pool.allocate(42).unwrap();
// Use the value
assert_eq!(*handle, 42);
*handle = 100;
assert_eq!(*handle, 100);
// Automatically returned to pool when handle is dropped
drop(handle);
Memory pools significantly improve performance in scenarios with frequent allocations:
| Domain | Use Case | Why It Matters |
|---|---|---|
| 🎮 Game Development | Entities, particles, physics objects | Maintain 60+ FPS by eliminating allocation stutter |
| 🎵 Real-Time Systems | Audio buffers, robotics control loops | Predictable latency for hard real-time constraints |
| 🌐 Web Servers | Request handlers, connection pooling | Handle 100K+ req/sec with minimal overhead |
| 📊 Data Processing | Temporary objects in hot paths | 50-100x speedup in tight loops |
| 🔬 Scientific Computing | Matrices, particles, graph nodes | Process millions of objects efficiently |
| 📱 Embedded Systems | Sensor data, IoT devices | Predictable memory usage, no fragmentation |
| 🤖 Machine Learning | Tensor buffers, batch processing | Reduce training time, optimize inference |
| 💰 Financial Systems | Order books, market data | Ultra-low latency trading systems |
Benchmark Results (criterion.rs, release mode with LTO):
| Operation | fastalloc | Standard Heap | Improvement |
|---|---|---|---|
| Fixed pool allocation (i32) | ~3.5 ns | ~4.8 ns | 1.3-1.4x faster |
| Growing pool allocation | ~4.6 ns | ~4.8 ns | ~1.05x faster |
| Allocation reuse (LIFO) | ~7.2 ns | N/A | Excellent cache locality |
See BENCHMARKS.md for detailed methodology and results.
Memory pools provide benefits beyond raw speed:
Best use cases:
Note: Modern system allocators (jemalloc, mimalloc) are highly optimized. Pools excel in specific scenarios rather than universally. Always benchmark your specific workload.
use fastalloc::{GrowingPool, PoolConfig, GrowthStrategy};
let config = PoolConfig::builder()
.capacity(100)
.max_capacity(Some(1000))
.growth_strategy(GrowthStrategy::Exponential { factor: 2.0 })
.alignment(64) // Cache-line aligned
.build()
.unwrap();
let pool = GrowingPool::with_config(config).unwrap();
use fastalloc::ThreadSafePool;
use std::sync::Arc;
use std::thread;
let pool = Arc::new(ThreadSafePool::<i32>::new(1000).unwrap());
let mut handles = vec![];
for i in 0..4 {
let pool_clone = Arc::clone(&pool);
handles.push(thread::spawn(move || {
let handle = pool_clone.allocate(i * 100).unwrap();
*handle
}));
}
for handle in handles {
println!("Result: {}", handle.join().unwrap());
}
use fastalloc::{PoolConfig, InitializationStrategy};
let config = PoolConfig::builder()
.capacity(100)
.reset_fn(
|| Vec::with_capacity(1024),
|v| v.clear(),
)
.build()
.unwrap();
use fastalloc::FixedPool;
let pool = FixedPool::new(1000).unwrap();
// Allocate multiple objects efficiently in one operation
let values = vec![1, 2, 3, 4, 5];
let handles = pool.allocate_batch(values).unwrap();
assert_eq!(handles.len(), 5);
// All handles automatically returned when dropped
#[cfg(feature = "stats")]
{
use fastalloc::FixedPool;
let pool = FixedPool::<i32>::new(100).unwrap();
// ... use pool ...
let stats = pool.statistics();
println!("Utilization: {:.1}%", stats.utilization_rate());
println!("Total allocations: {}", stats.total_allocations);
}
| Pool Type | Thread Safety | Growth | Overhead | Best For |
|---|---|---|---|---|
| FixedPool | ❌ | Fixed | Minimal | Single-threaded, predictable load |
| GrowingPool | ❌ | Dynamic | Low | Variable workloads |
| ThreadLocalPool | ⚠️ Per-thread | Fixed | Minimal | High-throughput parallel |
| ThreadSafePool | ✅ | Fixed | Medium | Shared state, moderate contention |
Pre-allocated fixed-size pool with O(1) operations and zero fragmentation.
let pool = FixedPool::<i32>::new(1000).unwrap();
When to use: Known maximum capacity, need absolute predictability
Dynamic pool that grows based on demand according to a configurable strategy.
let pool = GrowingPool::with_config(config).unwrap();
When to use: Variable load, want automatic scaling
Per-thread pool that avoids synchronization overhead.
let pool = ThreadLocalPool::<i32>::new(100).unwrap();
When to use: Rayon/parallel iterators, zero-contention needed
Lock-based concurrent pool safe for multi-threaded access.
let pool = ThreadSafePool::<i32>::new(1000).unwrap();
When to use: Shared pool across threads, moderate contention acceptable
Enable optional features in your Cargo.toml:
[dependencies]
fastalloc = { version = "1.0", features = ["stats", "serde", "parking_lot"] }
Available features:
| Feature | Description | Performance Impact |
|---|---|---|
std (default) |
Standard library support | N/A |
stats |
Pool statistics & monitoring | ~2% overhead |
serde |
Serialization support | None when unused |
parking_lot |
Faster mutex (vs std::sync) | 10-20% faster locking |
crossbeam |
Lock-free data structures | 30-50% better under contention |
tracing |
Structured instrumentation | Minimal when disabled |
lock-free |
Experimental lock-free pool | 2-3x faster (requires crossbeam) |
fastalloc works in no_std environments:
[dependencies]
fastalloc = { version = "1.0", default-features = false }
Run benchmarks with:
cargo bench
Benchmark results are available in the target/criterion directory after running the benchmarks.
Full API documentation is available on docs.rs.
Explore the examples/ directory for more usage examples:
basic_usage.rs - Basic pool usagethread_safe.rs - Thread-safe poolingcustom_allocator.rs - Implementing custom allocation strategiesembedded.rs - no_std usage exampleSee CHANGELOG.md for a detailed list of changes in each version.
We welcome contributions of all kinds! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
# Clone the repository
git clone https://github.com/TIVerse/fastalloc.git
cd fastalloc
# Install development dependencies
rustup component add rustfmt clippy
# Run tests
cargo test --all-features
# Run benchmarks
cargo bench
# Run lints
cargo clippy --all-targets -- -D warnings
cargo fmt -- --check
# Check for unused dependencies
cargo +nightly udeps
# Check for security vulnerabilities
cargo audit
Security is important to us. If you discover any security related issues:
We will acknowledge receipt within 48 hours and provide a timeline for a fix. Security issues will be prioritized and patched in expedited releases.
See SECURITY.md for our full security policy.
Licensed under either of:
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
We're building a list of projects using fastalloc. If you're using it, please consider adding your project!
Open Source Projects:
Use Cases in Production:
Research & Education:
Want to be listed? Open a PR or issue with your project details!
basic_usage.rs - Getting started with FixedPoolthread_safe.rs - Concurrent pool usagecustom_allocator.rs - Custom allocation strategiesgame_entities.rs - Game entity pooling exampleparticle_system.rs - High-performance particle systemasync_usage.rs - Using pools with async/awaitembedded.rs - no_std embedded examplestatistics.rs - Pool monitoring and statisticsSee CHANGELOG.md for version history.