| Crates.io | talc |
| lib.rs | talc |
| version | 4.4.3 |
| created_at | 2023-07-21 22:08:05.258031+00 |
| updated_at | 2025-06-14 23:57:52.452969+00 |
| description | A fast and flexible allocator for no_std and WebAssembly |
| homepage | |
| repository | https://github.com/SFBdragon/talc |
| max_upload_size | |
| id | 922740 |
| size | 279,385 |
If you find Talc useful, please consider leaving a tip via PayPal.
Talc is a fast and flexible memory allocator for no_std environments. Targeting WebAssembly? You can find WASM-specific usage and benchmarks here.
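For WASM targets, the global allocator setup looks roughly like the following. This is a minimal sketch: the `TalckWasm` alias is mentioned in the changelog below, while its `new_global` constructor is assumed from the WASM-specific documentation.

```rust
// TalckWasm integrates with WebAssembly's memory module for automatic heap
// management (see WasmHandler below); it performs no locking, hence the
// unsafe: this is only sound in a single-threaded WASM module
#[global_allocator]
static ALLOCATOR: talc::TalckWasm = unsafe { talc::TalckWasm::new_global() };
```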
As a global allocator:
```rust
use talc::*;

static mut ARENA: [u8; 10000] = [0; 10000];

#[global_allocator]
static ALLOCATOR: Talck<spin::Mutex<()>, ClaimOnOom> = Talc::new(unsafe {
    // if we're in a hosted environment, the Rust runtime may allocate before
    // main() is called, so we need to initialize the arena automatically
    ClaimOnOom::new(Span::from_array(core::ptr::addr_of!(ARENA).cast_mut()))
}).lock();

fn main() {
    let mut vec = Vec::with_capacity(100);
    vec.extend(0..300usize);
}
```
Or use it as an arena allocator via the Allocator API with spin as follows:
```rust
#![feature(allocator_api)]
use talc::*;
use core::alloc::{Allocator, Layout};

static mut ARENA: [u8; 10000] = [0; 10000];

fn main() {
    let talck = Talc::new(ErrOnOom).lock::<spin::Mutex<()>>();
    unsafe { talck.lock().claim(ARENA.as_mut().into()); }

    talck.allocate(Layout::new::<[u32; 16]>());
}
```
Note that while the spin crate's mutexes are used here, any lock implementing lock_api works.
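For instance, in a strictly single-threaded environment, talc's `locking::AssumeUnlockable` (see the changelog below) satisfies the `lock_api` traits without performing any real locking. A sketch:

```rust
use talc::*;

fn single_threaded_setup() {
    // AssumeUnlockable does no real locking, so this is only sound if the
    // allocator is never accessed concurrently
    let talck = Talc::new(ErrOnOom).lock::<talc::locking::AssumeUnlockable>();
    let _ = talck;
}
```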
See General Usage and Advanced Usage for more details.
The average occupied capacity upon first allocation failure when randomly allocating/deallocating/reallocating.
| Allocator | Average Random Actions Heap Efficiency |
|---|---|
| Dlmalloc | 99.14% |
| Rlsf | 99.06% |
| Talc | 98.97% |
| Linked List | 98.36% |
| Buddy Alloc | 63.14% |
The number of successful allocations, deallocations, and reallocations within the allotted time.

Label indicates the maximum within 50 standard deviations from the median. Max allocation size is 0x10000.
Here is the list of important Talc methods:
- `new`
- `get_allocated_span` - returns the minimum heap span containing all allocated memory in an established heap
- `get_counters` - if the `"counters"` feature is enabled, this returns a struct with allocation statistics
- `claim` - claim memory to establish a new heap
- `extend` - extend an established heap
- `truncate` - reduce the extent of an established heap
- `lock` - wraps the `Talc` in a `Talck`, which supports the `GlobalAlloc` and `Allocator` APIs
- `malloc`
- `free`
- `grow`
- `grow_in_place`
- `shrink`

Read their documentation for more info.
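A minimal sketch of the raw, unlocked API using some of these methods; `claim`, `malloc`, and `free` are unsafe, and `malloc`/`free` must agree on the layout:

```rust
use talc::*;
use core::alloc::Layout;

static mut ARENA: [u8; 4096] = [0; 4096];

fn raw_api_demo() {
    let mut talc = Talc::new(ErrOnOom);

    // claim establishes a heap over the arena, returning its actual extent
    unsafe { talc.claim(Span::from_array(core::ptr::addr_of!(ARENA).cast_mut())).unwrap() };

    let layout = Layout::new::<[u64; 8]>();
    // malloc hands out a NonNull<u8> on success
    let ptr = unsafe { talc.malloc(layout).unwrap() };
    unsafe { talc.free(ptr, layout) };
}
```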
`Span` is a handy little type for describing memory regions, as trying to manipulate `Range<*mut u8>` or `*mut [u8]` or `base_ptr`-`size` pairs tends to be inconvenient or annoying.
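A short sketch of working with `Span` (`extend` and `size` also appear in the OOM example below; `Span::empty` and `is_empty` are assumed from the crate's API):

```rust
use talc::*;

static mut MEMORY: [u8; 0x1000] = [0; 0x1000];

fn span_demo() {
    // describe the arena as a Span
    let span = Span::from_array(core::ptr::addr_of!(MEMORY).cast_mut());
    assert_eq!(span.size(), 0x1000);

    // describe a region 0x100 bytes larger downward and 0x200 bytes upward
    let bigger = span.extend(0x100, 0x200);
    assert_eq!(bigger.size(), 0x1000 + 0x100 + 0x200);

    // an empty Span works as a placeholder before a heap is established
    assert!(Span::empty().is_empty());
}
```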
The most powerful feature of the allocator is that it has a modular OOM handling system, allowing you to fail out of or recover from allocation failure easily.
Provided OomHandler implementations include:
- `ErrOnOom`: allocations fail on OOM
- `ClaimOnOom`: claims a heap upon first OOM, useful for initialization
- `WasmHandler`: integrates with WebAssembly's memory module for automatic heap management

As an example of a custom implementation, recovering by extending the heap is implemented below.
```rust
use talc::*;

struct MyOomHandler {
    heap: Span,
}

impl OomHandler for MyOomHandler {
    fn handle_oom(talc: &mut Talc<Self>, layout: core::alloc::Layout) -> Result<(), ()> {
        // Talc doesn't have enough memory, and we just got called!
        // We'll go through an example of how to handle this situation.

        // We can inspect `layout` to estimate how much we should free up for this allocation
        // or we can extend by any amount (increasing powers of two has good time complexity).
        // (Creating another heap with `claim` will also work.)

        // This function will be repeatedly called until we free up enough memory or
        // we return Err(()) causing allocation failure. Be careful to avoid conditions where
        // the heap isn't sufficiently extended indefinitely, causing an infinite loop.

        // an arbitrary address limit for the sake of example
        const HEAP_TOP_LIMIT: *mut u8 = 0x80000000 as *mut u8;

        let old_heap: Span = talc.oom_handler.heap;

        // we're going to extend the heap upward, doubling its size
        // but we'll be sure not to extend past the limit
        let new_heap: Span = old_heap.extend(0, old_heap.size()).below(HEAP_TOP_LIMIT);

        if new_heap == old_heap {
            // we won't be extending the heap, so we should return Err
            return Err(());
        }

        unsafe {
            // we're assuming the new memory up to HEAP_TOP_LIMIT is unused and allocatable
            talc.oom_handler.heap = talc.extend(old_heap, new_heap);
        }

        Ok(())
    }
}
```
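Wiring the handler up might look as follows; a sketch continuing the example above, where the arena and setup function are illustrative:

```rust
use talc::*;

static mut ARENA: [u8; 4096] = [0; 4096];

fn setup() -> Talc<MyOomHandler> {
    let span = Span::from_array(core::ptr::addr_of!(ARENA).cast_mut());
    let mut talc = Talc::new(MyOomHandler { heap: Span::empty() });
    // record the established heap so handle_oom can extend it later
    talc.oom_handler.heap = unsafe { talc.claim(span).unwrap() };
    talc
}
```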
"lock_api" (default): Provides the Talck locking wrapper type that implements GlobalAlloc."allocator" (default, requires nightly): Provides an Allocator trait implementation via Talck."nightly_api" (default, requires nightly): Provides the Span::from(*mut [T]) and Span::from_slice functions."counters": Talc will track heap and allocation metrics. Use Talc::get_counters to access them."allocator-api2": Talck will implement allocator_api2::alloc::Allocator if "allocator" is not active.Talc can be built on stable Rust by disabling "allocator" and "nightly_api". The MSRV is 1.67.1.
Disabling "nightly_api" disables Span::from(*mut [T]), Span::from(*const [T]), Span::from_const_slice and Span::from_slice.
This is a dlmalloc-style linked list allocator with boundary tagging and bucketing, aimed at general-purpose use cases. Allocation is O(n) worst case (but in practice it's near-constant time; see the microbenchmarks), while in-place reallocations and deallocations are O(1).
Additionally, the layout of chunk metadata is rearranged to allow for smaller minimum-size chunks, reducing the memory overhead of small allocations. The minimum chunk size is 3 * usize (24 bytes on 64-bit targets), with a single usize reserved per allocation. This is more efficient than dlmalloc and galloc, despite using a similar algorithm.
Update: All of this is currently in the works. No guarantees on when it will be done, but significant progress has been made.
- `pub use counters` to ensure `Counters` is publicly accessible when the `"counters"` feature is enabled.
- polarathene: Replace README relative links with fully-qualified links.
- polarathene: Improve docs for `stable_examples/examples/std_global_allocator.rs`.
- Improved docs for `stable_examples/examples/stable_allocator_api.rs` and `stable_examples/examples/std_global_allocator.rs`.
- Deprecated the `Span::from*` functions for converting from shared references and const pointers, as they make committing UB easy. These will be removed in v5.
- Fixed up a bunch of warnings all over the project.
- Added `except` to `Span`, which takes the set difference, potentially splitting the `Span`. Thanks bjorn3 for the suggestion!
- Added the `"allocator-api2"` feature, which allows using the `Allocator` trait on stable via the allocator-api2 crate. Thanks jess-sol!
- Added an implementation of `Display` for the counters. Hopefully this makes your logs a bit prettier.
- Added Frusa and RLSF to the benchmarks.
- Changed the random actions benchmark to measure over various allocation sizes.
- Optimized reallocation to allow other allocation operations to occur while memcpy-ing if an in-place reallocation failed.
- Added the `grow_in_place` function, which returns `Err` if growing the memory in-place isn't possible.
- Added `Span::from_*` and `From<>` functions for const pointers and shared references, e.g. `Span::from_const_array(addr_of!(MEMORY))`.
- Fix: Made `Talck` derive `Debug` again.
- Contribution by Ken Hoover: add Talc arena-style allocation size and perf WASM benchmarks.
- `wasm-size` now uses `wasm-opt`, giving more realistic size differences for users of wasm-pack.
- Improved shell scripts.
- Overhauled microbenchmarks.
- Added `test.sh` for it.
- Added the `"counters"` feature. Access the data via `talc.get_counters()`.
- Added `wasm-bench.sh` to run the WASM benchmarks (requires wasm-pack and deno).
- Improved `wasm-size` and `wasm-size.sh`.
- Changed `Talck`'s API to be more inline with Rust norms:
    - `Talck` now hides its internal structure (no more `.0`).
    - `Talck::talc()` has been replaced by `Talck::lock()`.
    - `Talck::new()` and `Talck::into_inner(self)` have been added.
    - Removed `TalckRef` and implemented the `Allocator` trait on `Talck` directly. No need to call `talck.allocator()` anymore.
- Moved `AssumeUnlockable` into `talc::locking::AssumeUnlockable`.
- Removed `Talc::lock_assume_single_threaded`; use `.lock::<talc::locking::AssumeUnlockable>()` if necessary.
- Accounted for `memory.grow` being called during the allocator's use.
- Moved `Span::from(*mut [T])` and `Span::from_slice` behind `nightly_api`.
- The `nightly_api` feature is default-enabled.
- `default-features = false` may cause unexpected errors if the gated functions are used. Consider adding `nightly_api` or using another function.
- `new_arena` no longer exists (use `new` and then `claim`).
- `init` has been replaced with `claim`.
- `claim`, `extend` and `truncate` now return the new heap extent.
- `InitOnOom` is now `ClaimOnOom`.
- Heaps now store a `usize` at the bottom.

To migrate from v2 to v3, keep in mind that you must keep track of the heaps if you want to resize them, by storing the returned `Span`s. Read `claim`, `extend` and `truncate`'s documentation for all the details.
- Added dlmalloc to the benchmarks.
- `TalckWasm`: let me know what breaks ;)
Find more details here.
- Tests are now passing on 32 bit targets.
- Documentation fixes and improvements for various items.
- Fixed using `lock_api` without `allocator`.
- Experimental WASM support has been added via `TalckWasm` on WASM targets.
- Removed the dependency on `spin` and switched to using `lock_api` (thanks Stefan Lankes). Use `talc.lock::<spin::Mutex<()>>()`, for example.
- Removed the requirement that the `Talc` struct must not be moved, and removed the `mov` function.
- Added `ErrOnOom` to do what it says on the tin. `InitOnOom` is similar but inits to the given span if completely uninitialized. Implement `OomHandler` on any struct to implement your own behaviour (the OOM handler state can be accessed from `handle_oom` via `talc.oom_handler`).
- Changes to `Span` and other changes to pass miri's Stacked Borrows checks.
- Improved the benchmarks, adding buddy_alloc and removing simple_chunk_allocator.