# rlsf
This crate implements the TLSF (Two-Level Segregated Fit) dynamic memory
allocation algorithm¹. Requires Rust 1.61.0 or later.

- **Allocation and deallocation operations are guaranteed to complete in
  constant time.** TLSF is suitable for real-time applications.

- **Fast and small.** You can have both. It was found to be smaller and
  faster² than most `no_std`-compatible allocator crates.

- **Accepts any kind of memory pool.** The low-level type
  [`Tlsf`](#tlsf-core-api) just divides any memory pool you provide (e.g., a
  `static` array) to serve allocation requests. The high-level type
  [`GlobalTlsf`](#globaltlsf-global-allocator) automatically acquires memory
  pages using standard methods on supported systems.

- **This crate supports `#![no_std]`.** It can be used in bare-metal and
  RTOS-based applications.

¹ M. Masmano, I. Ripoll, A. Crespo and J. Real, "TLSF: a new dynamic memory
allocator for real-time systems," *Proceedings. 16th Euromicro Conference on
Real-Time Systems, 2004. ECRTS 2004.*, Catania, Italy, 2004, pp. 79-88,
doi: 10.1109/EMRTS.2004.1311009.

² Compiled for and measured on an STM32F401 microcontroller using FarCri.rs.

## Measured Performance

![The result of latency measurement on STM32F401 is shown here. rlsf: 260–320 cycles. buddy-alloc: 340–440 cycles. umm_malloc: 300–700 cycles. dlmalloc: 450–750 cycles.](https://yvt.jp/files/programs/rlsf/time-cm4f-xf-3.svg)

![The result of code size measurement on WebAssembly is shown here. rlsf: 1267 bytes, rlsf + pool coalescing: 1584 bytes, wee_alloc: 1910 bytes, dlmalloc: 9613 bytes.](https://yvt.jp/files/programs/rlsf/size-wasm-xf.svg)

## Drawbacks

- **It does not support concurrent access.** A whole pool must be locked for
  allocation and deallocation. If you use a FIFO lock to protect the pool,
  the worst-case execution time will be `O(num_contending_threads)`. You
  should consider using a thread-caching memory allocator (e.g., TCMalloc,
  jemalloc) if maximal throughput in a highly concurrent environment is
  desired.

- **Segregated freelists with constant-time lookup cause internal
  fragmentation proportional to free block sizes.** The `SLLEN` parameter
  allows for adjusting the trade-off between fewer freelists and lower
  fragmentation.

- **No special handling for small allocations (one algorithm for all
  sizes).** This may lead to inefficiencies in allocation-heavy applications
  compared to modern scalable memory allocators, such as glibc and jemalloc.

## Examples

### `Tlsf`: Core API

```rust
use rlsf::Tlsf;
use std::{mem::MaybeUninit, alloc::Layout};

let mut pool = [MaybeUninit::uninit(); 65536];

// On 32-bit systems, the maximum block size is 16 << FLLEN = 65536 bytes.
// The worst-case internal fragmentation is (16 << FLLEN) / SLLEN - 2 = 4094 bytes.
// `'pool` represents the memory pool's lifetime (`pool` in this case).
let mut tlsf: Tlsf<'_, u16, u16, 12, 16> = Tlsf::new();
//                 ^^            ^^  ^^
//                  |             |   |
//              'pool             | SLLEN
//                            FLLEN

tlsf.insert_free_block(&mut pool);

unsafe {
    let mut ptr1 = tlsf.allocate(Layout::new::<u64>()).unwrap().cast::<u64>();
    let mut ptr2 = tlsf.allocate(Layout::new::<u64>()).unwrap().cast::<u64>();
    *ptr1.as_mut() = 42;
    *ptr2.as_mut() = 56;
    assert_eq!(*ptr1.as_ref(), 42);
    assert_eq!(*ptr2.as_ref(), 56);

    // `deallocate` needs the alignment of the `Layout` used for allocation.
    tlsf.deallocate(ptr1.cast(), Layout::new::<u64>().align());
    tlsf.deallocate(ptr2.cast(), Layout::new::<u64>().align());
}
```
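
### `GlobalTlsf`: Global Allocator

The following is a minimal sketch of using the high-level type as the program's
global allocator. It assumes `GlobalTlsf::new()` is available as a `const fn`
and that the build target is one of the supported systems mentioned above;
check the crate documentation for the exact platform support.

```rust
// Route all heap allocations through `GlobalTlsf`, which acquires memory
// pages from the system on demand (supported targets only).
#[global_allocator]
static ALLOC: rlsf::GlobalTlsf = rlsf::GlobalTlsf::new();

fn main() {
    // Ordinary allocating code now uses the TLSF allocator transparently.
    let mut v = vec![1, 2, 3];
    v.push(4);
    assert_eq!(v.iter().sum::<i32>(), 10);
}
```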