| Crates.io | frd_pu |
| lib.rs | frd_pu |
| version | 0.2.6 |
| created_at | 2025-08-22 14:25:31.125677+00 |
| updated_at | 2025-09-09 12:27:09.068342+00 |
| description | A high-performance, zero-dependency library for creating efficient and fast applications. |
| homepage | |
| repository | https://github.com/3Rr0RHACK3R/frd-pu |
| max_upload_size | |
| id | 1806431 |
| size | 402,891 |
NEXT UPDATE: TOOLS FOR MAKING GAME ENGINES FASTER WITH RENDERING.
YANKED UPDATE: THE GUI FRAMEWORK WAS TOO LARGE FOR THE CRATES.IO UPLOAD LIMIT, WHICH IS, ADMITTEDLY, QUITE SMALL.
ALSO, FOR MY GUY @DB33P: LEARN MEMORY MANAGEMENT.
NOTICE! THIS IS THE FULL GUIDE. PLEASE REPORT ANY ISSUES AT https://github.com/3Rr0RHACK3R/frd-pu/issues
FRD-PU: The Fast RAM Data-Processing Unit

A professional-grade, high-performance, and zero-dependency library built from the ground up for extreme efficiency. This crate is designed to handle massive computational tasks and data streams with minimal resource consumption. It is ideal for creating hyper-fast applications without a monstrous hardware footprint.
Our core philosophy is simple: Do more with less. We achieve this through a unique blend of optimized algorithms and zero-copy data streaming, all built on a foundation that relies only on the Rust standard library. This empowers you to create dominant, high-performance applications that make bloated, resource-hogging software a thing of the past.
Core Features

Absolute Zero Dependencies: We rely only on the Rust standard library, ensuring a tiny footprint and lightning-fast compilation.
Memory-First Design: The library's core is built to avoid unnecessary memory allocations, allowing you to process massive datasets with minimal memory impact.
Optimized Engines: We provide specialized APIs for different types of computation, ensuring the right tool for the job.
Modules & API Documentation

This crate is composed of several powerful modules, each designed for a specific purpose.
bloom_filter
A memory-efficient, probabilistic data structure for checking if an element is a member of a set. It is ideal for scenarios where memory is a constraint and a small rate of false positives is acceptable.
BloomFilter::new(capacity, false_positive_probability): Creates a new filter with a specified expected number of items and a desired false-positive rate.
BloomFilter::insert(&self, item): Inserts a hashable item into the filter.
BloomFilter::check(&self, item): Checks if an item may be in the set. Returns false if it is definitely not, and true if it is probably in the set.
btree
A zero-dependency Binary Search Tree (BST) data structure. A BST is an efficient way to store and retrieve sorted data, providing logarithmic time complexity for search, insertion, and deletion operations on average.
BinarySearchTree::insert(&mut self, key, value): Inserts a key-value pair into the tree. Returns an error if the key already exists.
BinarySearchTree::search(&self, key): Searches for a key in the tree and returns a reference to its value, if found.
cache
A high-performance, memory-aware, Least Recently Used (LRU) cache. This cache uses a combination of a hash map for fast lookups and a linked list for efficient access-order tracking, providing O(1) average time complexity for most operations. The cache's memory usage is managed by a max_size in bytes.
LruCache::new(max_size): Creates a new cache with a maximum size in bytes.
LruCache::insert(&mut self, key, value): Inserts a key-value pair into the cache. Returns an error if the item exceeds the cache's maximum size.
LruCache::get(&mut self, key): Retrieves a value by key, marking it as the most recently used.
LruCache::remove(&mut self, key): Removes a key-value pair from the cache.
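A minimal usage sketch based on the signatures above (the frd_pu::cache module path and the byte-sized value type are assumptions):
use frd_pu::cache::LruCache;

// Create a cache capped at 1 MB of stored data.
let mut cache = LruCache::new(1024 * 1024);

// insert returns an error if the entry alone exceeds the cache's maximum size.
cache.insert("session:42", vec![0u8; 512])?;

// get marks the entry as most recently used.
if let Some(bytes) = cache.get(&"session:42") {
    println!("cached {} bytes", bytes.len());
}

cache.remove(&"session:42");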
concurrent
A thread-safe list that allows for safe concurrent access from multiple threads. It wraps a standard Vec in a Mutex to provide exclusive access and uses Arc for shared ownership between threads.
ConcurrentList::new(): Creates a new, empty, thread-safe list.
ConcurrentList::push(&self, item): Appends an item to the end of the list.
ConcurrentList::pop(&self): Removes and returns the last item in the list.
ConcurrentList::get(&self, index): Returns a reference to the item at the specified index.
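A short sketch of shared use across threads (the frd_pu::concurrent module path, and pop returning an Option, are assumptions):
use frd_pu::concurrent::ConcurrentList;
use std::sync::Arc;
use std::thread;

let list = Arc::new(ConcurrentList::new());
let mut handles = Vec::new();
for i in 0..4 {
    let list = Arc::clone(&list);
    // push takes &self, so no extra locking is needed on the caller's side.
    handles.push(thread::spawn(move || list.push(i)));
}
for handle in handles {
    handle.join().unwrap();
}
if let Some(last) = list.pop() {
    println!("last item pushed: {}", last);
}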
cpu_task
A professional wrapper for a single-threaded CPU task and its input. This module provides a clean and encapsulated way to define and execute a sequential computational task, ensuring robust error handling.
new_cpu_task(input, task): Creates a new single-threaded task with a defined input and a closure for the work to be done.
CpuTask::execute(): Executes the defined CPU task and returns the result or an error.
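A sketch of defining and running a task (the closure signature and error type are assumptions based on the description above):
use frd_pu::cpu_task::new_cpu_task;

// Wrap a sequential computation together with its input.
let task = new_cpu_task((1u64..=100).collect::<Vec<u64>>(), |numbers| {
    numbers.iter().sum::<u64>()
});

// execute runs the closure on the stored input and surfaces any error.
let total = task.execute()?;
assert_eq!(total, 5050);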
data_stream
An efficient API for handling large files and network streams in a chunked manner. This module abstracts over different data sources, allowing for low-memory processing of vast datasets.
new_file_stream(path, chunk_size): A convenience function to create a data stream from a file path.
new_network_stream(stream, chunk_size): A convenience function to create a data stream from a network stream.
DataStream::for_each_chunk(&mut self, processor): Reads the stream in chunks and processes each chunk with a provided closure.
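A sketch of low-memory chunked processing (the closure's exact signature and return type are assumptions):
use frd_pu::new_file_stream;

// Stream a large file in 8 KB chunks instead of loading it whole.
let mut stream = new_file_stream("large_input.bin", 8192)?;
let mut total_bytes = 0usize;
stream.for_each_chunk(|chunk| {
    total_bytes += chunk.len();
    Ok(())
})?;
println!("processed {} bytes", total_bytes);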
hasher
A zero-dependency, high-performance hashing engine. This module provides functions for hashing byte slices, files, and data streams using a fast, non-cryptographic DefaultHasher.
hash_bytes(data): Hashes a byte slice into a 64-bit integer.
hash_file(path): Hashes the contents of a file.
hash_stream(reader): Hashes data from any type that implements the Read trait.
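A quick sketch of two of these entry points (the frd_pu::hasher module path and the Result around the streaming variant are assumptions):
use frd_pu::hasher::{hash_bytes, hash_stream};
use std::io::Cursor;

// Hash an in-memory byte slice to a 64-bit digest.
let digest = hash_bytes(b"do more with less");
println!("digest: {:016x}", digest);

// Hash from anything implementing Read; a Cursor stands in for a file or socket.
let reader = Cursor::new(b"streamed data".to_vec());
let stream_digest = hash_stream(reader)?;
println!("stream digest: {:016x}", stream_digest);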
parallel
A powerful module for executing data-parallel tasks across multiple CPU cores. It chunks input data and processes each chunk on a separate thread, ensuring thread safety and graceful panic handling.
execute_parallel(input, workers, task): Executes a data-parallel task, distributing the work across a specified number of threads.
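A sketch of a data-parallel job (the chunk-in, results-out closure shape is an assumption based on the description above):
use frd_pu::parallel::execute_parallel;

// Square a large input across 4 worker threads.
let input: Vec<u64> = (0..1_000_000).collect();
let squares = execute_parallel(input, 4, |chunk: &[u64]| {
    chunk.iter().map(|x| x * x).collect::<Vec<u64>>()
})?;
println!("produced {} results", squares.len());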
quicksort
An insanely fast, zero-dependency, in-place sorting algorithm. This implementation of QuickSort sorts a mutable slice in place and is generic over any type that can be compared.
quicksort(slice): Sorts a mutable slice of data in place using the QuickSort algorithm.
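A minimal sketch (only the frd_pu::quicksort module path is an assumption; the function sorts any slice of comparable items in place):
use frd_pu::quicksort::quicksort;

let mut scores = vec![42, 7, 19, 3, 88];
quicksort(&mut scores);
assert_eq!(scores, vec![3, 7, 19, 42, 88]);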
trie
A memory-efficient, zero-dependency Trie (Prefix Tree) data structure. A Trie is ideal for efficient retrieval of keys from a dataset of strings, making it perfect for applications like autocompletion and spell-checking.
Trie::insert(&mut self, word): Inserts a word into the Trie.
Trie::search(&self, word): Checks if a complete word exists in the Trie.
Trie::starts_with(&self, prefix): Checks if a prefix exists in the Trie.
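A short sketch for autocompletion-style lookups (the frd_pu::trie module path and Trie::new are assumptions):
use frd_pu::trie::Trie;

let mut trie = Trie::new();
trie.insert("rust");
trie.insert("rustacean");

assert!(trie.search("rust"));       // complete word
assert!(!trie.search("rusta"));     // a prefix, not a stored word
assert!(trie.starts_with("rusta")); // but it is a valid prefix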
Getting Started
To use this crate in your project, add it to your Cargo.toml:

[dependencies]
frd_pu = "0.2.6"

Or, to use a local checkout:

[dependencies]
frd-pu = { path = "/path/to/your/frd-pu" }
Contributing We welcome contributions from the community. Please read the CONTRIBUTING.md for guidelines on how to submit pull requests, report bugs, and propose new features.
License This project is licensed under the MIT License.
---------------------------- Bloomfilter and other features ---------------------------------------------------------------
BloomFilter
The BloomFilter module introduces a space-efficient, probabilistic data structure used to test whether an element is a member of a set. It offers a significant memory advantage over traditional hash sets but comes with the trade-off of a small chance of false positives. It guarantees no false negatives.
Key Features
Memory Efficiency: Drastically reduces memory footprint for set membership tests.
Fast Operations: Provides extremely fast add and check operations.
Zero Dependencies: Relies only on the Rust standard library.
Example Usage
The following example demonstrates how to create a Bloom filter and perform basic operations.
use frd_pu::bloom_filter::{BloomFilter, BloomFilterError};
fn main() -> Result<(), BloomFilterError> {
    // Create a new Bloom filter with a capacity of 1000 items and a 1% false positive probability.
    let mut filter = BloomFilter::new(1000, 0.01)?;

    // Add items to the filter.
    filter.add(&"professional");
    filter.add(&"project");
    filter.add(&"efficiency");

    // Check for membership.
    assert_eq!(filter.check(&"project"), true);
    assert_eq!(filter.check(&"quality"), false); // May be true in rare cases due to false positives.

    Ok(())
}
BinarySearchTree
The BinarySearchTree module provides a professional-grade, in-memory data structure for storing and retrieving key-value pairs. It is designed for operations that require fast lookups, insertions, and deletions, maintaining a sorted structure for efficient searching.
Key Features
Logarithmic Complexity: Provides O(log n) average time complexity for core operations.
Zero Dependencies: Implemented with only the Rust standard library.
Generic: Supports any key-value pair that implements the Ord trait for comparison.
Example Usage
This example shows how to insert and search for elements within the Binary Search Tree.
use frd_pu::btree::{BinarySearchTree, BinarySearchTreeError};
fn main() -> Result<(), BinarySearchTreeError> {
    // Create a new Binary Search Tree.
    let mut bst = BinarySearchTree::new();

    // Insert key-value pairs.
    bst.insert(5, "Task B")?;
    bst.insert(3, "Task A")?;
    bst.insert(8, "Task C")?;

    // Search for a value.
    assert_eq!(bst.search(&3), Some(&"Task A"));
    assert_eq!(bst.search(&10), None);

    Ok(())
}
----------------------------- memory_pool and Compression modules ---------------------------------------------------------
The Fast RAM Data-Processing Unit
Source Code: https://github.com/3Rr0RHACK3R/frd-pu
The FRD-PU library provides two powerful modules designed for extreme performance and minimal resource consumption: the Memory Pool module for zero-allocation memory management and the Compression module for high-speed LZ77-style data compression. Both modules follow the core philosophy of "Do more with less" by eliminating external dependencies and optimizing for speed and memory efficiency.
The Memory Pool module eliminates runtime allocation overhead by pre-allocating memory blocks, making it perfect for high-frequency operations, real-time systems, and applications requiring predictable memory patterns. The Compression module provides fast LZ77-style compression with configurable parameters, enabling efficient data storage and transmission without external dependencies.
Both modules rely exclusively on the Rust standard library, ensuring minimal compile times, small binary sizes, and maximum compatibility. This approach eliminates version conflicts, reduces attack surface, and provides predictable behavior across different environments.
The Memory Pool module provides high-performance, zero-allocation memory management through pre-allocated pools. This eliminates malloc/free overhead and provides predictable memory usage patterns essential for real-time applications.
Memory pools work by allocating large blocks of memory upfront and dividing them into fixed-size chunks. When your application needs memory, it receives a pre-allocated chunk instantly. When finished, the memory returns to the pool for reuse, avoiding expensive system calls.
use frd_pu::{FixedPool, create_small_pool, MemoryPoolError};
// Create a pool with 1024-byte blocks, 100 blocks total
let pool = FixedPool::new(1024, 100)?;
// Allocate memory from the pool
let mut memory = pool.allocate()?;
// Use the memory
let data = b"Hello, high-performance world!";
memory.write_bytes(data)?;
// Memory automatically returns to pool when dropped
The module provides several pre-configured pools optimized for common use cases:
use frd_pu::{create_small_pool, create_medium_pool, create_large_pool};
// Small allocations: 64 bytes, 1000 blocks
let small_pool = create_small_pool()?;
// Medium allocations: 1KB, 500 blocks
let medium_pool = create_medium_pool()?;
// Large allocations: 64KB, 100 blocks
let large_pool = create_large_pool()?;
The PooledMemory type provides safe access to pool-allocated memory with automatic cleanup:
let pool = FixedPool::new(2048, 50)?;
let mut memory = pool.allocate()?;
// Zero out the memory block
memory.zero();
// Write data safely
let input_data = b"Critical real-time data processing";
memory.write_bytes(input_data)?;
// Read data back
let output_data = memory.read_bytes(input_data.len())?;
assert_eq!(input_data, output_data.as_slice());
// Get direct access to memory
let raw_slice = memory.as_mut_slice();
raw_slice[0] = 0xFF;
// Get memory size
println!("Block size: {} bytes", memory.size());
Monitor pool usage and performance with built-in statistics:
let pool = FixedPool::new(512, 200)?;
// Allocate some memory
let _mem1 = pool.allocate()?;
let _mem2 = pool.allocate()?;
let _mem3 = pool.allocate()?;
// Check pool statistics
let stats = pool.stats();
println!("Pool utilization: {:.1}%", stats.utilization_percentage());
println!("Available blocks: {}", stats.available_blocks);
println!("Used blocks: {}", stats.used_blocks);
println!("Total allocations: {}", stats.allocations);
println!("Total deallocations: {}", stats.deallocations);
// Check availability before allocating
if pool.has_available() {
let memory = pool.allocate()?;
// Use memory...
}
// Reset statistics while keeping memory allocated
pool.reset_stats();
For reusing complex objects, use the ObjectPool:
use frd_pu::ObjectPool;
use std::collections::HashMap;
// Create a pool for HashMap objects
let pool = ObjectPool::new(
|| HashMap::<String, i32>::new(), // Factory function
|map| map.clear(), // Reset function
10 // Maximum pool size
);
// Get an object from the pool
let mut map_obj = pool.get();
map_obj.insert("key1".to_string(), 100);
map_obj.insert("key2".to_string(), 200);
// Object is automatically reset and returned to pool when dropped
drop(map_obj);
// Next allocation gets a clean object
let clean_map = pool.get();
assert_eq!(clean_map.len(), 0);
The PoolManager automatically selects the best pool for each allocation size:
use frd_pu::PoolManager;
let manager = PoolManager::new()?;
// Allocate different sizes - manager picks the best pool
let small_mem = manager.allocate(16)?; // Uses 32-byte pool
let medium_mem = manager.allocate(1000)?; // Uses 2KB pool
let large_mem = manager.allocate(10000)?; // Uses 32KB pool
// Check availability for specific size
if manager.has_available(500) {
let memory = manager.allocate(500)?;
// Use memory...
}
// Get statistics for all pools
let all_stats = manager.total_stats();
for (i, stats) in all_stats.iter().enumerate() {
println!("Pool {}: {}", i, stats);
}
The module defines specific error types for different failure conditions:
use frd_pu::MemoryPoolError;
match pool.allocate() {
Ok(memory) => {
// Use memory successfully
},
Err(MemoryPoolError::PoolExhausted) => {
// No more blocks available
println!("Pool is full, try again later");
},
Err(MemoryPoolError::InvalidBlockSize) => {
// Block size configuration error
println!("Invalid block size specified");
},
Err(e) => {
println!("Other error: {}", e);
}
}
All pool types are thread-safe and can be shared across threads:
use std::sync::Arc;
use std::thread;
let pool = Arc::new(FixedPool::new(1024, 1000)?);
let mut handles = vec![];
for i in 0..4 {
let pool_clone = Arc::clone(&pool);
let handle = thread::spawn(move || {
for _ in 0..100 {
if let Ok(mut memory) = pool_clone.allocate() {
// Simulate work
memory.write_bytes(&[i as u8; 64]).unwrap();
// Memory automatically returns to pool
}
}
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Final stats: {}", pool.stats());
The Compression module provides high-performance LZ77-style compression with zero external dependencies. It features configurable sliding window compression optimized for both speed and compression ratio.
The compression engine uses the LZ77 algorithm with a sliding-window approach. It identifies repeated sequences in the data and replaces them with references to previous occurrences, achieving excellent compression ratios on repetitive data while maintaining fast compression and decompression speeds.
use frd_pu::{compress_data, decompress_data, CompressionError};
// Compress binary data
let original_data = b"Hello, World! Hello, World! This is a test of compression.";
let compressed = compress_data(original_data)?;
let decompressed = decompress_data(&compressed)?;
assert_eq!(original_data, decompressed.as_slice());
println!("Original size: {} bytes", original_data.len());
println!("Compressed size: {} bytes", compressed.len());
For text data, use the specialized text compression functions:
use frd_pu::{compress_text, decompress_to_text};
let text = "This is a long text document with repeated phrases. \
This is a long text document with repeated phrases. \
Compression works well on repetitive content.";
let compressed = compress_text(text)?;
let decompressed_text = decompress_to_text(&compressed)?;
assert_eq!(text, decompressed_text);
For fine-grained control, use the CompressionEngine directly:
use frd_pu::CompressionEngine;
// Create engine with default settings (32KB window)
let engine = CompressionEngine::new();
// Create engine with custom window size
let custom_engine = CompressionEngine::with_window_size(16384)?;
let data = b"Repeated data patterns compress very well when using LZ77 compression algorithms.";
let compressed = engine.compress(data)?;
let decompressed = engine.decompress(&compressed)?;
assert_eq!(data, decompressed.as_slice());
Analyze compression performance with detailed statistics:
use frd_pu::{get_compression_stats, CompressionEngine};
let data = b"This is sample data with repeated patterns. \
This is sample data with repeated patterns. \
This is sample data with repeated patterns.";
// Get detailed compression statistics
let stats = get_compression_stats(data)?;
println!("{}", stats);
// Estimate compression ratio before compressing
let engine = CompressionEngine::new();
let estimated_ratio = engine.estimate_compression_ratio(data)?;
println!("Estimated compression ratio: {:.2}", estimated_ratio);
The module provides specific error types for different compression failures:
use frd_pu::CompressionError;
match compress_data(data) {
Ok(compressed) => {
println!("Compression successful");
},
Err(CompressionError::EmptyInput) => {
println!("Cannot compress empty data");
},
Err(CompressionError::CompressionFailed) => {
println!("Internal compression error");
},
Err(e) => {
println!("Other error: {}", e);
}
}
Adjust the sliding window size to optimize for your specific use case:
// Small window for limited memory environments
let small_engine = CompressionEngine::with_window_size(4096)?;
// Large window for better compression on large datasets
let large_engine = CompressionEngine::with_window_size(32768)?;
let test_data = generate_test_data(100000); // Your data generation function
// Compare compression ratios
let small_compressed = small_engine.compress(&test_data)?;
let large_compressed = large_engine.compress(&test_data)?;
println!("Small window ratio: {:.3}", small_compressed.len() as f64 / test_data.len() as f64);
println!("Large window ratio: {:.3}", large_compressed.len() as f64 / test_data.len() as f64);
For processing large files or data streams, combine compression with the data_stream module:
use frd_pu::{CompressionEngine, new_file_stream};
let engine = CompressionEngine::new();
// Read file in chunks and compress
let mut file_stream = new_file_stream("large_dataset.bin", 8192)?;
let mut compressed_chunks = Vec::new();
while let Some(chunk) = file_stream.read_chunk(8192)? {
let compressed_chunk = engine.compress(&chunk)?;
compressed_chunks.push(compressed_chunk);
}
// Later, decompress all chunks
let mut decompressed_data = Vec::new();
for compressed_chunk in compressed_chunks {
let decompressed_chunk = engine.decompress(&compressed_chunk)?;
decompressed_data.extend_from_slice(&decompressed_chunk);
}
Add FRD-PU to your Cargo.toml:
[dependencies]
frd-pu = { git = "https://github.com/3Rr0RHACK3R/frd-pu" }
Import only the modules you need:
// For memory pool functionality only
use frd_pu::{FixedPool, PoolManager, create_medium_pool};
// For compression functionality only
use frd_pu::{compress_data, decompress_data, CompressionEngine};
// For both modules
use frd_pu::{
FixedPool, PoolManager,
compress_data, decompress_data,
MemoryPoolError, CompressionError
};
Replace standard allocations with pool allocations:
// Before: Standard allocation
let mut buffer = vec![0u8; 1024];
process_data(&mut buffer);
// After: Pool allocation
let pool = create_medium_pool()?;
let mut memory = pool.allocate()?;
let buffer = memory.as_mut_slice();
process_data(buffer);
Replace standard compression libraries:
// Before: External compression library
// extern crate some_compression_lib;
// let compressed = some_compression_lib::compress(data)?;
// After: FRD-PU compression
let compressed = compress_data(data)?;
Create application-specific pool configurations:
pub struct AppMemoryManager {
small_pool: FixedPool,
medium_pool: FixedPool,
large_pool: FixedPool,
compression_engine: CompressionEngine,
}
impl AppMemoryManager {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(AppMemoryManager {
small_pool: FixedPool::new(128, 2000)?,
medium_pool: FixedPool::new(4096, 500)?,
large_pool: FixedPool::new(65536, 50)?,
compression_engine: CompressionEngine::with_window_size(16384)?,
})
}
pub fn allocate_buffer(&self, size: usize) -> Result<PooledMemory, MemoryPoolError> {
if size <= 128 {
self.small_pool.allocate()
} else if size <= 4096 {
self.medium_pool.allocate()
} else {
self.large_pool.allocate()
}
}
pub fn compress_data(&self, data: &[u8]) -> Result<Vec<u8>, CompressionError> {
self.compression_engine.compress(data)
}
}
Choose appropriate pool sizes based on your allocation patterns:
// Analyze your allocation patterns first
fn analyze_allocations() -> (usize, usize, usize) {
// Return (average_size, peak_concurrent, total_allocations)
// Implement based on your profiling
(512, 100, 10000)
}
let (avg_size, peak_concurrent, _) = analyze_allocations();
// Size pools appropriately
let optimal_block_size = (avg_size + 63) & !63; // Round up to 64-byte boundary
let pool = FixedPool::new(optimal_block_size, peak_concurrent * 2)?;
Pre-allocate pools during application startup:
struct HighPerformanceApp {
network_pool: FixedPool,
processing_pool: FixedPool,
compression_engine: CompressionEngine,
}
impl HighPerformanceApp {
pub fn initialize() -> Result<Self, Box<dyn std::error::Error>> {
// Pre-allocate all pools during startup
let network_pool = FixedPool::new(1500, 1000)?; // MTU-sized buffers
let processing_pool = FixedPool::new(4096, 500)?; // Processing buffers
let compression_engine = CompressionEngine::new();
// Warm up pools by allocating and immediately freeing
for _ in 0..10 {
let _warm1 = network_pool.allocate()?;
let _warm2 = processing_pool.allocate()?;
}
Ok(HighPerformanceApp {
network_pool,
processing_pool,
compression_engine,
})
}
}
Choose window sizes based on your data characteristics:
// Small window for low memory usage and faster compression
let fast_engine = CompressionEngine::with_window_size(4096)?;
// Large window for better compression ratio on large files
let efficient_engine = CompressionEngine::with_window_size(32768)?;
// Benchmark different window sizes
let test_data = load_representative_data();
let mut results = Vec::new();
for window_size in [2048, 4096, 8192, 16384, 32768] {
let engine = CompressionEngine::with_window_size(window_size)?;
let start = std::time::Instant::now();
let compressed = engine.compress(&test_data)?;
let compression_time = start.elapsed();
let start = std::time::Instant::now();
let _ = engine.decompress(&compressed)?;
let decompression_time = start.elapsed();
let ratio = compressed.len() as f64 / test_data.len() as f64;
results.push((window_size, ratio, compression_time, decompression_time));
}
// Choose optimal window size based on results
Combine memory pools with compression for maximum performance:
pub struct OptimizedProcessor {
input_pool: FixedPool,
output_pool: FixedPool,
compression_engine: CompressionEngine,
}
impl OptimizedProcessor {
pub fn process_data_stream(&self, data: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
// Use pooled memory for processing
let mut input_buffer = self.input_pool.allocate()?;
let mut output_buffer = self.output_pool.allocate()?;
// Copy input data to pooled buffer
input_buffer.write_bytes(data)?;
// Process data in-place using pooled memory
let processed_data = self.process_in_buffer(input_buffer.as_mut_slice())?;
// Compress the processed data
let compressed = self.compression_engine.compress(processed_data)?;
Ok(compressed)
}
fn process_in_buffer(&self, buffer: &mut [u8]) -> Result<&[u8], Box<dyn std::error::Error>> {
// Implement your processing logic here
// Return the processed portion of the buffer
Ok(buffer)
}
}
Build high-performance pipelines combining both modules:
use std::sync::mpsc;
use std::thread;
use std::sync::Arc;
pub struct RealtimePipeline {
pool_manager: Arc<PoolManager>,
compression_engine: Arc<CompressionEngine>,
}
impl RealtimePipeline {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(RealtimePipeline {
pool_manager: Arc::new(PoolManager::new()?),
compression_engine: Arc::new(CompressionEngine::new()),
})
}
pub fn start_processing(&self) -> (mpsc::Sender<Vec<u8>>, mpsc::Receiver<Vec<u8>>) {
let (input_tx, input_rx) = mpsc::channel();
let (output_tx, output_rx) = mpsc::channel();
let pool_manager = Arc::clone(&self.pool_manager);
let compression_engine = Arc::clone(&self.compression_engine);
thread::spawn(move || {
while let Ok(data) = input_rx.recv() {
// Allocate memory from pool
if let Ok(mut memory) = pool_manager.allocate(data.len()) {
// Copy data to pooled memory
if memory.write_bytes(&data).is_ok() {
// Process data (simulate processing)
let processed_data = memory.as_slice();
// Compress processed data
if let Ok(compressed) = compression_engine.compress(processed_data) {
let _ = output_tx.send(compressed);
}
}
}
// Memory automatically returns to pool when dropped
}
});
(input_tx, output_rx)
}
}
Extend the memory pool system for specialized use cases:
pub struct RingBufferPool {
pools: Vec<FixedPool>,
current_pool: std::sync::atomic::AtomicUsize,
}
impl RingBufferPool {
pub fn new(pool_count: usize, block_size: usize, blocks_per_pool: usize)
-> Result<Self, MemoryPoolError> {
let mut pools = Vec::with_capacity(pool_count);
for _ in 0..pool_count {
pools.push(FixedPool::new(block_size, blocks_per_pool)?);
}
Ok(RingBufferPool {
pools,
current_pool: std::sync::atomic::AtomicUsize::new(0),
})
}
pub fn allocate_round_robin(&self) -> Result<PooledMemory, MemoryPoolError> {
use std::sync::atomic::Ordering;
let start_pool = self.current_pool.load(Ordering::Relaxed);
for i in 0..self.pools.len() {
let pool_index = (start_pool + i) % self.pools.len();
if let Ok(memory) = self.pools[pool_index].allocate() {
// Update current pool for next allocation
self.current_pool.store(
(pool_index + 1) % self.pools.len(),
Ordering::Relaxed
);
return Ok(memory);
}
}
Err(MemoryPoolError::PoolExhausted)
}
}
Implement adaptive compression that chooses parameters based on data characteristics:
pub struct AdaptiveCompressor {
engines: Vec<CompressionEngine>,
thresholds: Vec<usize>,
}
impl AdaptiveCompressor {
pub fn new() -> Result<Self, CompressionError> {
let engines = vec![
CompressionEngine::with_window_size(2048)?, // Fast, less memory
CompressionEngine::with_window_size(8192)?, // Balanced
CompressionEngine::with_window_size(32768)?, // Best compression
];
let thresholds = vec![1024, 8192, usize::MAX];
Ok(AdaptiveCompressor { engines, thresholds })
}
pub fn compress_adaptive(&self, data: &[u8]) -> Result<Vec<u8>, CompressionError> {
// Choose engine based on data size
let engine_index = self.thresholds
.iter()
.position(|&threshold| data.len() <= threshold)
.unwrap_or(self.engines.len() - 1);
self.engines[engine_index].compress(data)
}
pub fn compress_best_ratio(&self, data: &[u8]) -> Result<Vec<u8>, CompressionError> {
// Try all engines and pick the one with best compression ratio
let mut best_result = None;
let mut best_ratio = f64::INFINITY;
for engine in &self.engines {
if let Ok(compressed) = engine.compress(data) {
let ratio = compressed.len() as f64 / data.len() as f64;
if ratio < best_ratio {
best_ratio = ratio;
best_result = Some(compressed);
}
}
}
best_result.ok_or(CompressionError::CompressionFailed)
}
}
Combine memory pools with chunked file I/O for processing large datasets:
use std::fs::File;
use std::io::{self, Read, Write};
pub struct LargeFileProcessor {
pool: FixedPool,
compression_engine: CompressionEngine,
}
impl LargeFileProcessor {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(LargeFileProcessor {
pool: FixedPool::new(1024 * 1024, 10)?, // 1MB blocks
compression_engine: CompressionEngine::new(),
})
}
pub fn process_file_chunked(&self, input_path: &str, output_path: &str, chunk_size: usize)
-> Result<(), Box<dyn std::error::Error>> {
let mut input_file = File::open(input_path)?;
let mut output_file = File::create(output_path)?;
loop {
// Allocate buffer from pool
let mut buffer = self.pool.allocate()?;
let chunk_buffer = &mut buffer.as_mut_slice()[..chunk_size.min(buffer.size())];
// Read chunk from file
let bytes_read = input_file.read(chunk_buffer)?;
if bytes_read == 0 {
break; // End of file
}
// Process the chunk
let processed_chunk = &chunk_buffer[..bytes_read];
let compressed_chunk = self.compression_engine.compress(processed_chunk)?;
// Write compressed chunk to output
output_file.write_all(&compressed_chunk)?;
// Buffer automatically returns to pool when dropped
}
Ok(())
}
}
Both modules provide detailed error types for different failure conditions:
use frd_pu::{MemoryPoolError, CompressionError};
pub enum ProcessingError {
MemoryPool(MemoryPoolError),
Compression(CompressionError),
Io(std::io::Error),
Custom(String),
}
impl From<MemoryPoolError> for ProcessingError {
fn from(err: MemoryPoolError) -> Self {
ProcessingError::MemoryPool(err)
}
}
impl From<CompressionError> for ProcessingError {
fn from(err: CompressionError) -> Self {
ProcessingError::Compression(err)
}
}
impl From<std::io::Error> for ProcessingError {
fn from(err: std::io::Error) -> Self {
ProcessingError::Io(err)
}
}
pub fn robust_data_processing(data: &[u8]) -> Result<Vec<u8>, ProcessingError> {
// Create resources with proper error handling
let pool = FixedPool::new(4096, 100)
.map_err(ProcessingError::from)?;
let engine = CompressionEngine::with_window_size(16384)
.map_err(ProcessingError::from)?;
// Allocate memory with fallback
let mut memory = match pool.allocate() {
Ok(mem) => mem,
Err(MemoryPoolError::PoolExhausted) => {
// Fallback: use heap allocation
return engine.compress(data).map_err(ProcessingError::from);
}
Err(e) => return Err(ProcessingError::from(e)),
};
// Process data with error recovery
if let Err(e) = memory.write_bytes(data) {
match e {
MemoryPoolError::InvalidBlockSize => {
// Data too large for pool, use direct compression
return engine.compress(data).map_err(ProcessingError::from);
}
_ => return Err(ProcessingError::from(e)),
}
}
// Compress with retry logic
let compressed = match engine.compress(memory.as_slice()) {
Ok(result) => result,
Err(CompressionError::CompressionFailed) => {
// Retry with different engine settings
let fallback_engine = CompressionEngine::with_window_size(8192)
.map_err(ProcessingError::from)?;
fallback_engine.compress(memory.as_slice())
.map_err(ProcessingError::from)?
}
Err(e) => return Err(ProcessingError::from(e)),
};
Ok(compressed)
}
Implement systems that degrade gracefully under resource pressure:
pub struct ResilientProcessor {
primary_pool: Option<FixedPool>,
fallback_pool: Option<FixedPool>,
compression_engine: CompressionEngine,
}
impl ResilientProcessor {
pub fn new() -> Self {
let primary_pool = FixedPool::new(4096, 1000).ok();
let fallback_pool = FixedPool::new(1024, 2000).ok();
let compression_engine = CompressionEngine::new();
ResilientProcessor {
primary_pool,
fallback_pool,
compression_engine,
}
}
pub fn process_with_fallback(&self, data: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
// Try primary pool first
if let Some(ref pool) = self.primary_pool {
if let Ok(mut memory) = pool.allocate() {
if memory.write_bytes(data).is_ok() {
return self.compression_engine.compress(memory.as_slice())
.map_err(|e| Box::new(e) as Box<dyn std::error::Error>);
}
}
}
// Fallback to secondary pool
if let Some(ref pool) = self.fallback_pool {
if let Ok(mut memory) = pool.allocate() {
// Process data in smaller chunks if needed
if data.len() <= memory.size() {
memory.write_bytes(data)?;
return self.compression_engine.compress(memory.as_slice())
.map_err(|e| Box::new(e) as Box<dyn std::error::Error>);
}
}
}
// Final fallback: direct heap allocation
self.compression_engine.compress(data)
.map_err(|e| Box::new(e) as Box<dyn std::error::Error>)
}
}
Implement monitoring systems to track resource usage and trigger recovery:
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
pub struct ResourceMonitor {
memory_pressure: AtomicUsize,
compression_failures: AtomicUsize,
total_operations: AtomicUsize,
}
impl ResourceMonitor {
pub fn new() -> Arc<Self> {
Arc::new(ResourceMonitor {
memory_pressure: AtomicUsize::new(0),
compression_failures: AtomicUsize::new(0),
total_operations: AtomicUsize::new(0),
})
}
pub fn record_memory_pressure(&self) {
self.memory_pressure.fetch_add(1, Ordering::Relaxed);
}
pub fn record_compression_failure(&self) {
self.compression_failures.fetch_add(1, Ordering::Relaxed);
}
pub fn record_operation(&self) {
self.total_operations.fetch_add(1, Ordering::Relaxed);
}
pub fn get_failure_rate(&self) -> f64 {
let total = self.total_operations.load(Ordering::Relaxed);
if total == 0 {
return 0.0;
}
let failures = self.compression_failures.load(Ordering::Relaxed);
failures as f64 / total as f64
}
pub fn should_trigger_recovery(&self) -> bool {
let pressure = self.memory_pressure.load(Ordering::Relaxed);
let failure_rate = self.get_failure_rate();
pressure > 100 || failure_rate > 0.1
}
}
pub struct MonitoredProcessor {
pool: FixedPool,
engine: CompressionEngine,
monitor: Arc<ResourceMonitor>,
}
impl MonitoredProcessor {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(MonitoredProcessor {
pool: FixedPool::new(2048, 500)?,
engine: CompressionEngine::new(),
monitor: ResourceMonitor::new(),
})
}
pub fn process_with_monitoring(&self, data: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
self.monitor.record_operation();
// Check if recovery is needed
if self.monitor.should_trigger_recovery() {
self.trigger_recovery();
}
// Attempt normal processing
match self.pool.allocate() {
Ok(mut memory) => {
if let Err(_) = memory.write_bytes(data) {
self.monitor.record_memory_pressure();
return self.fallback_process(data);
}
match self.engine.compress(memory.as_slice()) {
Ok(result) => Ok(result),
Err(_) => {
self.monitor.record_compression_failure();
self.fallback_process(data)
}
}
}
Err(_) => {
self.monitor.record_memory_pressure();
self.fallback_process(data)
}
}
}
fn fallback_process(&self, data: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
// Simple fallback: direct compression
self.engine.compress(data)
.map_err(|e| Box::new(e) as Box<dyn std::error::Error>)
}
fn trigger_recovery(&self) {
// Reset statistics and perform cleanup
self.monitor.memory_pressure.store(0, Ordering::Relaxed);
self.monitor.compression_failures.store(0, Ordering::Relaxed);
// Additional recovery actions could be implemented here
// such as forcing garbage collection, clearing caches, etc.
}
}
Complete example of using memory pools in a web server context:
use std::sync::Arc;
use std::thread;
use std::time::Duration;
pub struct WebServerBufferManager {
request_pool: FixedPool, // Small buffers for HTTP headers
response_pool: FixedPool, // Medium buffers for response data
file_pool: FixedPool, // Large buffers for file operations
compression_engine: CompressionEngine,
}
impl WebServerBufferManager {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(WebServerBufferManager {
request_pool: FixedPool::new(4096, 2000)?, // 4KB x 2000 = 8MB
response_pool: FixedPool::new(16384, 1000)?, // 16KB x 1000 = 16MB
file_pool: FixedPool::new(65536, 200)?, // 64KB x 200 = 12.8MB
compression_engine: CompressionEngine::with_window_size(16384)?,
})
}
pub fn handle_request(&self, request_data: &[u8]) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
// Parse request using request buffer
let mut request_buffer = self.request_pool.allocate()?;
request_buffer.write_bytes(request_data)?;
let parsed_request = self.parse_http_request(request_buffer.as_slice())?;
// Generate response using response buffer
let mut response_buffer = self.response_pool.allocate()?;
let response_data = self.generate_response(&parsed_request)?;
response_buffer.write_bytes(&response_data)?;
// Compress response if beneficial
let final_response = if response_data.len() > 1024 {
self.compression_engine.compress(response_buffer.as_slice())?
} else {
response_data
};
Ok(final_response)
}
pub fn serve_file(&self, file_path: &str) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
use std::fs::File;
use std::io::Read;
let mut file = File::open(file_path)?;
let mut file_buffer = self.file_pool.allocate()?;
// Read file in chunks
let mut all_data = Vec::new();
loop {
let bytes_read = file.read(file_buffer.as_mut_slice())?;
if bytes_read == 0 {
break;
}
// Compress each chunk
let chunk_data = &file_buffer.as_slice()[..bytes_read];
let compressed_chunk = self.compression_engine.compress(chunk_data)?;
all_data.extend_from_slice(&compressed_chunk);
}
Ok(all_data)
}
fn parse_http_request(&self, data: &[u8]) -> Result<HttpRequest, Box<dyn std::error::Error>> {
// Implement HTTP request parsing
Ok(HttpRequest::new(data))
}
fn generate_response(&self, request: &HttpRequest) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
// Implement HTTP response generation
Ok(b"HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, World!".to_vec())
}
pub fn get_pool_statistics(&self) -> String {
format!(
"Request Pool: {}\nResponse Pool: {}\nFile Pool: {}",
self.request_pool.stats(),
self.response_pool.stats(),
self.file_pool.stats()
)
}
}
// Helper struct for HTTP request representation
struct HttpRequest {
data: Vec<u8>,
}
impl HttpRequest {
fn new(data: &[u8]) -> Self {
HttpRequest {
data: data.to_vec(),
}
}
}
// Usage example
fn run_web_server_simulation() -> Result<(), Box<dyn std::error::Error>> {
let manager = Arc::new(WebServerBufferManager::new()?);
// Simulate concurrent requests
let mut handles = vec![];
for i in 0..10 {
let manager_clone = Arc::clone(&manager);
let handle = thread::spawn(move || {
let request_data = format!("GET /page{} HTTP/1.1\r\nHost: example.com\r\n\r\n", i);
for _ in 0..100 {
if let Ok(_response) = manager_clone.handle_request(request_data.as_bytes()) {
// Process response
}
thread::sleep(Duration::from_millis(10));
}
});
handles.push(handle);
}
// Wait for all threads to complete
for handle in handles {
handle.join().unwrap();
}
// Print final statistics
println!("Final Statistics:\n{}", manager.get_pool_statistics());
Ok(())
}
Complete example of a data processing pipeline using both modules:
use std::sync::mpsc;
use std::thread;
use std::time::Duration;
pub struct DataPipeline {
input_pool: FixedPool,
processing_pool: FixedPool,
output_pool: FixedPool,
compression_engine: CompressionEngine,
}
impl DataPipeline {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(DataPipeline {
input_pool: FixedPool::new(8192, 500)?,
processing_pool: FixedPool::new(16384, 250)?,
output_pool: FixedPool::new(32768, 100)?,
compression_engine: CompressionEngine::with_window_size(32768)?,
})
}
pub fn start_pipeline(&self) -> PipelineHandles {
let (input_tx, input_rx) = mpsc::channel::<Vec<u8>>();
let (processed_tx, processed_rx) = mpsc::channel::<Vec<u8>>();
let (compressed_tx, compressed_rx) = mpsc::channel::<Vec<u8>>();
// Stage 1: Input processing
let input_pool = self.input_pool.clone_pools();
let processing_pool = self.processing_pool.clone_pools();
let processed_tx_clone = processed_tx.clone();
let input_handle = thread::spawn(move || {
while let Ok(data) = input_rx.recv() {
if let Ok(mut input_buffer) = input_pool.allocate() {
if input_buffer.write_bytes(&data).is_ok() {
// Simulate input processing
let processed_data = Self::process_input_stage(input_buffer.as_slice());
if let Ok(mut processing_buffer) = processing_pool.allocate() {
if processing_buffer.write_bytes(&processed_data).is_ok() {
let _ = processed_tx_clone.send(processed_data);
}
}
}
}
}
});
// Stage 2: Main processing
let processing_pool = self.processing_pool.clone_pools();
let output_pool = self.output_pool.clone_pools();
let compressed_tx_clone = compressed_tx.clone();
let processing_handle = thread::spawn(move || {
while let Ok(data) = processed_rx.recv() {
if let Ok(mut processing_buffer) = processing_pool.allocate() {
if processing_buffer.write_bytes(&data).is_ok() {
let processed_data = Self::process_main_stage(processing_buffer.as_mut_slice());
if let Ok(mut output_buffer) = output_pool.allocate() {
if output_buffer.write_bytes(&processed_data).is_ok() {
let _ = compressed_tx_clone.send(processed_data);
}
}
}
}
}
});
// Stage 3: Compression
let compression_engine = CompressionEngine::new();
let compression_handle = thread::spawn(move || {
while let Ok(data) = compressed_rx.recv() {
if let Ok(compressed_data) = compression_engine.compress(&data) {
// In a real application, you would send this to final output
println!("Compressed {} bytes to {} bytes", data.len(), compressed_data.len());
}
}
});
PipelineHandles {
input_sender: input_tx,
input_handle,
processing_handle,
compression_handle,
}
}
fn process_input_stage(data: &[u8]) -> Vec<u8> {
// Simulate input validation and normalization
data.iter().map(|&b| b.wrapping_add(1)).collect()
}
fn process_main_stage(data: &mut [u8]) -> Vec<u8> {
// Simulate main processing logic
for byte in data.iter_mut() {
*byte = (*byte).wrapping_mul(2);
}
data.to_vec()
}
}
// Helper trait for cloning pools (simplified for example)
trait ClonePools {
fn clone_pools(&self) -> FixedPool;
}
impl ClonePools for FixedPool {
fn clone_pools(&self) -> FixedPool {
// In a real implementation, you might use Arc or similar
// For this example, create a new pool with same settings
FixedPool::new(self.block_size(), 100).unwrap()
}
}
pub struct PipelineHandles {
pub input_sender: mpsc::Sender<Vec<u8>>,
pub input_handle: thread::JoinHandle<()>,
pub processing_handle: thread::JoinHandle<()>,
pub compression_handle: thread::JoinHandle<()>,
}
impl PipelineHandles {
pub fn shutdown(self) {
drop(self.input_sender);
let _ = self.input_handle.join();
let _ = self.processing_handle.join();
let _ = self.compression_handle.join();
}
}
// Usage example
fn run_pipeline_example() -> Result<(), Box<dyn std::error::Error>> {
let pipeline = DataPipeline::new()?;
let handles = pipeline.start_pipeline();
// Send test data
for i in 0..1000 {
let test_data = format!("Test data packet {}: {}", i, "x".repeat(100)).into_bytes();
if handles.input_sender.send(test_data).is_err() {
break;
}
if i % 100 == 0 {
thread::sleep(Duration::from_millis(10));
}
}
// Shutdown pipeline
handles.shutdown();
println!("Pipeline processing completed");
Ok(())
}
Complete example for game engine-style memory management:
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
pub struct GameEngineMemoryManager {
entity_pool: ObjectPool<GameEntity>,
component_pools: HashMap<String, FixedPool>,
render_pool: FixedPool,
audio_pool: FixedPool,
network_pool: FixedPool,
asset_compression: CompressionEngine,
}
impl GameEngineMemoryManager {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
// Create object pool for game entities
let entity_pool = ObjectPool::new(
|| GameEntity::new(),
|entity| entity.reset(),
10000 // Support up to 10,000 active entities
);
// Create component pools
let mut component_pools = HashMap::new();
component_pools.insert("Transform".to_string(), FixedPool::new(64, 10000)?);
component_pools.insert("Render".to_string(), FixedPool::new(128, 5000)?);
component_pools.insert("Physics".to_string(), FixedPool::new(96, 3000)?);
component_pools.insert("Audio".to_string(), FixedPool::new(48, 1000)?);
Ok(GameEngineMemoryManager {
entity_pool,
component_pools,
render_pool: FixedPool::new(4096, 1000)?, // Render commands
audio_pool: FixedPool::new(2048, 500)?, // Audio buffers
network_pool: FixedPool::new(1500, 200)?, // Network packets
asset_compression: CompressionEngine::with_window_size(32768)?,
})
}
pub fn create_entity(&self) -> PooledObject<GameEntity> {
self.entity_pool.get()
}
pub fn allocate_component(&self, component_type: &str) -> Result<PooledMemory, MemoryPoolError> {
self.component_pools
.get(component_type)
.ok_or(MemoryPoolError::InvalidBlockSize)?
.allocate()
}
pub fn allocate_render_buffer(&self) -> Result<PooledMemory, MemoryPoolError> {
self.render_pool.allocate()
}
pub fn allocate_audio_buffer(&self) -> Result<PooledMemory, MemoryPoolError> {
self.audio_pool.allocate()
}
pub fn allocate_network_buffer(&self) -> Result<PooledMemory, MemoryPoolError> {
self.network_pool.allocate()
}
pub fn compress_asset(&self, asset_data: &[u8]) -> Result<Vec<u8>, CompressionError> {
self.asset_compression.compress(asset_data)
}
pub fn decompress_asset(&self, compressed_data: &[u8]) -> Result<Vec<u8>, CompressionError> {
self.asset_compression.decompress(compressed_data)
}
pub fn get_memory_stats(&self) -> GameMemoryStats {
let entity_count = self.entity_pool.available_count();
let mut component_stats = HashMap::new();
for (name, pool) in &self.component_pools {
component_stats.insert(name.clone(), pool.stats());
}
GameMemoryStats {
available_entities: entity_count,
component_stats,
render_pool_stats: self.render_pool.stats(),
audio_pool_stats: self.audio_pool.stats(),
network_pool_stats: self.network_pool.stats(),
}
}
pub fn frame_cleanup(&self) {
// Perform end-of-frame cleanup
// In a real implementation, you might trigger garbage collection
// or perform other cleanup tasks
}
}
// Game entity structure
pub struct GameEntity {
id: u32,
active: bool,
components: Vec<String>,
}
impl GameEntity {
pub fn new() -> Self {
GameEntity {
id: 0,
active: false,
components: Vec::new(),
}
}
pub fn reset(&mut self) {
self.id = 0;
self.active = false;
self.components.clear();
}
pub fn set_active(&mut self, active: bool) {
self.active = active;
}
pub fn add_component(&mut self, component_type: String) {
self.components.push(component_type);
}
}
// Memory statistics structure
pub struct GameMemoryStats {
pub available_entities: usize,
pub component_stats: HashMap<String, PoolStats>,
pub render_pool_stats: PoolStats,
pub audio_pool_stats: PoolStats,
pub network_pool_stats: PoolStats,
}
impl GameMemoryStats {
pub fn print_summary(&self) {
println!("Game Memory Statistics:");
println!("Available entities: {}", self.available_entities);
for (component_type, stats) in &self.component_stats {
println!("{} components: {}", component_type, stats);
}
println!("Render pool: {}", self.render_pool_stats);
println!("Audio pool: {}", self.audio_pool_stats);
println!("Network pool: {}", self.network_pool_stats);
}
}
// Game system that uses the memory manager
pub struct GameSystem {
memory_manager: Arc<GameEngineMemoryManager>,
active_entities: Mutex<Vec<PooledObject<GameEntity>>>,
}
impl GameSystem {
pub fn new() -> Result<Self, Box<dyn std::error::Error>> {
Ok(GameSystem {
memory_manager: Arc::new(GameEngineMemoryManager::new()?),
active_entities: Mutex::new(Vec::new()),
})
}
pub fn spawn_entity(&self, entity_type: &str) -> Result<(), Box<dyn std::error::Error>> {
let mut entity = self.memory_manager.create_entity();
entity.set_active(true);
// Add components based on entity type
match entity_type {
"Player" => {
entity.add_component("Transform".to_string());
entity.add_component("Render".to_string());
entity.add_component("Physics".to_string());
// Allocate component data
let _transform = self.memory_manager.allocate_component("Transform")?;
let _render = self.memory_manager.allocate_component("Render")?;
let _physics = self.memory_manager.allocate_component("Physics")?;
}
"Enemy" => {
entity.add_component("Transform".to_string());
entity.add_component("Render".to_string());
entity.add_component("Physics".to_string());
entity.add_component("Audio".to_string());
let _transform = self.memory_manager.allocate_component("Transform")?;
let _render = self.memory_manager.allocate_component("Render")?;
let _physics = self.memory_manager.allocate_component("Physics")?;
let _audio = self.memory_manager.allocate_component("Audio")?;
}
_ => {}
}
// Store entity in active list
let mut entities = self.active_entities.lock().unwrap();
entities.push(entity);
Ok(())
}
pub fn update_frame(&self) -> Result<(), Box<dyn std::error::Error>> {
// Render system
let mut render_buffer = self.memory_manager.allocate_render_buffer()?;
self.build_render_commands(render_buffer.as_mut_slice())?;
// Audio system
if let Ok(mut audio_buffer) = self.memory_manager.allocate_audio_buffer() {
self.process_audio(audio_buffer.as_mut_slice())?;
}
// Network system
if let Ok(mut network_buffer) = self.memory_manager.allocate_network_buffer() {
self.process_network_messages(network_buffer.as_mut_slice())?;
}
// Cleanup
self.memory_manager.frame_cleanup();
Ok(())
}
fn build_render_commands(&self, buffer: &mut [u8]) -> Result<(), Box<dyn std::error::Error>> {
// Simulate building render commands
buffer.fill(0x42);
Ok(())
}
fn process_audio(&self, buffer: &mut [u8]) -> Result<(), Box<dyn std::error::Error>> {
// Simulate audio processing
buffer.fill(0x80);
Ok(())
}
fn process_network_messages(&self, buffer: &mut [u8]) -> Result<(), Box<dyn std::error::Error>> {
// Simulate network message processing
buffer.fill(0xFF);
Ok(())
}
pub fn load_compressed_asset(&self, asset_path: &str) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
use std::fs;
// Load compressed asset file
let compressed_data = fs::read(asset_path)?;
// Decompress using the memory manager's compression engine
let decompressed_data = self.memory_manager.decompress_asset(&compressed_data)?;
Ok(decompressed_data)
}
pub fn save_compressed_asset(&self, data: &[u8], asset_path: &str) -> Result<(), Box<dyn std::error::Error>> {
use std::fs;
// Compress the asset data
let compressed_data = self.memory_manager.compress_asset(data)?;
// Save to file
fs::write(asset_path, compressed_data)?;
Ok(())
}
pub fn print_memory_stats(&self) {
let stats = self.memory_manager.get_memory_stats();
stats.print_summary();
}
}
// Usage example
fn run_game_simulation() -> Result<(), Box<dyn std::error::Error>> {
let game_system = GameSystem::new()?;
// Spawn some entities
for i in 0..1000 {
let entity_type = if i % 3 == 0 { "Player" } else { "Enemy" };
game_system.spawn_entity(entity_type)?;
}
// Simulate game loop
for frame in 0..1000 {
game_system.update_frame()?;
if frame % 100 == 0 {
println!("Frame {}: ", frame);
game_system.print_memory_stats();
}
}
// Test asset compression
let test_asset = b"This is a test game asset with repetitive data patterns that should compress well.".repeat(100);
game_system.save_compressed_asset(&test_asset, "test_asset.compressed")?;
let loaded_asset = game_system.load_compressed_asset("test_asset.compressed")?;
assert_eq!(test_asset, loaded_asset);
println!("Asset compression test passed!");
Ok(())
}
This comprehensive documentation provides a complete guide to using the FRD-PU Memory Pool and Compression modules. The examples demonstrate real-world usage patterns for high-performance applications, from web servers to game engines. The zero-dependency philosophy ensures maximum compatibility and minimal overhead, while the detailed error handling patterns provide robust applications that can gracefully handle resource pressure and failure conditions.
The memory pool module excels in scenarios requiring predictable memory allocation patterns, while the compression module provides efficient data storage and transmission without external dependencies. Together, they form a powerful foundation for building high-performance applications that truly embody the "Do more with less" philosophy.
We would love you to contribute on our GitHub:
https://github.com/3Rr0RHACK3R/frd-pu
Join us on our journey to make a great library!
-------------------------- buffer pool module -----------------------------
The Buffer Pool module provides a high-performance, zero-dependency buffer pool implementation designed for extreme efficiency in data processing and streaming applications. This module eliminates constant allocation and deallocation cycles by recycling buffers, making it essential for high-throughput I/O operations and memory-sensitive applications.
The buffer pool system maintains multiple pools for different buffer sizes and automatically manages memory lifecycle, providing significant performance improvements for applications that frequently allocate and deallocate byte buffers.
Built on the same "do more with less" philosophy as the rest of FRD-PU, the Buffer Pool module achieves maximum efficiency through:
Zero-Allocation Reuse: Buffers are recycled instead of being constantly allocated and deallocated, eliminating the performance overhead of memory management in tight loops.
Multiple Pool Sizes: The system maintains separate pools for different buffer sizes, ensuring optimal memory utilization and preventing fragmentation.
Thread-Safe Operations: All buffer pool operations are thread-safe using efficient locking mechanisms, making them suitable for concurrent applications.
Automatic Growth and Shrinking: Pools can grow automatically when under pressure and shrink when buffers are no longer needed, balancing performance with memory efficiency.
Global Convenience Pools: Pre-configured pools for common buffer sizes (1KB, 64KB, 1MB) provide instant access without manual pool management.
Comprehensive Statistics: Built-in monitoring capabilities track buffer usage, allocation patterns, and performance metrics for production optimization.
The buffer pool system consists of three main components:
BufferPool: The main pool structure that manages collections of buffers for different sizes. Each pool maintains its own statistics and configuration.
PooledBuffer: A smart buffer wrapper that automatically returns buffers to their originating pool when dropped. This ensures automatic memory management without manual intervention.
Global Pools: Thread-safe singleton pools for common buffer sizes, providing convenient access without explicit pool creation.
use frd_pu::BufferPool;
// Create a pool with default settings (64 buffers per size)
let pool = BufferPool::new();
// Create a pool with custom maximum buffers per size
let large_pool = BufferPool::with_max_buffers(128);
// Get a buffer from the pool
let mut buffer = pool.get_buffer(8192)?;
// Use the buffer for I/O operations
buffer.as_mut_slice()[0] = 42;
let data = buffer.as_slice();
// Buffer automatically returns to pool when dropped
use frd_pu::{get_small_buffer, get_medium_buffer, get_large_buffer, get_buffer};
// Get pre-sized buffers from global pools
let small = get_small_buffer()?; // 1KB buffer
let medium = get_medium_buffer()?; // 64KB buffer
let large = get_large_buffer()?; // 1MB buffer
// Get automatically sized buffer
let buffer = get_buffer(2048)?; // Uses appropriate pool
// Get a buffer and perform operations
let mut buffer = pool.get_buffer(4096)?;
// Access as mutable slice
let slice = buffer.as_mut_slice();
slice[0] = 100;
// Resize buffer contents
buffer.resize(2048);
// Clear buffer contents
buffer.clear();
// Check buffer properties
println!("Capacity: {}", buffer.capacity());
println!("Length: {}", buffer.len());
println!("Is empty: {}", buffer.is_empty());
// Get detailed pool statistics
let stats = pool.stats()?;
println!("Total allocated: {}", stats.total_allocated);
println!("Buffers in use: {}", stats.buffers_in_use);
println!("Available buffers: {}", stats.available_buffers);
println!("Peak allocation: {}", stats.peak_allocation);
println!("Total acquisitions: {}", stats.total_acquisitions);
println!("Total releases: {}", stats.total_releases);
// Monitor global pools
let (small_stats, medium_stats, large_stats) = get_global_stats()?;
// Clear all buffers from pool (forces deallocation)
pool.clear()?;
// Shrink pool to remove excess buffers
pool.shrink()?;
// Clear all global pools
clear_global_pools()?;
The buffer pool integrates seamlessly with standard Rust I/O operations:
use std::fs::File;
use std::io::Read;
// Use pooled buffer for file I/O
let mut buffer = get_medium_buffer()?;
let mut file = File::open("large_file.dat")?;
// Read directly into pooled buffer
let bytes_read = file.read(buffer.as_mut_slice())?;
buffer.resize(bytes_read);
// Process the data
process_data(buffer.as_slice());
// Buffer automatically returns to pool when dropped
Buffer Size Selection: Choose buffer sizes that match your I/O patterns. Larger buffers reduce system call overhead but consume more memory.
Pool Size Configuration: Set maximum pool sizes based on expected concurrency. Higher limits provide better performance under load but consume more memory (see the sketch after this list).
Buffer Reuse Patterns: Buffers work best when allocation and deallocation happen in similar patterns. Highly variable buffer sizes may reduce reuse effectiveness.
Thread Safety Overhead: While pools are thread-safe, high contention scenarios may benefit from thread-local pools or buffer pre-allocation strategies.
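To make the first two recommendations concrete, here is a minimal sketch; the reader count and buffer size are illustrative assumptions, not tuned values:
use frd_pu::BufferPool;
// Assumption: roughly 32 concurrent readers doing 64KB sequential reads.
let pool = BufferPool::with_max_buffers(32); // retain at most 32 buffers per size class
let mut buffer = pool.get_buffer(64 * 1024)?; // 64KB buffers match the read pattern
let chunk = buffer.as_mut_slice(); // hand this slice to your reader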
The buffer pool system uses comprehensive error handling for robust operation:
use frd_pu::{BufferPool, BufferPoolError};
match pool.get_buffer(0) {
Ok(buffer) => {
// Use buffer
},
Err(BufferPoolError::InvalidSize) => {
println!("Buffer size must be greater than zero");
},
Err(BufferPoolError::PoolExhausted) => {
println!("Pool has reached maximum capacity");
},
Err(BufferPoolError::ConfigError(msg)) => {
println!("Configuration error: {}", msg);
},
Err(e) => {
println!("Unexpected error: {}", e);
}
}
The buffer pool module is designed to work seamlessly with other FRD-PU components:
Data Stream Module: Use pooled buffers for efficient file and network streaming operations.
Memory Pool Module: Combine with memory pools for comprehensive memory management strategies.
Compression Module: Reuse compression and decompression buffers to eliminate allocation overhead.
Cache Module: Use pooled buffers for cached data storage and retrieval operations.
Concurrent Module: Leverage thread-safe pools in multi-threaded data processing pipelines.
All buffer pool operations are fully thread-safe and can be used concurrently across multiple threads without additional synchronization. The implementation uses efficient locking mechanisms that minimize contention while ensuring data integrity.
Global pools are implemented as thread-safe singletons and can be accessed from any thread without explicit synchronization. This makes them ideal for use in multi-threaded applications and async runtimes.
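A minimal sketch of that property, using only the documented get_small_buffer function; each thread acquires and drops its own pooled buffer, so no extra synchronization is needed:
use frd_pu::get_small_buffer;
use std::thread;

let handles: Vec<_> = (0..4)
    .map(|_| {
        thread::spawn(|| {
            let mut buffer = get_small_buffer().expect("global pool access failed");
            buffer.as_mut_slice()[0] = 7; // each thread works on its own 1KB buffer
            // the buffer returns to the global pool when dropped here
        })
    })
    .collect();
for handle in handles {
    handle.join().unwrap();
}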
The buffer pool system is designed for optimal memory efficiency:
Buffers are only allocated when needed and reused whenever possible. Pool sizes can be configured to balance memory usage with performance requirements. Automatic shrinking prevents memory leaks in long-running applications.
The system tracks comprehensive statistics to help optimize memory usage patterns in production environments. Buffer capacity is preserved during reuse to minimize allocation overhead while ensuring buffers meet size requirements.
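From the caller's side, that reuse looks like the sketch below; whether the second acquisition actually reuses the earlier allocation is an internal detail, but the acquisition and release counters make the behavior observable:
use frd_pu::BufferPool;

let pool = BufferPool::new();
{
    let _buffer = pool.get_buffer(4096)?; // first acquisition allocates
} // dropped here: the buffer returns to the pool, not the allocator
let _again = pool.get_buffer(4096)?; // typically served from the pool
let stats = pool.stats()?;
println!("Acquisitions: {}, releases: {}", stats.total_acquisitions, stats.total_releases);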
When deploying buffer pools in production environments, consider these factors:
Monitoring: Use the built-in statistics to monitor buffer usage patterns, allocation rates, and pool efficiency.
Configuration: Tune pool sizes based on actual usage patterns and available memory. Start with conservative settings and increase based on monitoring data.
Error Handling: Implement proper error handling for all buffer pool operations, particularly in high-availability systems.
Memory Pressure: Monitor system memory usage and implement pool shrinking strategies during low-usage periods, as sketched after this list.
Performance Testing: Benchmark buffer pool performance under expected load patterns to validate configuration settings.
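Tying the Monitoring and Memory Pressure points together, a minimal maintenance-pass sketch; the idle thresholds are illustrative placeholders, not recommendations:
use frd_pu::BufferPool;

let pool = BufferPool::new();
// Periodically, e.g. from a maintenance timer:
let stats = pool.stats()?;
if stats.buffers_in_use == 0 && stats.available_buffers > 16 {
    pool.shrink()?; // release retained buffers during a quiet period
}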
The buffer pool module represents a critical component for building high-performance, memory-efficient applications that can scale to handle massive data processing workloads while maintaining minimal resource consumption.
Download the library to try the new features.
----------------- universal processor ---------------------
The Universal Processor is a revolutionary adaptive processing engine that maintains the same efficiency whether processing 1 byte or 1 terabyte of data. Unlike traditional processing systems that require different algorithms for different data sizes, the Universal Processor implements "fractal processing" - a single algorithm that scales fractally and self-optimizes based on data patterns it discovers in real-time.
The Universal Processor embodies the FRD-PU principle of "Do more with less": a single adaptive engine instead of a different algorithm for every data size.
use frd_pu::{UniversalProcessor, process_adaptive};
// Method 1: Using convenience function (recommended for beginners)
let mut data = vec![1, 2, 3, 4, 5];
process_adaptive(&mut data, |x| *x *= 2)?;
// data is now [2, 4, 6, 8, 10]
// Method 2: Using processor instance
let processor = UniversalProcessor::new();
let mut data = vec![1.0, 2.0, 3.0, 4.0];
processor.execute(&mut data, |x| *x = x.sqrt())?;
use frd_pu::{create_transform_processor, create_aggregate_processor};
// For data transformation operations
let transform_proc = create_transform_processor();
let mut numbers = vec![1, 2, 3, 4, 5];
transform_proc.execute(&mut numbers, |x| *x = *x * *x)?;
// For aggregation operations (sum, count, average)
let agg_proc = create_aggregate_processor();
// Processing logic automatically optimized for aggregation patterns
UniversalProcessor
The main processing engine that adapts to any computational task.
Creation Methods:
// Default processor with real-time optimization
let processor = UniversalProcessor::new();
// Customized processor
let processor = UniversalProcessor::new()
.with_pattern(ProcessingPattern::Transform)
.with_scaling(ScalingBehavior::Fractal)
.with_optimization(OptimizationMode::Throughput);
Key Methods:
execute: Process mutable data with automatic optimization
processor.execute(&mut data, |item| {
// Transform each item
*item = process_item(*item);
})?;
execute_custom: Process immutable data with custom return type
let result = processor.execute_custom(&data, |slice| {
// Custom processing that returns a result
slice.iter().sum::<i32>()
})?;
execute_batch: Process multiple data batches efficiently
let mut batch1 = vec![1, 2, 3];
let mut batch2 = vec![4, 5, 6];
let mut batches = vec![batch1.as_mut_slice(), batch2.as_mut_slice()];
processor.execute_batch(&mut batches, |x| *x += 10)?;
analyze_pattern: Analyze data characteristics for optimization
let data_bytes = vec![1u8, 2, 3, 4, 5, 1, 2, 3];
let pattern = processor.analyze_pattern(&data_bytes)?;
println!("Repetition factor: {}", pattern.repetition_factor);
println!("Optimal chunk size: {}", pattern.optimal_chunk_size);
ProcessingPattern
Defines the type of operation being performed for optimal strategy selection.
Variants:
ProcessingPattern::Transform: Element-by-element modifications.
ProcessingPattern::Aggregate: Reductions and summations.
ProcessingPattern::Filter: Conditional processing.
ProcessingPattern::Mathematical: Complex calculations.
Usage:
let processor = UniversalProcessor::new()
.with_pattern(ProcessingPattern::Mathematical);
// Processor now optimizes for mathematical operations
let mut values = vec![1.0, 4.0, 9.0, 16.0];
processor.execute(&mut values, |x| *x = x.sqrt())?;
ScalingBehavior
Controls how the processor scales with data size.
Variants:
ScalingBehavior::Fractal: Maximum efficiency across all data sizes.
ScalingBehavior::Adaptive: Automatic strategy selection when data size varies.
ScalingBehavior::Linear: Predictable, uniform processing.
ScalingBehavior::Batch: Processing multiple similar datasets.
Usage:
// For maximum efficiency at any scale
let processor = UniversalProcessor::new()
.with_scaling(ScalingBehavior::Fractal);
// For automatic strategy selection
let processor = UniversalProcessor::new()
.with_scaling(ScalingBehavior::Adaptive);
OptimizationMode
Defines performance optimization priorities.
Variants:
OptimizationMode::Latency: Real-time applications that need fast response.
OptimizationMode::Throughput: Batch processing at maximum rate.
OptimizationMode::Memory: Resource-constrained environments.
OptimizationMode::RealTime: Balanced performance.
OptimizationMode::Custom: User-specified latency/throughput/memory weights.
Usage:
// Optimize for maximum throughput
let processor = UniversalProcessor::new()
.with_optimization(OptimizationMode::Throughput);
// Custom optimization weights (latency: 30%, throughput: 50%, memory: 20%)
let processor = UniversalProcessor::new()
.with_optimization(OptimizationMode::Custom {
latency: 3,
throughput: 5,
memory: 2
});
The Universal Processor automatically analyzes data patterns and caches optimization strategies:
let processor = UniversalProcessor::new();
// First analysis discovers pattern and caches optimization
let data1 = vec![1u8; 1000]; // Low entropy, high repetition
let pattern = processor.analyze_pattern(&data1)?;
println!("Entropy: {:.2}", pattern.entropy);
println!("Repetition: {:.2}", pattern.repetition_factor);
println!("Confidence: {:.2}", pattern.confidence);
// Similar data will reuse cached optimization strategy
let data2 = vec![2u8; 1000];
processor.analyze_pattern(&data2)?; // Uses cached pattern
Track processing statistics for performance optimization:
let processor = UniversalProcessor::new();
// Perform multiple operations
let mut data1 = vec![1, 2, 3];
let mut data2 = vec![4, 5, 6];
processor.execute(&mut data1, |x| *x += 1)?;
processor.execute(&mut data2, |x| *x *= 2)?;
// Check performance statistics
let stats = processor.stats()?;
println!("Total operations: {}", stats.total_operations);
println!("Average throughput: {:.2} bytes/sec", stats.average_throughput);
println!("Memory efficiency: {:.2}", stats.memory_efficiency);
println!("Pattern accuracy: {:.2}", stats.pattern_accuracy);
// Clear statistics for fresh monitoring
processor.clear_stats()?;
The processor adapts to changing system conditions:
let mut processor = UniversalProcessor::new();
// Check current system context
let context = processor.context();
println!("Available CPU cores: {}", context.cpu_cores);
println!("Preferred chunk size: {}", context.preferred_chunk_size);
println!("Max parallel workers: {}", context.max_parallel_workers);
// Update context if system conditions change
processor.update_context();
Efficiently process multiple related datasets:
let processor = UniversalProcessor::new()
.with_scaling(ScalingBehavior::Batch);
// Prepare multiple data batches
let mut dataset1 = vec![1, 2, 3, 4, 5];
let mut dataset2 = vec![10, 20, 30, 40, 50];
let mut dataset3 = vec![100, 200, 300, 400, 500];
let mut batches = vec![
dataset1.as_mut_slice(),
dataset2.as_mut_slice(),
dataset3.as_mut_slice()
];
// Process all batches with shared optimization
processor.execute_batch(&mut batches, |x| *x = *x * *x)?;
Pre-configured processors for common use cases:
Transform Processor: Optimized for data transformation
use frd_pu::create_transform_processor;
let processor = create_transform_processor();
let mut data = vec!["hello".to_string(), "world".to_string(), "rust".to_string()];
processor.execute(&mut data, |s| {
*s = s.to_uppercase(); // owned Strings can be replaced in place
})?;
// Alternative when the input should be left untouched:
let data = vec!["hello", "world", "rust"];
let result = processor.execute_custom(&data, |slice| {
slice.iter().map(|s| s.to_uppercase()).collect::<Vec<_>>()
})?;
Aggregate Processor: Optimized for aggregation operations
use frd_pu::create_aggregate_processor;
let processor = create_aggregate_processor();
let data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
let sum = processor.execute_custom(&data, |slice| {
slice.iter().sum::<i32>()
})?;
let average = processor.execute_custom(&data, |slice| {
slice.iter().sum::<i32>() as f64 / slice.len() as f64
})?;
Filter Processor: Optimized for filtering operations
use frd_pu::create_filter_processor;
let processor = create_filter_processor();
let mut data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// Mark even numbers for removal (set to 0)
processor.execute(&mut data, |x| {
if *x % 2 == 0 {
*x = 0;
}
})?;
// Filter out zeros
data.retain(|&x| x != 0);
// data is now [1, 3, 5, 7, 9]
Math Processor: Optimized for mathematical operations
use frd_pu::create_math_processor;
let processor = create_math_processor();
let mut data = vec![1.0, 4.0, 9.0, 16.0, 25.0];
// Calculate square roots
processor.execute(&mut data, |x| *x = x.sqrt())?;
// data is now [1.0, 2.0, 3.0, 4.0, 5.0]
// Complex mathematical operation
let mut complex_data = vec![1.0, 2.0, 3.0, 4.0];
processor.execute(&mut complex_data, |x| {
*x = (*x * 2.0 + 1.0).sin().abs();
})?;
process_adaptive: Automatic pattern detection and optimization
use frd_pu::process_adaptive;
let mut data = vec![1, 2, 3, 4, 5];
process_adaptive(&mut data, |x| *x = *x * *x + 1)?;
// Automatically chooses best processing strategy
process_fractal: Maximum efficiency fractal processing
use frd_pu::process_fractal;
let mut large_data: Vec<i32> = (0..1000000).collect();
process_fractal(&mut large_data, |x| *x += 1)?;
// Maintains efficiency even for very large datasets
For small datasets, the processor automatically uses sequential processing:
let processor = UniversalProcessor::new()
.with_optimization(OptimizationMode::Latency);
let mut small_data = vec![1, 2, 3, 4, 5];
processor.execute(&mut small_data, |x| *x *= 2)?;
// Uses optimized sequential processing
Medium datasets benefit from parallel processing:
let processor = UniversalProcessor::new()
.with_optimization(OptimizationMode::RealTime);
let mut medium_data: Vec<i32> = (0..10000).collect();
processor.execute(&mut medium_data, |x| *x = *x * *x)?;
// Automatically uses parallel processing
Large datasets use fractal processing for maximum efficiency:
let processor = UniversalProcessor::new()
.with_scaling(ScalingBehavior::Fractal)
.with_optimization(OptimizationMode::Throughput);
let mut large_data: Vec<f64> = (0..1000000).map(|x| x as f64).collect();
processor.execute(&mut large_data, |x| *x = x.sqrt())?;
// Uses fractal processing to maintain efficiency
For limited memory situations:
let processor = UniversalProcessor::new()
.with_optimization(OptimizationMode::Memory)
.with_scaling(ScalingBehavior::Adaptive);
// Process in streaming chunks to minimize memory usage
let mut huge_data: Vec<i32> = (0..10000000).collect();
processor.execute(&mut huge_data, |x| *x += 1)?;
// Automatically uses streaming processing
The Universal Processor uses comprehensive error handling:
use frd_pu::{UniversalProcessor, UniversalProcessorError};
let processor = UniversalProcessor::new();
let mut data = vec![1, 2, 3];
let operation = |x: &mut i32| *x += 1;
match processor.execute(&mut data, operation) {
Ok(()) => println!("Processing completed successfully"),
Err(UniversalProcessorError::InvalidInput) => {
println!("Input data is invalid or corrupted");
},
Err(UniversalProcessorError::ProcessingFailed(msg)) => {
println!("Processing failed: {}", msg);
},
Err(UniversalProcessorError::InsufficientResources) => {
println!("Not enough system resources");
},
Err(UniversalProcessorError::PatternAnalysisError) => {
println!("Could not analyze data pattern");
},
Err(UniversalProcessorError::ConfigError(msg)) => {
println!("Configuration error: {}", msg);
},
Err(UniversalProcessorError::UnsupportedOperation) => {
println!("Operation not supported for current data type");
},
}
use frd_pu::create_transform_processor;
// Simulate RGB pixel data
#[derive(Clone)]
struct Pixel { r: u8, g: u8, b: u8 }
let processor = create_transform_processor();
let mut image_data = vec![
Pixel { r: 100, g: 150, b: 200 };
1920 * 1080 // Full HD image
];
// Apply brightness adjustment
processor.execute(&mut image_data, |pixel| {
pixel.r = (pixel.r as f32 * 1.2).min(255.0) as u8;
pixel.g = (pixel.g as f32 * 1.2).min(255.0) as u8;
pixel.b = (pixel.b as f32 * 1.2).min(255.0) as u8;
})?;
// Automatically uses optimal processing strategy for image size
use frd_pu::create_aggregate_processor;
#[derive(Clone)]
struct StockPrice {
timestamp: u64,
price: f64,
volume: u64,
}
let processor = create_aggregate_processor();
let stock_data = vec![
StockPrice { timestamp: 1, price: 100.0, volume: 1000 };
100000 // 100k data points
];
// Calculate moving average
let moving_avg = processor.execute_custom(&stock_data, |data| {
let window_size = 20;
let mut averages = Vec::new();
for i in window_size..data.len() {
let sum: f64 = data[i-window_size..i].iter()
.map(|stock| stock.price)
.sum();
averages.push(sum / window_size as f64);
}
averages
})?;
use frd_pu::create_math_processor;
let processor = create_math_processor();
// Simulate 3D coordinates
#[derive(Clone)]
struct Vector3D { x: f64, y: f64, z: f64 }
let mut points = vec![
Vector3D { x: 1.0, y: 2.0, z: 3.0 };
1000000 // 1 million 3D points
];
// Normalize all vectors
processor.execute(&mut points, |point| {
let magnitude = (point.x * point.x +
point.y * point.y +
point.z * point.z).sqrt();
if magnitude > 0.0 {
point.x /= magnitude;
point.y /= magnitude;
point.z /= magnitude;
}
})?;
// Uses fractal processing for maximum efficiency
use frd_pu::create_filter_processor;
let processor = create_filter_processor();
let text_data = vec![
"The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog"
];
// Filter and transform words
let filtered_words = processor.execute_custom(&text_data, |words| {
words.iter()
.filter(|word| word.len() > 3) // Only words longer than 3 chars
.map(|word| word.to_uppercase())
.collect::<Vec<_>>()
})?;
// Result: ["QUICK", "BROWN", "JUMPS", "OVER", "LAZY"]
use frd_pu::process_adaptive;
#[derive(Clone)]
struct SensorReading {
sensor_id: u32,
temperature: f32,
humidity: f32,
timestamp: u64,
}
let mut sensor_data = vec![
SensorReading {
sensor_id: 1,
temperature: 25.0,
humidity: 60.0,
timestamp: 1234567890
};
50000 // 50k sensor readings
];
// Apply calibration corrections
process_adaptive(&mut sensor_data, |reading| {
// Apply sensor-specific calibration
match reading.sensor_id {
1 => reading.temperature += 0.5, // Sensor 1 runs cold
2 => reading.humidity -= 2.0, // Sensor 2 reads high
_ => {}
}
// Convert Celsius to Fahrenheit
reading.temperature = reading.temperature * 9.0 / 5.0 + 32.0;
})?;
Best practices:
ProcessingPattern::Transform for element-by-element modifications
ProcessingPattern::Aggregate for reductions and summations
ProcessingPattern::Filter for conditional processing
ProcessingPattern::Mathematical for complex calculations
OptimizationMode::Latency for real-time applications
OptimizationMode::Throughput for batch processing
OptimizationMode::Memory for resource-constrained environments
OptimizationMode::RealTime for balanced performance
ScalingBehavior::Fractal for maximum efficiency across all data sizes
ScalingBehavior::Adaptive when data size varies significantly
ScalingBehavior::Linear for predictable, uniform processing
ScalingBehavior::Batch for processing multiple similar datasets
Call clear_stats() periodically to prevent memory growth in long-running applications
Prefer OptimizationMode::Memory for large datasets in constrained environments
The Universal Processor integrates seamlessly with other FRD-PU modules:
use frd_pu::{
UniversalProcessor,
create_transform_processor,
BloomFilter,
LruCache,
compress_data
};
// Combined processing pipeline
let processor = create_transform_processor();
let mut cache = LruCache::new(1000);
let mut bloom_filter = BloomFilter::new(10000, 0.01)?;
let mut data = vec![1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
// Process data
processor.execute(&mut data, |x| *x *= 2)?;
// Cache results
for &value in &data {
cache.insert(value, value * 10)?;
}
// Check membership
for &value in &data {
bloom_filter.insert(&value.to_be_bytes());
}
// Compress processed data
let data_bytes: Vec<u8> = data.iter()
.flat_map(|&x| x.to_be_bytes().to_vec())
.collect();
let compressed = compress_data(&data_bytes)?;
For the complete source code, examples, and contribution guidelines, visit the FRD-PU repository: https://github.com/3Rr0RHACK3R/frd-pu
The Universal Processor module represents the pinnacle of adaptive processing technology, providing unmatched efficiency and scalability while maintaining the FRD-PU philosophy of zero dependencies and minimal resource consumption.
------------------- TCP Server Module ----------------------
tcp_server Module Documentation
Overview
The tcp_server module provides a production-ready, high-performance, and cross-platform TCP server. It is built with the core FRD-PU philosophy of zero dependencies and maximum efficiency.
Key Features:
Cross-Platform: Works seamlessly on Windows, Linux, and macOS.
Non-blocking Architecture: Uses a non-blocking, single-threaded event loop to handle massive concurrency without the overhead of managing multiple threads for connections.
Advanced Connection Management: Supports TCP keep-alive and provides hooks for graceful connection shutdown.
Built-in DoS Protection: Includes a configurable rate limiter for each connection.
Comprehensive Statistics: Offers detailed, real-time metrics for monitoring server health and performance.
Full IPv4 & IPv6 Support.
Quick Start
The easiest way to get started is by using one of the convenience functions. This example creates a simple echo server that listens on 127.0.0.1:8080.
use frd_pu::tcp_server::{new_echo_server, TcpServerError};
fn main() -> Result<(), TcpServerError> {
let mut server = new_echo_server("127.0.0.1:8080")?;
println!("Echo server listening on 127.0.0.1:8080");
// This is a blocking call that runs the server's event loop.
server.serve()
}
ConnectionHandler Trait
To implement custom logic, you must create a struct that implements the ConnectionHandler trait. This trait is the core of your server's application logic.
pub trait ConnectionHandler: Send + Sync + 'static {
// Required method to process incoming data.
fn handle_data(&self, data: &[u8], addr: SocketAddr) -> Option<Vec<u8>>;
// Optional lifecycle hooks.
fn on_connect(&self, addr: SocketAddr) {}
fn on_disconnect(&self, addr: SocketAddr) {}
fn on_server_start(&self) {}
fn on_server_shutdown(&self) {}
// Health check for monitoring.
fn health_check(&self) -> bool { true }
}
Example: Custom Handler
Here is an example of a simple handler that processes text-based commands.
use frd_pu::tcp_server::{ConnectionHandler, new_tcp_server, TcpServerError};
use std::net::SocketAddr;
// 1. Define your handler struct.
struct CommandHandler;
// 2. Implement the ConnectionHandler trait.
impl ConnectionHandler for CommandHandler {
fn handle_data(&self, data: &[u8], _addr: SocketAddr) -> Option<Vec<u8>> {
// Decode the incoming bytes into a trimmed command string.
let command = String::from_utf8_lossy(data).trim().to_string();
let response = match command.as_str() {
"ping" => "pong",
"time" => "It's time to code!",
_ => "Unknown command",
};
// Return the response to be sent to the client.
// The response must be converted to bytes.
Some(response.as_bytes().to_vec())
}
fn on_connect(&self, addr: SocketAddr) {
println!("New client connected: {}", addr);
}
fn on_disconnect(&self, addr: SocketAddr) {
println!("Client disconnected: {}", addr);
}
}
fn main() -> Result<(), TcpServerError> {
    // 3. Create a server with an instance of your handler.
    let handler = CommandHandler;
    let mut server = new_tcp_server("127.0.0.1:9001", handler)?;
    println!("Command server listening on 127.0.0.1:9001");
    server.serve()
}
Server Configuration
The server can be extensively customized using the ServerConfig struct. You can create a server with a custom configuration using the new_tcp_server_with_config function.
ServerConfig Struct
#[derive(Debug, Clone)]
pub struct ServerConfig {
    pub max_connections: usize,      // Default: 10,000
    pub buffer_size: usize,          // Default: 64 KB
    pub timeout: Duration,           // Default: 300 seconds (5 minutes)
    pub rate_limit: usize,           // Default: 1000 requests/sec
    pub rate_limit_window: Duration, // Default: 1 second
    pub max_request_size: usize,     // Default: 1 MB
    pub enable_keepalive: bool,      // Default: true
    pub enable_nodelay: bool,        // Default: true
    pub socket_buffer_size: usize,   // Default: 256 KB
    // ... other fields
}
// To get a default configuration:
let config = ServerConfig::default();
Example: Custom Configuration
use frd_pu::tcp_server::{new_tcp_server_with_config, ServerConfig, EchoHandler, TcpServerError};
use std::time::Duration;
fn main() -> Result<(), TcpServerError> {
let config = ServerConfig {
max_connections: 100,
timeout: Duration::from_secs(60),
rate_limit: 10, // 10 requests per second
..ServerConfig::default()
};
let mut server = new_tcp_server_with_config("127.0.0.1:8080", EchoHandler, config)?;
server.serve()
}
Monitoring and Statistics
The server keeps detailed statistics about its operation. You can retrieve these stats at any time without impacting performance.
ConnectionStatsSnapshot Struct
The get_stats() method returns a ConnectionStatsSnapshot, which contains a copy of the server's current metrics.
#[derive(Debug, Clone)]
pub struct ConnectionStatsSnapshot {
    pub total_connections: usize,
    pub active_connections: usize,
    pub bytes_received: u64,
    pub bytes_sent: u64,
    pub messages_processed: u64,
    pub errors_count: usize,
    pub rate_limited_count: usize,
    pub timeout_count: usize,
    pub uptime_seconds: u64,
}
Example: Printing Stats
You can use the built-in print_stats() method or retrieve the snapshot for custom logging.
// In a separate thread, you could monitor the server.
// let server_arc = Arc::new(server); // Server would need to be wrapped in Arc/Mutex
// Periodically...
// server_arc.lock().unwrap().print_stats();
// Or get the data for custom logging:
// let stats = server_arc.lock().unwrap().get_stats();
// println!("Active connections: {}", stats.active_connections);
Error Handling
The tcp_server module uses a comprehensive TcpServerError enum to report issues. All public functions that can fail will return a Result<T, TcpServerError>.
TcpServerError Enum
#[derive(Debug)]
pub enum TcpServerError {
    IoError(io::Error),
    BindError { addr: String, source: io::Error },
    ServerShutdown,
    ConnectionError { addr: SocketAddr, source: io::Error },
    BufferOverflow { size: usize, max: usize },
    RateLimitExceeded { addr: SocketAddr, limit: usize },
    TimeoutError { addr: SocketAddr, timeout: Duration },
    ConfigurationError(String),
    SocketOptionError { description: String, source: io::Error },
    // ... other variants
}
Advanced Guide
Graceful Shutdown
For production applications, you need a way to shut down the server gracefully. The serve() method is blocking, so it should be run in a separate thread. You can then use the stop() method to signal the server to shut down.
use frd_pu::tcp_server::{new_echo_server, TcpServer, EchoHandler};
use std::sync::Arc;
use std::thread;
use std::time::Duration;
fn main() {
// The server needs to be mutable to call serve(), so we wrap it for thread safety.
let server = Arc::new(new_echo_server("127.0.0.1:8080").unwrap());
let server_clone = server.clone();
// Run the server in its own thread.
let server_handle = thread::spawn(move || {
// We need a mutable reference to the server inside the thread.
// This is a bit tricky. A better approach is to not wrap the server in an Arc
// until after it's moved into the thread.
// For this example, we'll assume a more complex setup with channels.
// A simplified, conceptual example:
// let mut s = Arc::try_unwrap(server_clone).expect("Failed to get server");
// s.serve().unwrap();
println!("Server thread would run here.");
});
// In your main thread, you can wait for a shutdown signal (e.g., Ctrl-C).
println!("Server is running. Press Ctrl-C to stop.");
// In a real app, you'd use a crate like `ctrlc` to handle this.
thread::sleep(Duration::from_secs(10)); // Simulate running for 10 seconds.
// Signal the server to stop.
println!("Shutting down server...");
server.stop();
// Wait for the server thread to finish.
// server_handle.join().unwrap();
println!("Server shut down gracefully.");
}
Note: The above example is conceptual. A robust implementation would use channels or a library like tokio for managing the server lifecycle.
Sharing State in a Handler
Since the ConnectionHandler is shared across all potential connections, any state it holds must be thread-safe. Use Arc and Mutex (or RwLock) for this.
use frd_pu::tcp_server::ConnectionHandler;
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::{Arc, Mutex};
struct AppState {
    user_sessions: HashMap<SocketAddr, String>,
}
// Your handler now holds a reference to the shared state.
struct StatefulHandler {
    state: Arc<Mutex<AppState>>,
}
impl ConnectionHandler for StatefulHandler {
fn on_connect(&self, addr: SocketAddr) {
let mut state = self.state.lock().unwrap();
state.user_sessions.insert(addr, "new_user".to_string());
println!("Total sessions: {}", state.user_sessions.len());
}
fn on_disconnect(&self, addr: SocketAddr) {
let mut state = self.state.lock().unwrap();
state.user_sessions.remove(&addr);
println!("Total sessions: {}", state.user_sessions.len());
}
// handle_data would also lock the state to read/write session data.
fn handle_data(&self, data: &[u8], addr: SocketAddr) -> Option<Vec<u8>> {
let state = self.state.lock().unwrap();
if let Some(user) = state.user_sessions.get(&addr) {
// Process data for the user...
}
Some(b"OK".to_vec())
}
}
// In main():
let state = Arc::new(Mutex::new(AppState { user_sessions: HashMap::new() }));
let handler = StatefulHandler { state: state.clone() };
// let mut server = new_tcp_server("...", handler)?;
Protocol Design: Message Framing
TCP is a stream-based protocol, not a message-based one. This means data can arrive in arbitrary chunks. If you send "HELLO" and "WORLD" from a client, the server might receive "HE", "LLOWOR", and "LD" in three separate handle_data calls.
To handle this, you must implement message framing. A common technique is to prefix each message with its length.
Example: Length-Prefixed Framing
Client:
Message: HELLO (5 bytes)
Prefix: 5 as a 4-byte integer (e.g., [0, 0, 0, 5])
Send: [0, 0, 0, 5, H, E, L, L, O]
Server (handle_data):
1. Maintain a buffer for each connection.
2. Append incoming data to the connection's buffer.
3. Check if the buffer contains at least 4 bytes (for the length prefix).
4. If yes, read the length L.
5. Check if the buffer contains at least 4 + L bytes.
6. If yes, you have a complete message. Process the L bytes, and remove 4 + L bytes from the buffer.
7. Repeat until the buffer doesn't contain a full message.
This logic would be managed inside your ConnectionHandler, likely using a HashMap to store buffers for each SocketAddr, as in the sketch below.
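Here is a minimal sketch of that idea. Only the ConnectionHandler trait comes from this module; FramedHandler and its echo-the-payload behavior are illustrative assumptions:
use frd_pu::tcp_server::ConnectionHandler;
use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::Mutex;

struct FramedHandler {
    // One reassembly buffer per connection, keyed by peer address.
    buffers: Mutex<HashMap<SocketAddr, Vec<u8>>>,
}

impl ConnectionHandler for FramedHandler {
    fn handle_data(&self, data: &[u8], addr: SocketAddr) -> Option<Vec<u8>> {
        let mut buffers = self.buffers.lock().unwrap();
        let buffer = buffers.entry(addr).or_default();
        buffer.extend_from_slice(data); // step 2: append the new chunk
        let mut out = Vec::new();
        // steps 3-7: drain every complete [len][payload] frame currently buffered
        while buffer.len() >= 4 {
            let len = u32::from_be_bytes([buffer[0], buffer[1], buffer[2], buffer[3]]) as usize;
            if buffer.len() < 4 + len {
                break; // partial frame: wait for the next handle_data call
            }
            let payload: Vec<u8> = buffer.drain(..4 + len).skip(4).collect();
            out.extend_from_slice(&payload); // process the message (here: echo it back)
        }
        if out.is_empty() { None } else { Some(out) }
    }

    fn on_disconnect(&self, addr: SocketAddr) {
        self.buffers.lock().unwrap().remove(&addr); // discard any partial frame
    }
}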
--------------- Vector and Matrix ---------------- WARNING: the slang changes in this section because this module has a slightly funnier coder
Because who needs BLAS when you can roll your own matrix multiplication? 🤡
This module is the linear algebra equivalent of "I can fix her" energy - it's a zero-dependency, high-performance matrix and vector library that thinks it can replace decades of optimized math libraries with some spicy SIMD instructions. Spoiler alert: it's actually pretty decent!
The module provides two APIs: a safe one that allocates for you, and a manual-memory one that wraps slices you already own.
Matrix<'a>
A matrix that stores f64 values in row-major order (because we're not monsters who use column-major).
pub struct Matrix<'a> {
data: Vec<f64>, // The actual numbers, stored flat like your jokes
rows: usize, // How many rows we pretending to have
cols: usize, // How many columns we got
_phantom: std::marker::PhantomData<&'a mut [f64]>, // Lifetime wizardry ✨
}
Matrix::new(rows: usize, cols: usize) -> Self
Creates a new matrix filled with zeros, because starting from nothing is very relatable.
Example:
let matrix = Matrix::new(3, 4); // 3x4 matrix of pure disappointment (zeros)
Matrix::from_slice_mut<'a>(rows: usize, cols: usize, data_slice: &'a mut [f64]) -> Matrix<'a>
The "I brought my own data" constructor. Takes a mutable slice and yeets it into a Vec using some unsafe magic that would make your CS professor cry.
⚠️ WARNING: This function uses Vec::from_raw_parts and std::mem::forget. It's like doing parkour - looks cool but one wrong move and you're debugging memory corruption at 3 AM.
Panics: If your slice size doesn't match rows * cols. The code will roast you harder than Gordon Ramsay.
Example:
let mut data = vec![1.0, 2.0, 3.0, 4.0];
let matrix = Matrix::from_slice_mut(2, 2, &mut data);
get(&self, row: usize, col: usize) -> Option<&f64>
Gets an element without panicking like a civilized function. Returns None if you're trying to access the matrix equivalent of your dating life (out of bounds).
get_mut(&mut self, row: usize, col: usize) -> Option<&mut f64>
Like get() but spicier - lets you actually change the values.
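A quick sketch of both accessors (the values are arbitrary):
let mut m = Matrix::new(2, 2);
if let Some(cell) = m.get_mut(0, 1) {
    *cell = 3.5; // mutate in place, no panic required
}
assert_eq!(m.get(0, 1), Some(&3.5));
assert!(m.get(5, 5).is_none()); // out of bounds: you get None, not a crash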
mul_matrix_fast(&self, other: &Self) -> Option<Self>
The star of the show! Matrix multiplication that's faster than your ex's rebound relationship.
How it works:
Returns None if dimensions don't match (with a passive-aggressive error message).
SIMD Magic: If your CPU supports AVX2, it processes 4 elements at once like a mathematical speed demon.
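In practice, dimensions are the only thing it is strict about:
let a = Matrix::new(2, 3);
let b = Matrix::new(3, 2);
assert!(a.mul_matrix_fast(&b).is_some()); // 2x3 times 3x2 = 2x2, math approves
assert!(a.mul_matrix_fast(&a).is_none()); // 2x3 times 2x3 = skill issue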
Vector<'a>
A vector that's basically a fancy array with delusions of grandeur.
pub struct Vector<'a> {
data: Vec<f64>, // The numbers go brrr
len: usize, // How long is this thing
_phantom: std::marker::PhantomData<&'a mut [f64]>, // More lifetime sorcery
}
Vector::new(len: usize) -> Self
Creates a zero-filled vector for when you need to start your disappointment from scratch.
Vector::from_slice_mut<'a>(len: usize, data_slice: &'a mut [f64]) -> Vector<'a>
Same energy as the Matrix version - takes your slice and does unsafe things to it.
get(&self, index: usize) -> Option<&f64>
Safe element access that won't crash your program (unlike your life choices).
get_mut(&mut self, index: usize) -> Option<&mut f64>
Mutable access for when you need to change things up.
dot(&self, other: &Self) -> Option<f64>
Calculates the dot product with SIMD optimization because regular multiplication is for peasants.
Returns None if vector lengths don't match (skill issue).
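For example:
let a = Vector::new(3);
let b = Vector::new(4);
assert!(a.dot(&b).is_none()); // mismatched lengths: no dot product for you
let c = Vector::new(4);
assert_eq!(b.dot(&c), Some(0.0)); // zero-filled vectors, zero dot product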
This library is flexing harder than a gym bro with its AVX2 optimizations:
A dedicated AVX2 kernel (mul_matrix_avx)
Processes 4 f64 elements simultaneously
Built on _mm256_add_pd and _mm256_mul_pd
All wrapped in unsafe because we live dangerously
Standard API: Uses standard Rust Vec and the global allocator. Safe, predictable, boring.
let mut matrix = Matrix::new(100, 100);
let mut vector = Vector::new(100);
// Rust handles memory like a responsible adult
Manual API: Takes pre-allocated slices and does dark magic with Vec::from_raw_parts.
let mut data = vec![0.0; 10000];
let matrix = Matrix::from_slice_mut(100, 100, &mut data);
// You're now responsible for not breaking everything
The library uses conditional compilation for SIMD:
#[cfg(target_feature = "avx2")] - For the chosen CPUs
#[cfg(not(target_feature = "avx2"))] - For everyone else (sad)
This library believes in:
Option types instead of panicking (mostly)
eprintln! for the passive-aggressive error messages
// Create matrices the easy way
let mut a = Matrix::new(3, 3);
let mut b = Matrix::new(3, 3);
// Fill them with actual data (not shown because that's homework)
// Multiply like a boss
if let Some(result) = a.mul_matrix_fast(&b) {
println!("Success! We did math!");
} else {
println!("Skill issue detected");
}
// Vector operations
let v1 = Vector::new(1000);
let v2 = Vector::new(1000);
if let Some(dot_product) = v1.dot(&v2) {
println!("Dot product: {}", dot_product);
}
According to the comments, more optimizations are on the roadmap as future work.
The manual memory management API is about as safe as juggling chainsaws: it leans on unsafe blocks with Vec::from_raw_parts and std::mem::forget, so one wrong move and the cleanup is on you.
Remember: With great SIMD comes great segfaults. 🚀💥