⚡ fastalloc

A high-performance memory pooling library for Rust with type-safe handles and zero-cost abstractions

🚀 Up to 1.4x faster allocation with predictable latency and zero fragmentation

🛠 Perfect for: Game engines, real-time systems, embedded applications, and high-churn workloads

📖 Overview

fastalloc is a memory pooling library that provides efficient, type-safe memory management with minimal overhead. It's designed for performance-critical applications where allocation speed and memory locality matter.

Why fastalloc?

  • ⚡ Blazing Fast: Significantly reduces allocation/deallocation overhead
  • 🧠 Smart Memory Management: Reduces memory fragmentation and improves cache locality
  • 🛡️ Memory Safe: Leverages Rust's type system for safety without sacrificing performance
  • 🔄 Flexible: Multiple allocation strategies and pool types for different use cases
  • 🌐 no_std Support: Works in embedded and bare-metal environments

✨ Features

  • 🚀 Multiple Pool Types:

    • Fixed-size pools for predictable memory usage
    • Growing pools for dynamic workloads
    • Thread-local and thread-safe variants
  • 🔧 Multiple Allocation Strategies:

    • Stack-based (LIFO) for maximum speed
    • Free-list for better memory utilization
    • Bitmap-based for precise control
  • ⚡ Performance Optimizations:

    • Lock-free operations where possible
    • Configurable cache-line alignment for optimal CPU cache utilization
    • Zero-copy access patterns: direct memory access without extra indirection
  • 🛠 Developer Experience:

    • 🔒 Type-safe RAII handles that automatically return objects to the pool
    • ⚙️ Flexible configuration via a builder pattern with extensive customization options
    • 📊 Optional statistics for tracking allocation patterns and pool usage
    • 🌐 no_std support for embedded and bare-metal environments
    • 📦 Small footprint: minimal dependencies, < 3K SLOC core library
    • Comprehensive documentation, examples, and extensive test coverage

Version 1.5.0 is a production-ready release with performance optimizations and comprehensive documentation.
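To illustrate the bitmap strategy listed above, here is a minimal self-contained sketch (a toy model for exposition, not fastalloc's actual implementation): one bit per slot packed into a u64, with a bit scan finding the first free slot in constant time.

```rust
/// Toy bitmap allocator over 64 slots: bit i set means slot i is in use.
struct BitmapSlots {
    bits: u64,
}

impl BitmapSlots {
    fn new() -> Self {
        BitmapSlots { bits: 0 }
    }

    /// Claim the lowest free slot, if any (O(1) via a trailing-zeros bit scan).
    fn allocate(&mut self) -> Option<usize> {
        let free = !self.bits; // 1-bits mark free slots
        if free == 0 {
            return None; // all 64 slots in use
        }
        let idx = free.trailing_zeros() as usize;
        self.bits |= 1 << idx;
        Some(idx)
    }

    /// Release a previously allocated slot.
    fn free(&mut self, idx: usize) {
        self.bits &= !(1 << idx);
    }
}

fn main() {
    let mut slots = BitmapSlots::new();
    let a = slots.allocate().unwrap();
    let b = slots.allocate().unwrap();
    assert_eq!((a, b), (0, 1));
    slots.free(a);
    assert_eq!(slots.allocate(), Some(0)); // freed slot is reused first
    println!("bitmap allocator ok");
}
```

The "precise control" advantage comes from the bitmap recording the exact occupancy of every slot, at the cost of a bit scan per allocation.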

🚀 Quick Start

Installation

Add this to your Cargo.toml:

[dependencies]
fastalloc = "1.0"

Basic Usage

use fastalloc::FixedPool;

fn main() {
    // Create a pool that can hold up to 1000 integers
    let pool = FixedPool::<i32>::new(1000).expect("Failed to create pool");
    
    // Allocate an integer from the pool
    let mut handle = pool.allocate(42).expect("Failed to allocate");
    
    // Use the allocated value
    *handle += 1;
    println!("Value: {}", *handle);
    
    // The handle is automatically returned to the pool when dropped
}

Thread-Safe Usage

use std::sync::Arc;
use fastalloc::ThreadSafePool;
use std::thread;

fn main() {
    // Create a thread-safe pool
    let pool = Arc::new(ThreadSafePool::<u64>::new(100).unwrap());
    
    let mut handles = vec![];
    
    for i in 0..10 {
        let pool = Arc::clone(&pool);
        handles.push(thread::spawn(move || {
            let mut value = pool.allocate(i).unwrap();
            *value *= 2;
            *value
        }));
    }
    
    for handle in handles {
        println!("Thread result: {}", handle.join().unwrap());
    }
}

Handle Semantics

Handles dereference to the pooled value and are returned to the pool automatically when dropped:

let mut handle = pool.allocate(42).unwrap();

// Use the value
assert_eq!(*handle, 42);
*handle = 100;
assert_eq!(*handle, 100);

// Automatically returned to pool when handle is dropped
drop(handle);
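This return-on-drop behavior is the standard RAII pattern built on Rust's Drop trait. A self-contained toy sketch of the idea (illustrative only; fastalloc's real handle and pool types differ):

```rust
use std::cell::RefCell;

/// Toy pool: flat storage plus a LIFO free list of slot indices.
struct ToyPool {
    storage: RefCell<Vec<i32>>,
    free: RefCell<Vec<usize>>,
}

/// RAII handle: holds a slot index and returns it to the pool on drop.
struct Handle<'a> {
    pool: &'a ToyPool,
    idx: usize,
}

impl ToyPool {
    fn with_capacity(n: usize) -> Self {
        ToyPool {
            storage: RefCell::new(vec![0; n]),
            // Reversed so slot 0 is popped first (LIFO).
            free: RefCell::new((0..n).rev().collect()),
        }
    }

    fn allocate(&self, value: i32) -> Option<Handle<'_>> {
        let idx = self.free.borrow_mut().pop()?; // None when pool is exhausted
        self.storage.borrow_mut()[idx] = value;
        Some(Handle { pool: self, idx })
    }
}

impl Handle<'_> {
    fn get(&self) -> i32 {
        self.pool.storage.borrow()[self.idx]
    }
}

impl Drop for Handle<'_> {
    /// Dropping the handle pushes its slot back onto the free list.
    fn drop(&mut self) {
        self.pool.free.borrow_mut().push(self.idx);
    }
}

fn main() {
    let pool = ToyPool::with_capacity(2);
    let h = pool.allocate(42).unwrap();
    assert_eq!(h.get(), 42);
    let first_idx = h.idx;
    drop(h); // slot returns to the free list
    let h2 = pool.allocate(7).unwrap();
    assert_eq!(h2.idx, first_idx); // the freed slot is reused immediately
}
```

LIFO reuse like this is also what gives pools their cache-locality benefit: the most recently freed slot is the most likely to still be in cache.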

🎯 Why Use Memory Pools?

Memory pools significantly improve performance in scenarios with frequent allocations:

Perfect Use Cases

Domain | Use Case | Why It Matters
🎮 Game Development | Entities, particles, physics objects | Maintain 60+ FPS by eliminating allocation stutter
🎵 Real-Time Systems | Audio buffers, robotics control loops | Predictable latency for hard real-time constraints
🌐 Web Servers | Request handlers, connection pooling | Handle 100K+ req/sec with minimal overhead
📊 Data Processing | Temporary objects in hot paths | 50-100x speedup in tight loops
🔬 Scientific Computing | Matrices, particles, graph nodes | Process millions of objects efficiently
📱 Embedded Systems | Sensor data, IoT devices | Predictable memory usage, no fragmentation
🤖 Machine Learning | Tensor buffers, batch processing | Reduce training time, optimize inference
💰 Financial Systems | Order books, market data | Ultra-low latency trading systems

⚡ Performance

Benchmark Results (criterion.rs, release mode with LTO):

Operation | fastalloc | Standard Heap | Improvement
Fixed pool allocation (i32) | ~3.5 ns | ~4.8 ns | 1.3-1.4x faster
Growing pool allocation | ~4.6 ns | ~4.8 ns | ~1.05x faster
Allocation reuse (LIFO) | ~7.2 ns | N/A | Excellent cache locality

See BENCHMARKS.md for detailed methodology and results.

When Pools Excel

Memory pools provide benefits beyond raw speed:

  1. Predictable Latency: No allocation spikes or fragmentation slowdowns
  2. Cache Locality: Objects stored contiguously improve cache hit rates
  3. Reduced Fragmentation: Eliminates long-term heap fragmentation
  4. Real-Time Guarantees: Bounded worst-case allocation time

Best use cases:

  • High allocation/deallocation churn (game entities, particles)
  • Real-time systems requiring bounded latency
  • Embedded systems with constrained memory
  • Long-running processes avoiding fragmentation

Note: Modern system allocators (jemalloc, mimalloc) are highly optimized. Pools excel in specific scenarios rather than universally. Always benchmark your specific workload.

Examples

Growing Pool with Configuration

use fastalloc::{GrowingPool, PoolConfig, GrowthStrategy};

let config = PoolConfig::builder()
    .capacity(100)
    .max_capacity(Some(1000))
    .growth_strategy(GrowthStrategy::Exponential { factor: 2.0 })
    .alignment(64) // Cache-line aligned
    .build()
    .unwrap();

let pool = GrowingPool::with_config(config).unwrap();

Thread-Safe Pool

use fastalloc::ThreadSafePool;
use std::sync::Arc;
use std::thread;

let pool = Arc::new(ThreadSafePool::<i32>::new(1000).unwrap());

let mut handles = vec![];
for i in 0..4 {
    let pool_clone = Arc::clone(&pool);
    handles.push(thread::spawn(move || {
        let handle = pool_clone.allocate(i * 100).unwrap();
        *handle
    }));
}

for handle in handles {
    println!("Result: {}", handle.join().unwrap());
}

Custom Initialization

use fastalloc::PoolConfig;

let config = PoolConfig::builder()
    .capacity(100)
    .reset_fn(
        || Vec::<u8>::with_capacity(1024), // constructor for new objects
        |v| v.clear(),                     // reset applied before reuse
    )
    .build()
    .unwrap();

Batch Allocation

use fastalloc::FixedPool;

let pool = FixedPool::new(1000).unwrap();

// Allocate multiple objects efficiently in one operation
let values = vec![1, 2, 3, 4, 5];
let handles = pool.allocate_batch(values).unwrap();

assert_eq!(handles.len(), 5);
// All handles automatically returned when dropped

Statistics Tracking

#[cfg(feature = "stats")]
{
    use fastalloc::FixedPool;
    
    let pool = FixedPool::<i32>::new(100).unwrap();
    
    // ... use pool ...
    
    let stats = pool.statistics();
    println!("Utilization: {:.1}%", stats.utilization_rate());
    println!("Total allocations: {}", stats.total_allocations);
}

🏊 Pool Types

Comparison Table

Pool Type | Thread Safety | Growth | Overhead | Best For
FixedPool | No | Fixed | Minimal | Single-threaded, predictable load
GrowingPool | No | Dynamic | Low | Variable workloads
ThreadLocalPool | ⚠️ Per-thread | Fixed | Minimal | High-throughput parallel
ThreadSafePool | Yes | Fixed | Medium | Shared state, moderate contention

FixedPool

Pre-allocated fixed-size pool with O(1) operations and zero fragmentation.

let pool = FixedPool::<i32>::new(1000).unwrap();

When to use: Known maximum capacity, need absolute predictability

GrowingPool

Dynamic pool that grows based on demand according to a configurable strategy.

let pool = GrowingPool::with_config(config).unwrap();

When to use: Variable load, want automatic scaling

ThreadLocalPool

Per-thread pool that avoids synchronization overhead.

let pool = ThreadLocalPool::<i32>::new(100).unwrap();

When to use: Rayon/parallel iterators, zero-contention needed
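Conceptually, a thread-local pool keeps one free list per thread, so no locking or atomics are ever needed. A minimal std-only sketch of the idea (not fastalloc's API), pooling byte buffers with thread_local!:

```rust
use std::cell::RefCell;

// Each thread gets its own stack of reusable buffers: zero contention.
thread_local! {
    static LOCAL_BUFFERS: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
}

/// Take a buffer from this thread's pool, or create one on a miss.
fn take_buffer() -> Vec<u8> {
    LOCAL_BUFFERS
        .with(|p| p.borrow_mut().pop())
        .unwrap_or_else(|| Vec::with_capacity(4096))
}

/// Clear a buffer (keeping its capacity) and return it to this thread's pool.
fn give_back(mut buf: Vec<u8>) {
    buf.clear();
    LOCAL_BUFFERS.with(|p| p.borrow_mut().push(buf));
}

fn main() {
    let mut buf = take_buffer();
    buf.extend_from_slice(b"hello");
    let cap = buf.capacity();
    give_back(buf);

    // The same thread gets the same backing allocation back.
    let reused = take_buffer();
    assert_eq!(reused.capacity(), cap);
    assert!(reused.is_empty());
}
```

The trade-off: buffers freed on one thread are invisible to other threads, which is exactly why this variant suits fork-join parallelism (e.g. Rayon) rather than producer/consumer patterns.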

ThreadSafePool

Lock-based concurrent pool safe for multi-threaded access.

let pool = ThreadSafePool::<i32>::new(1000).unwrap();

When to use: Shared pool across threads, moderate contention acceptable
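Conceptually, a thread-safe pool puts the shared free list behind a lock. A std-only toy sketch of that design (fastalloc's actual implementation may differ, e.g. using parking_lot or lock-free structures under the optional features):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Toy thread-safe pool: a mutex-guarded LIFO stack of reusable buffers.
#[derive(Clone)]
struct SharedPool {
    free: Arc<Mutex<Vec<Vec<u8>>>>,
}

impl SharedPool {
    fn new() -> Self {
        SharedPool { free: Arc::new(Mutex::new(Vec::new())) }
    }

    /// Pop a pooled buffer, or allocate a fresh one on a miss.
    fn take(&self) -> Vec<u8> {
        self.free
            .lock()
            .unwrap()
            .pop()
            .unwrap_or_else(|| Vec::with_capacity(1024))
    }

    /// Clear the buffer and push it back for any thread to reuse.
    fn give_back(&self, mut buf: Vec<u8>) {
        buf.clear();
        self.free.lock().unwrap().push(buf);
    }
}

fn main() {
    let pool = SharedPool::new();
    let mut workers = Vec::new();
    for i in 0..4u8 {
        let pool = pool.clone();
        workers.push(thread::spawn(move || {
            let mut buf = pool.take();
            buf.push(i);
            let len = buf.len();
            pool.give_back(buf); // buffer becomes reusable by other threads
            len
        }));
    }
    for w in workers {
        assert_eq!(w.join().unwrap(), 1);
    }
}
```

The lock is the source of the "Medium" overhead in the comparison table: every allocate/free serializes on it, which is fine at moderate contention but is what the thread-local variant avoids entirely.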

🎛️ Optional Features

Enable optional features in your Cargo.toml:

[dependencies]
fastalloc = { version = "1.0", features = ["stats", "serde", "parking_lot"] }

Available features:

Feature | Description | Performance Impact
std (default) | Standard library support | N/A
stats | Pool statistics & monitoring | ~2% overhead
serde | Serialization support | None when unused
parking_lot | Faster mutex (vs std::sync) | 10-20% faster locking
crossbeam | Lock-free data structures | 30-50% better under contention
tracing | Structured instrumentation | Minimal when disabled
lock-free | Experimental lock-free pool | 2-3x faster (requires crossbeam)

no_std Support

fastalloc works in no_std environments:

[dependencies]
fastalloc = { version = "1.0", default-features = false }

Benchmarks

Run benchmarks with:

cargo bench

Benchmark results are available in the target/criterion directory after running the benchmarks.

Documentation

API Reference

Full API documentation is available on docs.rs.

Examples

Explore the examples/ directory for more usage examples:

  • basic_usage.rs - Basic pool usage
  • thread_safe.rs - Thread-safe pooling
  • custom_allocator.rs - Implementing custom allocation strategies
  • embedded.rs - no_std usage example

Changelog

See CHANGELOG.md for a detailed list of changes in each version.

Contributing

We welcome contributions of all kinds! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.

How to Contribute

  1. Read our Code of Conduct
  2. Check out the open issues
  3. Fork the repository and create your feature branch
  4. Make your changes and add tests
  5. Ensure all tests pass and code is properly formatted
  6. Submit a pull request with a clear description of your changes

Development Workflow

# Clone the repository
git clone https://github.com/TIVerse/fastalloc.git
cd fastalloc

# Install development dependencies
rustup component add rustfmt clippy

# Run tests
cargo test --all-features

# Run benchmarks
cargo bench

# Run lints
cargo clippy --all-targets -- -D warnings
cargo fmt -- --check

# Check for unused dependencies
cargo +nightly udeps

# Check for security vulnerabilities
cargo audit

Security

Security is important to us. If you discover a security-related issue:

  1. Do NOT open a public GitHub issue
  2. Email the maintainer directly: eshanized@proton.me
  3. Include:
    • Detailed description of the vulnerability
    • Steps to reproduce
    • Potential impact
    • Suggested fix (if any)

We will acknowledge receipt within 48 hours and provide a timeline for a fix. Security issues will be prioritized and patched in expedited releases.

See SECURITY.md for our full security policy.

License

Licensed under either of:

  • Apache License, Version 2.0
  • MIT License

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Acknowledgments

  • The Rust community for creating an amazing ecosystem
  • All contributors who have helped improve this project
  • Inspired by various memory pooling techniques and existing implementations
  • Built with ❤️ and Rust

🚀 Who's Using fastalloc?

We're building a list of projects using fastalloc. If you're using it, please consider adding your project!

Open Source Projects:

  • Your project here! - Open a PR to add your project

Use Cases in Production:

  • Game engines (entity/component systems, particle effects)
  • Real-time audio processing pipelines
  • High-frequency trading systems
  • Embedded robotics control loops
  • IoT device firmware
  • Web server request pooling

Research & Education:

  • Memory management tutorials
  • Rust performance optimization courses
  • Embedded systems projects

Want to be listed? Open a PR or issue with your project details!

📚 Resources


Examples

  • examples/ - Working code examples:
    • basic_usage.rs - Getting started with FixedPool
    • thread_safe.rs - Concurrent pool usage
    • custom_allocator.rs - Custom allocation strategies
    • game_entities.rs - Game entity pooling example
    • particle_system.rs - High-performance particle system
    • async_usage.rs - Using pools with async/await
    • embedded.rs - no_std embedded example
    • statistics.rs - Pool monitoring and statistics

