A high-performance memory pooling library for Rust with type-safe handles and zero-cost abstractions
🚀 Up to 1.4x faster allocation with predictable latency and zero fragmentation
🛠 Perfect for: Game engines, real-time systems, embedded applications, and high-churn workloads
fastalloc is a memory pooling library that provides efficient, type-safe memory management with minimal overhead. It's designed for performance-critical applications where allocation speed and memory locality matter.
- ⚡ Blazing Fast: Significantly reduces allocation/deallocation overhead
- 🧠 Smart Memory Management: Reduces memory fragmentation and improves cache locality
- 🛡️ Memory Safe: Leverages Rust's type system for safety without sacrificing performance
- 🔄 Flexible: Multiple allocation strategies and pool types for different use cases
- 🌐 no_std Support: Works in embedded and bare-metal environments
Multiple Pool Types:
- Fixed-size pools for predictable memory usage
- Growing pools for dynamic workloads
- Thread-local and thread-safe variants
Advanced Allocation Strategies:
- Stack-based (LIFO) for maximum speed
- Free-list for better memory utilization
- Bitmap-based for precise control
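To make the trade-off concrete, here is a minimal, std-only sketch of the free-list technique — not fastalloc's internals, just the general idea: slots live in one contiguous buffer, and freed indices are pushed onto a list for O(1) reuse.

```rust
// Illustrative free-list pool sketch (not fastalloc's actual implementation).
struct FreeListPool<T> {
    slots: Vec<Option<T>>, // contiguous backing storage
    free: Vec<usize>,      // indices of vacant slots, reused LIFO
}

impl<T> FreeListPool<T> {
    fn with_capacity(cap: usize) -> Self {
        FreeListPool {
            slots: (0..cap).map(|_| None).collect(),
            free: (0..cap).rev().collect(), // slot 0 is handed out first
        }
    }

    /// O(1): pop a free index and store the value there.
    fn allocate(&mut self, value: T) -> Option<usize> {
        let idx = self.free.pop()?;
        self.slots[idx] = Some(value);
        Some(idx)
    }

    /// O(1): take the value back and mark the slot free again.
    fn release(&mut self, idx: usize) -> Option<T> {
        let value = self.slots[idx].take()?;
        self.free.push(idx);
        Some(value)
    }
}

fn main() {
    let mut pool = FreeListPool::with_capacity(2);
    let a = pool.allocate(10).unwrap();
    let b = pool.allocate(20).unwrap();
    assert!(pool.allocate(30).is_none()); // exhausted: no hidden heap growth
    pool.release(a);
    let c = pool.allocate(30).unwrap(); // the freed slot is reused
    assert_eq!(a, c);
    println!("reused slot {}", c);
    let _ = b;
}
```

Because slots are never returned to the system allocator, allocation cost stays bounded and the backing memory never fragments.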
Performance Optimizations:
- Lock-free operations where possible
- Cache-line alignment
- Zero-copy access patterns
Developer Experience:
- Type-safe handles with RAII
- Detailed metrics and statistics
- Comprehensive documentation with examples
- Extensive test coverage
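The RAII-handle idea can be sketched in a few lines of plain Rust — a hypothetical `Pool`/`Handle` pair, not fastalloc's actual types: dropping the handle returns its slot to the pool, so objects cannot leak or be returned twice.

```rust
use std::cell::RefCell;

// Conceptual sketch of the RAII-handle pattern (not fastalloc's real API).
struct Pool {
    free: RefCell<Vec<usize>>, // vacant slot indices
}

struct Handle<'a> {
    idx: usize,
    pool: &'a Pool,
}

impl Pool {
    fn new(cap: usize) -> Self {
        Pool { free: RefCell::new((0..cap).collect()) }
    }

    fn allocate(&self) -> Option<Handle<'_>> {
        let idx = self.free.borrow_mut().pop()?;
        Some(Handle { idx, pool: self })
    }

    fn available(&self) -> usize {
        self.free.borrow().len()
    }
}

impl Drop for Handle<'_> {
    fn drop(&mut self) {
        // The slot goes back automatically; no explicit `free` call needed.
        self.pool.free.borrow_mut().push(self.idx);
    }
}

fn main() {
    let pool = Pool::new(4);
    {
        let _h = pool.allocate().unwrap();
        assert_eq!(pool.available(), 3);
    } // `_h` dropped here: its slot is returned
    assert_eq!(pool.available(), 4);
    println!("all slots returned");
}
```

The borrow checker additionally ties each handle's lifetime to the pool it came from, which is what prevents use-after-free at compile time.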
A memory pooling library for Rust with type-safe handles and RAII-based memory management. Provides 1.3-1.4x faster allocation than the standard heap allocator, with predictable latency, zero fragmentation, and excellent cache locality.
Version 1.5.0 - Production-ready release with performance optimizations and comprehensive documentation. Repository: TIVerse/fastalloc.
🚀 Perfect for: Real-time systems, game engines, embedded devices, and high-churn workloads
💡 Key Benefits: Predictable latency, zero fragmentation, improved cache locality, deterministic behavior
Documentation:
- API Documentation - Complete API reference
- BENCHMARKS.md - Real benchmark results and methodology
- SAFETY.md - Safety guarantees and unsafe code documentation
- CONTRIBUTING.md - Contribution guidelines
- 🚀 Multiple pool types: Fixed-size, growing, thread-local, and thread-safe pools
- 🔒 Type-safe handles: RAII-based handles that automatically return objects to the pool
- ⚙️ Flexible configuration: Builder pattern with extensive customization options
- 📊 Optional statistics: Track allocation patterns and pool usage
- 🔧 Multiple allocation strategies: Stack (LIFO), free-list, and bitmap allocators
- 🌐 no_std support: Works in embedded and bare-metal environments
- ⚡ Zero-copy: Direct memory access without extra indirection
- 🛡️ Memory safe: Leverage Rust's type system to prevent leaks and use-after-free
- 🎯 Cache-friendly: Configurable alignment for optimal CPU cache utilization
- 📦 Small footprint: Minimal dependencies, < 3K SLOC core library
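The cache-alignment point above can be demonstrated with plain Rust, independent of fastalloc: `#[repr(align(64))]` pads each slot to a full cache line, so adjacent slots written by different threads never share a line (avoiding false sharing).

```rust
use std::mem;

// Sketch: cache-line-aligned slots via a plain Rust attribute.
// A 64-byte alignment matches the cache-line size of most x86-64 CPUs.
#[repr(align(64))]
struct CacheAligned<T>(T);

fn main() {
    // Alignment is forced up to 64 bytes...
    assert_eq!(mem::align_of::<CacheAligned<u64>>(), 64);
    // ...and the size is rounded up too, so each slot owns a whole line.
    assert_eq!(mem::size_of::<CacheAligned<u64>>(), 64);

    let slots: Vec<CacheAligned<u64>> = (0..4).map(CacheAligned).collect();
    let base = slots.as_ptr() as usize;
    assert_eq!(base % 64, 0); // Vec honors the element type's alignment
    println!("4 slots, one cache line each");
}
```

This is the same effect the `.alignment(64)` builder option shown later aims at, applied at the pool level instead of per type.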
Add this to your Cargo.toml:
```toml
[dependencies]
fastalloc = "1.0"
```

Basic usage:

```rust
use fastalloc::FixedPool;

fn main() {
    // Create a pool that can hold up to 1000 integers
    let pool = FixedPool::<i32>::new(1000).expect("Failed to create pool");

    // Allocate an integer from the pool
    let mut handle = pool.allocate(42).expect("Failed to allocate");

    // Use the allocated value
    *handle += 1;
    println!("Value: {}", *handle);

    // The handle is automatically returned to the pool when dropped
}
```

Sharing a pool across threads:

```rust
use std::sync::Arc;
use std::thread;

use fastalloc::ThreadSafePool;

fn main() {
    // Create a thread-safe pool
    let pool = Arc::new(ThreadSafePool::<u64>::new(100).unwrap());
    let mut handles = vec![];

    for i in 0..10 {
        let pool = Arc::clone(&pool);
        handles.push(thread::spawn(move || {
            let mut value = pool.allocate(i).unwrap();
            *value *= 2;
            *value
        }));
    }

    for handle in handles {
        println!("Thread result: {}", handle.join().unwrap());
    }
}
```

Handle lifecycle:

```rust
let mut handle = pool.allocate(42).unwrap();

// Use the value
assert_eq!(*handle, 42);
*handle = 100;
assert_eq!(*handle, 100);

// Automatically returned to the pool when the handle is dropped
drop(handle);
```

Memory pools significantly improve performance in scenarios with frequent allocations:
| Domain | Use Case | Why It Matters |
|---|---|---|
| 🎮 Game Development | Entities, particles, physics objects | Maintain 60+ FPS by eliminating allocation stutter |
| 🎵 Real-Time Systems | Audio buffers, robotics control loops | Predictable latency for hard real-time constraints |
| 🌐 Web Servers | Request handlers, connection pooling | Handle 100K+ req/sec with minimal overhead |
| 📊 Data Processing | Temporary objects in hot paths | Avoid allocator overhead in tight loops |
| 🔬 Scientific Computing | Matrices, particles, graph nodes | Process millions of objects efficiently |
| 📱 Embedded Systems | Sensor data, IoT devices | Predictable memory usage, no fragmentation |
| 🤖 Machine Learning | Tensor buffers, batch processing | Reduce training time, optimize inference |
| 💰 Financial Systems | Order books, market data | Ultra-low latency trading systems |
Benchmark Results (criterion.rs, release mode with LTO):
| Operation | fastalloc | Standard Heap | Improvement |
|---|---|---|---|
| Fixed pool allocation (i32) | ~3.5 ns | ~4.8 ns | 1.3-1.4x faster |
| Growing pool allocation | ~4.6 ns | ~4.8 ns | ~1.05x faster |
| Allocation reuse (LIFO) | ~7.2 ns | N/A | Excellent cache locality |
See BENCHMARKS.md for detailed methodology and results.
Memory pools provide benefits beyond raw speed:
- Predictable Latency: No allocation spikes or fragmentation slowdowns
- Cache Locality: Objects stored contiguously improve cache hit rates
- Reduced Fragmentation: Eliminates long-term heap fragmentation
- Real-Time Guarantees: Bounded worst-case allocation time
Best use cases:
- High allocation/deallocation churn (game entities, particles)
- Real-time systems requiring bounded latency
- Embedded systems with constrained memory
- Long-running processes avoiding fragmentation
Note: Modern system allocators (jemalloc, mimalloc) are highly optimized. Pools excel in specific scenarios rather than universally. Always benchmark your specific workload.
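In that spirit, here is a deliberately tiny, std-only timing sketch contrasting a fresh heap allocation per iteration with reuse of a pre-allocated slot. It is illustrative only (results vary by machine and allocator); use criterion, as fastalloc's own benchmarks do, for real measurements.

```rust
use std::time::Instant;

// Toy micro-benchmark: fresh Box per iteration vs. reusing one slot.
// Not a rigorous comparison; it only shows the shape of such a test.
fn main() {
    const N: u64 = 100_000;

    // Fresh heap allocation (and deallocation) every iteration.
    let t = Instant::now();
    let mut sum = 0u64;
    for i in 0..N {
        let b = Box::new(i); // allocate + drop each pass
        sum = sum.wrapping_add(*b);
    }
    let heap = t.elapsed();

    // Pool-style reuse: one slab slot, recycled each pass.
    let mut slab = vec![0u64; 1];
    let t = Instant::now();
    let mut sum2 = 0u64;
    for i in 0..N {
        slab[0] = i; // reuse the same slot, no allocator call
        sum2 = sum2.wrapping_add(slab[0]);
    }
    let pooled = t.elapsed();

    assert_eq!(sum, sum2); // both loops did the same work
    println!("fresh boxes: {:?}, reused slot: {:?}", heap, pooled);
}
```

Note the sanity check that both variants compute the same result; without it, the optimizer may elide one loop and the comparison becomes meaningless.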
Custom growth strategy and alignment:

```rust
use fastalloc::{GrowingPool, GrowthStrategy, PoolConfig};

let config = PoolConfig::builder()
    .capacity(100)
    .max_capacity(Some(1000))
    .growth_strategy(GrowthStrategy::Exponential { factor: 2.0 })
    .alignment(64) // Cache-line aligned
    .build()
    .unwrap();

let pool = GrowingPool::with_config(config).unwrap();
```

Thread-safe pooling:

```rust
use std::sync::Arc;
use std::thread;

use fastalloc::ThreadSafePool;

let pool = Arc::new(ThreadSafePool::<i32>::new(1000).unwrap());
let mut handles = vec![];

for i in 0..4 {
    let pool_clone = Arc::clone(&pool);
    handles.push(thread::spawn(move || {
        let handle = pool_clone.allocate(i * 100).unwrap();
        *handle
    }));
}

for handle in handles {
    println!("Result: {}", handle.join().unwrap());
}
```

Custom initialization and reset:

```rust
use fastalloc::{InitializationStrategy, PoolConfig};

let config = PoolConfig::builder()
    .capacity(100)
    .reset_fn(
        || Vec::with_capacity(1024),
        |v| v.clear(),
    )
    .build()
    .unwrap();
```

Batch allocation:

```rust
use fastalloc::FixedPool;

let pool = FixedPool::new(1000).unwrap();

// Allocate multiple objects efficiently in one operation
let values = vec![1, 2, 3, 4, 5];
let handles = pool.allocate_batch(values).unwrap();
assert_eq!(handles.len(), 5);
// All handles are automatically returned when dropped
```

Pool statistics (requires the `stats` feature):

```rust
#[cfg(feature = "stats")]
{
    use fastalloc::FixedPool;

    let pool = FixedPool::<i32>::new(100).unwrap();
    // ... use pool ...

    let stats = pool.statistics();
    println!("Utilization: {:.1}%", stats.utilization_rate());
    println!("Total allocations: {}", stats.total_allocations);
}
```

| Pool Type | Thread Safety | Growth | Overhead | Best For |
|---|---|---|---|---|
| FixedPool | ❌ | Fixed | Minimal | Single-threaded, predictable load |
| GrowingPool | ❌ | Dynamic | Low | Variable workloads |
| ThreadLocalPool | ✅ (per-thread) | Fixed | Minimal | High-throughput parallel |
| ThreadSafePool | ✅ | Fixed | Medium | Shared state, moderate contention |
Pre-allocated fixed-size pool with O(1) operations and zero fragmentation.
```rust
let pool = FixedPool::<i32>::new(1000).unwrap();
```

When to use: Known maximum capacity, need absolute predictability
Dynamic pool that grows based on demand according to a configurable strategy.
```rust
let pool = GrowingPool::with_config(config).unwrap();
```

When to use: Variable load, want automatic scaling
Per-thread pool that avoids synchronization overhead.
```rust
let pool = ThreadLocalPool::<i32>::new(100).unwrap();
```

When to use: Rayon/parallel iterators, zero contention needed
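The pattern behind a thread-local pool can be illustrated with std's `thread_local!` macro alone — a conceptual sketch with hypothetical `checkout`/`checkin` helpers, not fastalloc's API: each thread recycles its own buffers, so no lock is ever taken.

```rust
use std::cell::RefCell;

// Sketch of thread-local pooling: every thread keeps a private stack of
// returned buffers and reuses them, avoiding all synchronization.
thread_local! {
    static BUFFERS: RefCell<Vec<Vec<u8>>> = RefCell::new(Vec::new());
}

fn checkout() -> Vec<u8> {
    // Reuse a returned buffer if one exists, otherwise allocate fresh.
    BUFFERS
        .with(|b| b.borrow_mut().pop())
        .unwrap_or_else(|| Vec::with_capacity(1024))
}

fn checkin(mut buf: Vec<u8>) {
    buf.clear(); // reset contents but keep the capacity
    BUFFERS.with(|b| b.borrow_mut().push(buf));
}

fn main() {
    let mut buf = checkout();
    buf.extend_from_slice(b"hello");
    let cap = buf.capacity();
    checkin(buf);

    let again = checkout(); // the same allocation comes back on this thread
    assert!(again.is_empty());
    assert_eq!(again.capacity(), cap);
    println!("buffer reused, capacity {}", cap);
}
```

A typed pool builds on the same idea but adds handles, capacity limits, and reset hooks on top.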
Lock-based concurrent pool safe for multi-threaded access.
```rust
let pool = ThreadSafePool::<i32>::new(1000).unwrap();
```

When to use: Shared pool across threads, moderate contention acceptable
Enable optional features in your Cargo.toml:
```toml
[dependencies]
fastalloc = { version = "1.0", features = ["stats", "serde", "parking_lot"] }
```

Available features:
| Feature | Description | Performance Impact |
|---|---|---|
| `std` (default) | Standard library support | N/A |
| `stats` | Pool statistics & monitoring | ~2% overhead |
| `serde` | Serialization support | None when unused |
| `parking_lot` | Faster mutex (vs `std::sync`) | 10-20% faster locking |
| `crossbeam` | Lock-free data structures | 30-50% better under contention |
| `tracing` | Structured instrumentation | Minimal when disabled |
| `lock-free` | Experimental lock-free pool | 2-3x faster (requires `crossbeam`) |
fastalloc works in no_std environments:
```toml
[dependencies]
fastalloc = { version = "1.0", default-features = false }
```

Run benchmarks with:

```bash
cargo bench
```

Benchmark results are available in the target/criterion directory after running the benchmarks.
Full API documentation is available on docs.rs.
Explore the examples/ directory for more usage examples:
- `basic_usage.rs` - Basic pool usage
- `thread_safe.rs` - Thread-safe pooling
- `custom_allocator.rs` - Implementing custom allocation strategies
- `embedded.rs` - no_std usage example
See CHANGELOG.md for a detailed list of changes in each version.
We welcome contributions of all kinds! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
- Read our Code of Conduct
- Check out the open issues
- Fork the repository and create your feature branch
- Make your changes and add tests
- Ensure all tests pass and code is properly formatted
- Submit a pull request with a clear description of your changes
```bash
# Clone the repository
git clone https://github.com/TIVerse/fastalloc.git
cd fastalloc

# Install development dependencies
rustup component add rustfmt clippy

# Run tests
cargo test --all-features

# Run benchmarks
cargo bench

# Run lints
cargo clippy --all-targets -- -D warnings
cargo fmt -- --check

# Check for unused dependencies
cargo +nightly udeps

# Check for security vulnerabilities
cargo audit
```

Security is important to us. If you discover any security-related issues:
- Do NOT open a public GitHub issue
- Email the maintainer directly: eshanized@proton.me
- Include:
- Detailed description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)
We will acknowledge receipt within 48 hours and provide a timeline for a fix. Security issues will be prioritized and patched in expedited releases.
See SECURITY.md for our full security policy.
Licensed under either of:
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
- The Rust community for creating an amazing ecosystem
- All contributors who have helped improve this project
- Inspired by various memory pooling techniques and existing implementations
- Built with ❤️ and Rust
We're building a list of projects using fastalloc. If you're using it, please consider adding your project!
Open Source Projects:
- Your project here! - Open a PR to add your project
Use Cases in Production:
- Game engines (entity/component systems, particle effects)
- Real-time audio processing pipelines
- High-frequency trading systems
- Embedded robotics control loops
- IoT device firmware
- Web server request pooling
Research & Education:
- Memory management tutorials
- Rust performance optimization courses
- Embedded systems projects
Want to be listed? Open a PR or issue with your project details!
- API Documentation - Complete API reference with examples
- BENCHMARKS.md - Real benchmark results, methodology, and library comparisons
- SAFETY.md - Memory safety guarantees and unsafe code documentation
- ARCHITECTURE.md - Internal design and implementation details
- ERROR_HANDLING.md - Pool exhaustion strategies and error recovery
- CHANGELOG.md - Version history and breaking changes
- CONTRIBUTING.md - How to contribute to the project
- SECURITY.md - Security policy and vulnerability reporting
- examples/ - Working code examples:
  - `basic_usage.rs` - Getting started with FixedPool
  - `thread_safe.rs` - Concurrent pool usage
  - `custom_allocator.rs` - Custom allocation strategies
  - `game_entities.rs` - Game entity pooling example
  - `particle_system.rs` - High-performance particle system
  - `async_usage.rs` - Using pools with async/await
  - `embedded.rs` - no_std embedded example
  - `statistics.rs` - Pool monitoring and statistics
See CHANGELOG.md for version history.