Oxcache is a high-performance, production-grade two-level caching library for Rust, providing an L1 (Moka in-memory cache) + L2 (Redis distributed cache) architecture.
| Feature | Description |
|---------|-------------|
| Extreme Performance | L1 in nanoseconds |
| Zero-Code Changes | One-line cache enable |
| Auto Recovery | Redis fault degradation |
| Multi-Instance Sync | Based on Pub/Sub |
| Batch Optimization | Smart batch writes |
- 🚀 Extreme Performance: L1 nanosecond response (P99 < 100ns), L2 millisecond response (P99 < 5ms)
- 🎯 Zero-Code Changes: Enable caching with a single `#[cached]` macro
- 🔄 Auto Recovery: Automatic degradation on Redis failure, WAL replay on recovery
- 🌐 Multi-Instance Sync: Pub/Sub + version-based invalidation synchronization
- ⚡ Batch Optimization: Intelligent batch writes for significantly improved throughput
- 🛡️ Production Grade: Complete observability, health checks, chaos testing verified
Add `oxcache` to your `Cargo.toml`:

```toml
[dependencies]
oxcache = "0.1"
```

Note: `tokio` and `serde` are already included by default. If you need minimal dependencies, you can use `oxcache = { version = "0.1", default-features = false }` and add them manually.
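For the minimal-dependency route, the resulting manifest might look like this (the feature flags below are illustrative, not prescribed by oxcache; pick whatever your application already uses):

```toml
[dependencies]
oxcache = { version = "0.1", default-features = false }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
serde = { version = "1", features = ["derive"] }
```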
Create a `config.toml` file:

```toml
[global]
default_ttl = 3600
health_check_interval = 30
serialization = "json"
enable_metrics = true

[services.user_cache]
cache_type = "two-level"  # "l1" | "l2" | "two-level"
ttl = 600

[services.user_cache.l1]
max_capacity = 10000
ttl = 300  # L1 TTL must be <= L2 TTL
tti = 180
initial_capacity = 1000

[services.user_cache.l2]
mode = "standalone"  # "standalone" | "sentinel" | "cluster"
connection_string = "redis://127.0.0.1:6379"

[services.user_cache.two_level]
write_through = true
promote_on_hit = true
enable_batch_write = true
batch_size = 100
batch_interval_ms = 50
```
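Each `[services.*]` entry declares its own `cache_type`, so simpler services need fewer settings. For example, an L1-only session cache might look like this (the values are illustrative):

```toml
[services.session_cache]
cache_type = "l1"
ttl = 60

[services.session_cache.l1]
max_capacity = 50000
tti = 30
```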
Then enable caching with the `#[cached]` macro:

```rust
use oxcache::macros::cached;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Clone, Debug)]
struct User {
    id: u64,
    name: String,
}

// One-line cache enable
#[cached(service = "user_cache", ttl = 600)]
async fn get_user(id: u64) -> Result<User, String> {
    // Simulate slow database query
    tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    Ok(User {
        id,
        name: format!("User {}", id),
    })
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize cache (from config file)
    oxcache::init("config.toml").await?;

    // First call: execute function logic + cache result (~100ms)
    let user = get_user(1).await?;
    println!("First call: {:?}", user);

    // Second call: return directly from cache (~0.1ms)
    let cached_user = get_user(1).await?;
    println!("Cached call: {:?}", cached_user);

    Ok(())
}
```
You can also call the cache client directly when you need finer control:

```rust
use oxcache::{get_client, CacheOps};
use serde::{Deserialize, Serialize};

// Example payload type (any Serialize + Deserialize type works)
#[derive(Serialize, Deserialize, Clone, Debug)]
struct MyData {
    value: String,
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    oxcache::init("config.toml").await?;
    let client = get_client("user_cache")?;
    let my_data = MyData { value: "hello".into() };

    // Standard operation: write to both L1 and L2
    client.set("key", &my_data, Some(300)).await?;
    let data: MyData = client.get("key").await?.unwrap();
    println!("Got: {:?}", data);

    // Write to L1 only (temporary data)
    client.set_l1_only("temp_key", &my_data, Some(60)).await?;

    // Write to L2 only (shared data)
    client.set_l2_only("shared_key", &my_data, Some(3600)).await?;

    // Delete
    client.delete("key").await?;

    Ok(())
}
```
Basic function caching:

```rust
#[cached(service = "user_cache", ttl = 600)]
async fn get_user_profile(user_id: u64) -> Result<UserProfile, Error> {
    database::query_user(user_id).await
}
```

Custom cache key template:

```rust
#[cached(
    service = "api_cache",
    ttl = 300,
    key = "api_{endpoint}_{version}"
)]
async fn fetch_api_data(endpoint: String, version: u32) -> Result<ApiResponse, Error> {
    http_client::get(&format!("/api/{}/{}", endpoint, version)).await
}
```

L1-only caching:

```rust
#[cached(service = "session_cache", cache_type = "l1", ttl = 60)]
async fn get_user_session(session_id: String) -> Result<Session, Error> {
    session_store::load(session_id).await
}
```
The overall architecture looks like this:

```
┌─────────────────────────────────────────────────────────┐
│                    Application Code                     │
│                    (#[cached] Macro)                    │
└──────────────────────────┬──────────────────────────────┘
                           │
                           ↓
┌─────────────────────────────────────────────────────────┐
│                      CacheManager                       │
│           (Service Registry + Health Monitor)           │
└───┬─────────────────────────────────────────────────┬───┘
    │                                                 │
    ↓                                                 ↓
┌────────────────┐                          ┌──────────────┐
│ TwoLevelClient │                          │ L1OnlyClient │
│                │                          │ L2OnlyClient │
└───┬────────┬───┘                          └──────────────┘
    │        │
    ↓        ↓
┌────────┐  ┌──────────────────────────────┐
│   L1   │  │              L2              │
│ (Moka) │  │           (Redis)            │
└────────┘  └──────────────────────────────┘
```
- L1: In-process high-speed cache using an LRU/TinyLFU eviction strategy
- L2: Distributed shared cache supporting Sentinel/Cluster modes
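To make the two-level read path concrete, here is a minimal sketch of a lookup with `promote_on_hit` semantics. The function and maps below are illustrative stand-ins, not oxcache's internals:

```rust
use std::collections::HashMap;

// Illustrative two-level lookup (hypothetical helper, not oxcache's API).
// `l1` stands in for the in-process Moka cache, `l2` for Redis.
fn two_level_get(
    l1: &mut HashMap<String, String>,
    l2: &HashMap<String, String>,
    key: &str,
) -> Option<String> {
    // 1. Check L1 first: in-process, nanosecond-scale.
    if let Some(value) = l1.get(key) {
        return Some(value.clone());
    }
    // 2. Miss in L1: fall back to L2, which is shared across instances.
    let value = l2.get(key)?.clone();
    // 3. promote_on_hit = true: copy the L2 value into L1 so the
    //    next read for this key is served locally.
    l1.insert(key.to_string(), value.clone());
    Some(value)
}
```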
Test environment: M1 Pro, 16GB RAM, macOS

```
Single-thread Latency Test (P99):
├── L1 Cache:  ~50ns
├── L2 Cache:  ~1ms
└── Database:  ~10ms

Throughput Test (batch_size=100):
├── Single Write: ~10K ops/s
└── Batch Write:  ~50K ops/s
```
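The batch-write gain comes from coalescing many individual writes into one Redis round trip. Here is a rough sketch of interval-based batching (illustrative only, not oxcache's implementation); it trades up to `batch_interval_ms` of write latency for far fewer round trips:

```rust
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

// Buffers incoming writes and flushes them together, either when the
// buffer reaches batch_size or when the batch interval elapses.
async fn batch_writer(mut rx: mpsc::Receiver<(String, String)>) {
    let mut buffer: Vec<(String, String)> = Vec::with_capacity(100);
    let mut ticker = interval(Duration::from_millis(50)); // batch_interval_ms
    loop {
        tokio::select! {
            maybe_item = rx.recv() => match maybe_item {
                Some(item) => {
                    buffer.push(item);
                    if buffer.len() >= 100 { // batch_size
                        flush(&mut buffer).await;
                    }
                }
                // Channel closed: flush what's left and stop.
                None => { flush(&mut buffer).await; break; }
            },
            _ = ticker.tick() => flush(&mut buffer).await,
        }
    }
}

async fn flush(buffer: &mut Vec<(String, String)>) {
    if buffer.is_empty() { return; }
    // A real implementation would issue one pipelined batch of SETs
    // to Redis here; this sketch just drains the buffer.
    println!("flushing {} writes in one batch", buffer.len());
    buffer.clear();
}
```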
- ✅ Single-flight (prevents cache stampedes; see the sketch after this list)
- ✅ WAL (Write-Ahead Log) persistence
- ✅ Automatic degradation on Redis failure
- ✅ Graceful shutdown mechanism
- ✅ Health checks and auto-recovery
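For context, single-flight means that when many concurrent requests miss on the same key, only one of them executes the expensive load while the rest await its result. A minimal sketch of the idea using tokio (illustrative, not oxcache's implementation):

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::{Mutex, OnceCell};

// Concurrent callers for the same key share one in-flight computation.
#[derive(Clone, Default)]
struct SingleFlight {
    inflight: Arc<Mutex<HashMap<String, Arc<OnceCell<String>>>>>,
}

impl SingleFlight {
    async fn get_or_load<F, Fut>(&self, key: &str, load: F) -> String
    where
        F: FnOnce() -> Fut,
        Fut: std::future::Future<Output = String>,
    {
        // Register (or join) the in-flight slot for this key.
        let cell = {
            let mut map = self.inflight.lock().await;
            map.entry(key.to_string())
                .or_insert_with(|| Arc::new(OnceCell::new()))
                .clone()
        };
        // Only the first caller runs `load`; the rest await the same result.
        let value = cell.get_or_init(load).await.clone();
        // Clean up so a later miss recomputes fresh data.
        self.inflight.lock().await.remove(key);
        value
    }
}
```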
Pull Requests and Issues are welcome!
For release history, see CHANGELOG.md.
This project is licensed under the MIT License. See the LICENSE file.
If this project helps you, please give a ⭐ Star to show support!
Made with ❤️ by oxcache Team