A blazing-fast, distributed system combining Key-Value Store, Message Queue, Stream Processing, and Wide-Column Database in a single unified platform.
| Component | Throughput | Latency | Notes |
|---|---|---|---|
| KV Store | 319K reads/sec | 3.1μs | ~3x faster than Redis |
| KV Store | 151K writes/sec | 6.6μs | Disk-backed durability |
| Message Queue | 104K push/sec | 9.6μs | Unified port with KV |
| Message Queue | 100K pop/sec | 10μs | BadgerDB persistence |
| Stream | High throughput | Low latency | Kafka-like pub/sub |
| Database | 76K inserts/sec | 13μs | Cassandra-like (CQL) |
KV Store:
- ✅ 319K ops/sec read throughput
- ✅ 151K ops/sec write throughput
- ✅ Sub-10μs latency
- ✅ Atomic batch operations (MSET/MGET/MDEL) - see the sketch after this list
- ✅ Dual storage: Disk (durable) or Memory (fastest)
- ✅ Text + Binary protocols with auto-detection
- ✅ Distributed clustering with Raft consensus
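A minimal batch-operations sketch, assuming the unified Go client from the Quick Start below. `MSet`/`MGet` appear there; `MDel` is assumed to mirror them, so treat that method name as an assumption rather than a confirmed API:

```go
// Batch write, read, and delete. MDel is an assumed method name that
// mirrors MSet/MGet; only MSet/MGet are shown in the Quick Start.
client.KV.MSet([]string{"k1", "k2"}, [][]byte{[]byte("v1"), []byte("v2")})
values, _ := client.KV.MGet([]string{"k1", "k2"})
fmt.Printf("fetched %d values\n", len(values))
client.KV.MDel([]string{"k1", "k2"}) // assumed batch-delete counterpart
```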
Message Queue:
- ✅ 104K ops/sec push throughput
- ✅ 100K ops/sec pop throughput
- ✅ Unified Port: Runs on same port as KV (7380)
- ✅ Durable: Backed by BadgerDB
- ✅ Atomic: Crash-safe metadata management
- ✅ Multiple queues with independent operations - see the sketch after this list
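A short sketch of independent queues, using only the `Push`/`Pop` calls from the Quick Start below (`client` is the unified client created there):

```go
// Two queues share one connection but operate independently:
// popping from one never affects the other.
client.Queue.Push("emails", []byte(`{"to":"a@example.com"}`))
client.Queue.Push("thumbnails", []byte(`{"image":"img_42.png"}`))

email, _ := client.Queue.Pop("emails") // "thumbnails" is untouched
fmt.Printf("email job: %s\n", email)
```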
Stream Processing:
- ✅ Kafka-like pub/sub messaging
- ✅ Partitioned topics for scalability
- ✅ Consumer groups with automatic rebalancing
- ✅ Offset management for reliable delivery
- ✅ Retention policies for automatic cleanup
- ✅ At-least-once delivery semantics - see the sketch after this list
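What at-least-once means in practice: commit an offset only after the message is handled, so anything uncommitted is redelivered on the next `Consume`. A sketch using the `Consume`/`Commit` calls from the Quick Start below; `handleEvent` is a hypothetical application function:

```go
// At-least-once delivery: a crash or error before Commit means the
// message will be consumed again, so handlers should tolerate duplicates.
messages, _ := client.Stream.Consume("events", "processors", "worker-1", 10)
for _, msg := range messages {
	if err := handleEvent(msg.Value); err != nil { // handleEvent is hypothetical
		break // no Commit: this offset is redelivered next time
	}
	// Commit the next offset once processing has succeeded
	client.Stream.Commit("events", "processors", msg.Partition, msg.Offset+1)
}
```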
Database:
- ✅ 76K inserts/sec throughput
- ✅ 13μs average latency
- ✅ Cassandra-like architecture (Partition/Clustering keys)
- ✅ FQL (Flin Query Language) schema support (CQL-compatible)
- ✅ Prisma-like fluent query builder
- ✅ Efficient deep pagination with secondary indexes - see the sketch after this list
- ✅ Flexible storage supporting both structured and semi-structured data
- ✅ ACID transactions via BadgerDB
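Deep pagination with the fluent builder, sketched from the `Query`/`Where`/`OrderBy`/`Skip`/`Take`/`Exec` calls in the Quick Start below; the page size and `process` function are illustrative:

```go
// Page through all active users, 100 at a time, until no rows remain.
const pageSize = 100
for page := 0; ; page++ {
	users, err := client.DB.Query("users").
		Where("status", flin.Eq, "active").
		OrderBy("created_at", flin.Desc).
		Skip(page * pageSize).
		Take(pageSize).
		Exec()
	if err != nil || len(users) == 0 {
		break
	}
	process(users) // hypothetical per-page handler
}
```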
Flin uses a modular, layered architecture:
```
┌───────────────────────────────────────────────┐
│ Client SDKs (Go, Python, etc.)                │
└───────────────────────────────────────────────┘
                       ▼
┌───────────────────────────────────────────────┐
│ Binary Protocol (Auto-detection)              │
└───────────────────────────────────────────────┘
                       ▼
┌───────────────────────────────────────────────┐
│ Server Layer (Hybrid: Fast Path + Workers)    │
│ ├── KV Handlers                               │
│ ├── Queue Handlers                            │
│ ├── Stream Handlers                           │
│ └── Database Handlers                         │
└───────────────────────────────────────────────┘
                       ▼
┌───────────────────────────────────────────────┐
│ High-Level Abstraction Layer                  │
│ ├── internal/kv (KV operations)               │
│ ├── internal/queue (Queue operations)         │
│ ├── internal/stream (Stream operations)       │
│ └── internal/db (Database operations)         │
└───────────────────────────────────────────────┘
                       ▼
┌───────────────────────────────────────────────┐
│ Storage Layer (BadgerDB)                      │
│ ├── internal/storage/kv.go                    │
│ ├── internal/storage/queue.go                 │
│ ├── internal/storage/stream.go                │
│ └── internal/storage/db.go                    │
└───────────────────────────────────────────────┘
                       ▼
┌───────────────────────────────────────────────┐
│ ClusterKit (Raft Consensus)                   │
│ ├── Leader Election                           │
│ ├── Partition Management                      │
│ └── Replication                               │
└───────────────────────────────────────────────┘
```
Single Node:
```bash
cd docker/single && ./run.sh
```

3-Node Cluster:
```bash
cd docker/cluster && ./run.sh
```

Both scripts automatically:
- Start the node(s)
- Run performance benchmarks
- Show throughput metrics
- Leave the cluster running for testing
See docker/README.md for details.
```bash
git clone https://github.com/skshohagmiah/flin
cd flin
go build -o bin/flin-server ./cmd/server
```

```bash
# Single node with all features
./bin/flin-server \
  -node-id=node1 \
  -http=:8080 \
  -raft=:9080 \
  -port=:7380 \
  -data=./data/node1 \
  -workers=256

# Join existing cluster
./bin/flin-server \
  -node-id=node2 \
  -http=:8081 \
  -raft=:9081 \
  -port=:7381 \
  -data=./data/node2 \
  -join=localhost:8080
```

Flin provides a single, unified client for all operations.
```go
package main

import (
	"fmt"

	flin "github.com/skshohagmiah/flin/clients/go"
)

func main() {
	// Create unified client (connects to port 7380)
	opts := flin.DefaultOptions("localhost:7380")
	client, _ := flin.NewClient(opts)
	defer client.Close()

	// ============ KV Store ============
	client.KV.Set("user:1", []byte("John Doe"))
	value, _ := client.KV.Get("user:1")
	fmt.Printf("user:1 = %s\n", value)
	client.KV.Delete("user:1")

	// Batch operations
	client.KV.MSet([]string{"k1", "k2"}, [][]byte{[]byte("v1"), []byte("v2")})
	values, _ := client.KV.MGet([]string{"k1", "k2"})
	fmt.Printf("fetched %d values\n", len(values))

	// ============ Message Queue ============
	client.Queue.Push("tasks", []byte("Task 1"))
	client.Queue.Push("tasks", []byte("Task 2"))
	msg, _ := client.Queue.Pop("tasks")
	fmt.Printf("Received: %s\n", string(msg))

	// ============ Stream Processing ============
	// Create topic with 4 partitions and 7 days retention (in milliseconds)
	client.Stream.CreateTopic("events", 4, 7*24*60*60*1000)

	// Publish messages
	client.Stream.Publish("events", -1, "user123", []byte(`{"action":"login"}`))

	// Subscribe consumer group
	client.Stream.Subscribe("events", "processors", "worker-1")

	// Consume messages
	messages, _ := client.Stream.Consume("events", "processors", "worker-1", 10)
	for _, m := range messages {
		fmt.Printf("Partition %d, Offset %d: %s\n", m.Partition, m.Offset, m.Value)
		// Commit offset after processing
		client.Stream.Commit("events", "processors", m.Partition, m.Offset+1)
	}

	// ============ Document Database ============
	// Register schema (FQL) - optional but recommended for performance
	client.DB.RegisterFQL(`
		CREATE TABLE users (
			id uuid,
			name text,
			email text,
			age int,
			PRIMARY KEY (id)
		);
	`)

	// Insert document
	id, _ := client.DB.Insert("users", map[string]interface{}{
		"name":  "John Doe",
		"email": "john@example.com",
		"age":   30,
	})
	fmt.Println("inserted:", id)

	// Find documents (Prisma-like API)
	users, _ := client.DB.Query("users").
		Where("age", flin.Gte, 18).
		Where("status", flin.Eq, "active").
		OrderBy("created_at", flin.Desc).
		Skip(0).
		Take(10).
		Exec()
	fmt.Printf("found %d users\n", len(users))

	// Update documents
	client.DB.Update("users").
		Where("email", flin.Eq, "john@example.com").
		Set("age", 31).
		Set("verified", true).
		Exec()

	// Delete documents
	client.DB.Delete("users").
		Where("status", flin.Eq, "inactive").
		Exec()
}
```

```bash
cd benchmarks
./kv-throughput.sh
```
Results:
- Read: 319K ops/sec (3.1μs latency)
- Write: 151K ops/sec (6.6μs latency)
- Batch: 792K ops/sec (1.26μs latency)

```bash
./queue-throughput.sh
```
Results:
- Push: 104K ops/sec (9.6μs latency)
- Pop: 100K ops/sec (10μs latency)

```bash
./stream-throughput.sh
```
Results:
- High throughput pub/sub
- Efficient partition management
- Low-latency message delivery
```bash
./db-throughput.sh
```
Results:
- Insert: 76K docs/sec (13μs latency)
- Query: Fast with secondary indexes
- Update: Efficient in-place updates
| Operation | Flin | Redis | Speedup |
|---|---|---|---|
| KV Read | 319K/s | ~100K/s | 3.2x |
| KV Write | 151K/s | ~80K/s | 1.9x |
| Queue Push | 104K/s | ~80K/s | 1.3x |
| Queue Pop | 100K/s | ~80K/s | 1.25x |
| Batch Ops | 792K/s | ~100K/s | 7.9x |
| Flag | Default | Description |
|---|---|---|
| `-node-id` | (required) | Unique node identifier |
| `-http` | `:8080` | HTTP API address |
| `-raft` | `:9080` | Raft consensus address |
| `-port` | `:7380` | Unified server port (KV+Queue+Stream+Doc) |
| `-data` | `./data` | Data directory |
| `-workers` | `64` | Worker pool size |
| `-partitions` | `64` | Number of partitions |
| `-memory` | `false` | Use in-memory storage (no persistence) |
| `-join` | (empty) | Address of node to join |
Disk Mode (Default):
- Durable persistence via BadgerDB
- Survives restarts
- Optimized for throughput
Memory Mode:
- Fastest performance
- Data lost on restart
- Use for caching/temporary data
```bash
# Memory mode
./bin/flin-server -node-id=node1 -port=:7380 -memory
```

```
flin/
├── cmd/
│   └── server/          # Server entry point
├── internal/
│   ├── kv/              # KV store abstraction
│   ├── queue/           # Queue abstraction
│   ├── stream/          # Stream abstraction
│   ├── db/              # Document store abstraction
│   │   ├── types.go     # Type definitions
│   │   ├── query.go     # Query builder
│   │   ├── helpers.go   # Utility functions
│   │   └── db.go        # Main implementation
│   ├── storage/         # Storage layer
│   │   ├── kv.go        # KV BadgerDB ops
│   │   ├── queue.go     # Queue BadgerDB ops
│   │   ├── stream.go    # Stream BadgerDB ops
│   │   └── db.go        # Document BadgerDB ops
│   ├── server/          # Server handlers
│   ├── protocol/        # Binary protocol
│   └── net/             # Connection pooling
├── clients/
│   └── go/              # Go client SDK
├── benchmarks/          # Performance tests
└── docker/              # Docker configs
```
Flin uses Raft consensus for:
- Leader election
- Log replication
- Partition management
- Automatic failover
3-Node Cluster Example:
```bash
# Node 1 (bootstrap)
./bin/flin-server -node-id=node1 -http=:8080 -raft=:9080 -port=:7380

# Node 2 (join)
./bin/flin-server -node-id=node2 -http=:8081 -raft=:9081 -port=:7381 -join=localhost:8080

# Node 3 (join)
./bin/flin-server -node-id=node3 -http=:8082 -raft=:9082 -port=:7382 -join=localhost:8080
```

- Architecture Overview - End-to-end data flow
- Performance Summary - Detailed benchmarks
- Docker Deployment - Container setup
- Benchmarks - Performance tests
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE for details
- Built with BadgerDB for storage
- Uses ClusterKit for Raft consensus
- Inspired by Redis, Kafka, and MongoDB
Made with ❤️ by the Flin team