A building block for Raft-powered applications. Dockyard provides the Raft state machine and calls out to your application to handle the fun parts.
A Makefile is provided here to simplify the build. To build and test, run:
```
make
```

A naive test implementation exists that uses three nodes:

- `local_test_12300` at address `127.0.0.1:12300` (configuration in `server_00/config.json`),
- `local_test_12301` at address `127.0.0.1:12301` (configuration in `server_01/config.json`), and
- `local_test_12302` at address `127.0.0.1:12302` (configuration in `server_02/config.json`).
👩‍💻 To try it out, run `make` in the repository root, then execute `../../../_output/dockyard` from each of the
configuration directories. This procedure is a bit involved, but it's a naive test after all. 🤷
While the first test, based on [this example](https://github.com/lemonlatte/raft-example), used BoltDB for storage, the Badger implementation of this example was migrated and updated to work with the newer Raft library. See the Storage section for the reasoning.
The Raft setup writes to `/tmp/raft`, with subdirectories for each node (see the respective configuration files).
The first node, `local_test_12300`, will attempt to bootstrap the Raft cluster with all three nodes as voters; this is currently baked into the application. The Hashicorp Raft implementation will fail the call to `Raft.BootstrapCluster()` if the cluster already exists, but this error is safe to ignore.
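For illustration, the baked-in bootstrap could look roughly like the sketch below. This is not the repository's actual code; the node IDs and addresses are taken from the test configuration above, and error handling is reduced to logging, since the "already bootstrapped" failure is safe to ignore in this setup.

```go
package bootstrap

import (
	"log"

	"github.com/hashicorp/raft"
)

// bootstrapTestCluster tries to bootstrap the cluster with all three local
// test nodes as voters, mirroring the behavior described above.
func bootstrapTestCluster(r *raft.Raft) {
	cfg := raft.Configuration{
		Servers: []raft.Server{
			{ID: "local_test_12300", Address: "127.0.0.1:12300"},
			{ID: "local_test_12301", Address: "127.0.0.1:12301"},
			{ID: "local_test_12302", Address: "127.0.0.1:12302"},
		},
	}
	if err := r.BootstrapCluster(cfg).Error(); err != nil {
		// BootstrapCluster fails when the cluster already exists;
		// for this naive test setup the error is only logged.
		log.Printf("bootstrap: %v", err)
	}
}
```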
For a real-life implementation we should likely make this bootstrapping stage explicit and follow the procedure suggested in the documentation (sketched below):
- Attempt to bootstrap the cluster with only the current node as a voter,
- Wait for it to become the elected leader,
- Call `Raft.AddVoter()` individually for each remaining node.
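A minimal sketch of that procedure, again assuming a configured `*raft.Raft` instance and using the test nodes' IDs and addresses as hypothetical peers (not code from this repository):

```go
package bootstrap

import (
	"time"

	"github.com/hashicorp/raft"
)

// explicitBootstrap bootstraps a single-node cluster, waits for leadership,
// and then adds the remaining nodes as voters one by one.
func explicitBootstrap(r *raft.Raft, localID raft.ServerID, localAddr raft.ServerAddress) error {
	// 1. Bootstrap with only the current node as a voter.
	cfg := raft.Configuration{
		Servers: []raft.Server{{ID: localID, Address: localAddr}},
	}
	if err := r.BootstrapCluster(cfg).Error(); err != nil {
		return err
	}

	// 2. Wait for this node to become the elected leader.
	for r.State() != raft.Leader {
		time.Sleep(100 * time.Millisecond)
	}

	// 3. Add each remaining node as a voter.
	peers := map[raft.ServerID]raft.ServerAddress{
		"local_test_12301": "127.0.0.1:12301",
		"local_test_12302": "127.0.0.1:12302",
	}
	for id, addr := range peers {
		if err := r.AddVoter(id, addr, 0, 10*time.Second).Error(); err != nil {
			return err
		}
	}
	return nil
}
```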
For further cluster management there is additional functionality:
- `Raft.AddNonvoter()` for adding nodes that receive state but do not vote.
- `Raft.DemoteVoter()` for taking away a node's vote (making it a non-voter, see above).
- `Raft.RemoveServer()` for removing a node from the cluster entirely.
- `Raft.LeadershipTransfer()` for transferring leadership to another server; this appears similar to MongoDB's `rs.stepDown()` and might be good behavior when shutting down (see the sketch after this list).
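For instance, a graceful shutdown that hands off leadership first could look something like this hypothetical sketch (again assuming a configured `*raft.Raft` instance; not code from this repository):

```go
package bootstrap

import (
	"log"

	"github.com/hashicorp/raft"
)

// gracefulShutdown transfers leadership away from this node (if it is the
// leader) before shutting down the local Raft instance.
func gracefulShutdown(r *raft.Raft) error {
	if r.State() == raft.Leader {
		// Hand leadership to another server, comparable to rs.stepDown().
		if err := r.LeadershipTransfer().Error(); err != nil {
			log.Printf("leadership transfer failed: %v", err)
		}
	}
	// Shut down the local node and wait for it to finish.
	return r.Shutdown().Error()
}
```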
This project tries to follow the [golang-standards/project-layout](https://github.com/golang-standards/project-layout) conventions.
Both BoltDB and Badger are key-value stores with native Go integration. While BoltDB is modeled on LMDB, Badger is an original design. According to some benchmarks and blog posts, Badger provides substantially better write performance (roughly 7x to 100x) than BoltDB, while BoltDB slightly outperforms Badger (roughly 2x) on random reads.
We assume that Raft logs are written far more often than they are read (in random order), so trading slightly worse random-read performance for improved write performance appears to be an acceptable tradeoff.
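For reference, whichever backend is chosen only has to satisfy hashicorp/raft's `LogStore` and `StableStore` contracts, so the storage engine stays an implementation detail behind these interfaces. The listing below paraphrases them; check the library source for the authoritative definitions.

```go
package raftsketch

import "github.com/hashicorp/raft"

// LogStore mirrors raft.LogStore: storage and retrieval of Raft log entries.
type LogStore interface {
	FirstIndex() (uint64, error)
	LastIndex() (uint64, error)
	GetLog(index uint64, log *raft.Log) error
	StoreLog(log *raft.Log) error
	StoreLogs(logs []*raft.Log) error
	DeleteRange(min, max uint64) error
}

// StableStore mirrors raft.StableStore: durable key/value storage for
// cluster metadata such as the current term and the last vote.
type StableStore interface {
	Set(key []byte, val []byte) error
	Get(key []byte) ([]byte, error)
	SetUint64(key []byte, val uint64) error
	GetUint64(key []byte) (uint64, error)
}
```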
The Shipyard icon is licensed under the CC BY 3.0 license by Chanut is Industries.