Golang implementation of the Raft consensus protocol

raft

raft is a Go library that manages a replicated log and can be used with an FSM to manage replicated state machines. It is a library for providing consensus.

The use cases for such a library are far-reaching, such as replicated state machines which are a key component of many distributed systems. They enable building Consistent, Partition Tolerant (CP) systems, with limited fault tolerance as well.

Building

If you wish to build raft you'll need Go version 1.2+ installed.

Please check your installation with:

go version

Documentation

For complete documentation, see the associated Godoc.

To prevent complications with cgo, the primary backend MDBStore is in a separate repository, called raft-mdb. That is the recommended implementation for the LogStore and StableStore.

A pure Go backend using BoltDB is also available called raft-boltdb. It can also be used as a LogStore and StableStore.
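
For example, a node can be wired together with raft-boltdb roughly like this. This is a hedged sketch, not a full program: the package name, paths, pool size, and timeouts are arbitrary placeholders, and fsm is your own raft.FSM implementation.

package raftexample

import (
    "net"
    "os"
    "time"

    "github.com/hashicorp/raft"
    raftboltdb "github.com/hashicorp/raft-boltdb"
)

// openRaft wires a Raft node together, using raft-boltdb for both the
// LogStore and StableStore.
func openRaft(dataDir, bindAddr string, fsm raft.FSM) (*raft.Raft, error) {
    conf := raft.DefaultConfig()
    conf.LocalID = raft.ServerID(bindAddr) // any stable, unique ID works

    // BoltDB-backed log and stable storage (pure Go, no cgo).
    store, err := raftboltdb.NewBoltStore(dataDir + "/raft.db")
    if err != nil {
        return nil, err
    }

    // File-based snapshot store, retaining the two most recent snapshots.
    snaps, err := raft.NewFileSnapshotStore(dataDir, 2, os.Stderr)
    if err != nil {
        return nil, err
    }

    // TCP transport for Raft RPCs between servers.
    addr, err := net.ResolveTCPAddr("tcp", bindAddr)
    if err != nil {
        return nil, err
    }
    trans, err := raft.NewTCPTransport(bindAddr, addr, 3, 10*time.Second, os.Stderr)
    if err != nil {
        return nil, err
    }

    return raft.NewRaft(conf, fsm, store, store, snaps, trans)
}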

Tagged Releases

As of September 2017, HashiCorp will start using tags for this library to clearly indicate major version updates. We recommend you vendor your application's dependency on this library.

  • v0.1.0 is the original stable version of the library that was in master and has been maintained with no breaking API changes. This was in use by Consul prior to version 0.7.0.

  • v1.0.0 takes the changes that were staged in the library-v2-stage-one branch. This version manages server identities using a UUID, so introduces some breaking API changes. It also versions the Raft protocol, and requires some special steps when interoperating with Raft servers running older versions of the library (see the detailed comment in config.go about version compatibility). You can reference https://github.com/hashicorp/consul/pull/2222 for an idea of what was required to port Consul to these new interfaces.

    This version includes some new features as well, including non-voting servers, a new address provider abstraction in the transport layer, and more resilient snapshots.

Protocol

raft is based on "Raft: In Search of an Understandable Consensus Algorithm"

A high level overview of the Raft protocol is described below, but for details please read the full Raft paper followed by the raft source. Any questions about the raft protocol should be sent to the raft-dev mailing list.

Protocol Description

Raft nodes are always in one of three states: follower, candidate or leader. All nodes initially start out as a follower. In this state, nodes can accept log entries from a leader and cast votes. If no entries are received for some time, nodes self-promote to the candidate state. In the candidate state nodes request votes from their peers. If a candidate receives a quorum of votes, then it is promoted to a leader. The leader must accept new log entries and replicate to all the other followers. In addition, if stale reads are not acceptable, all queries must also be performed on the leader.

Once a cluster has a leader, it is able to accept new log entries. A client can request that a leader append a new log entry, which is an opaque binary blob to Raft. The leader then writes the entry to durable storage and attempts to replicate to a quorum of followers. Once the log entry is considered committed, it can be applied to a finite state machine. The finite state machine is application specific, and is implemented using an interface.
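
A hedged sketch of that flow from the client's side (the command bytes and timeout are placeholders, assuming the usual time and github.com/hashicorp/raft imports):

// propose submits a command to the leader and waits for it to be committed
// and applied to the local FSM.
func propose(r *raft.Raft, cmd []byte) (interface{}, error) {
    future := r.Apply(cmd, 10*time.Second)
    if err := future.Error(); err != nil {
        // e.g. raft.ErrNotLeader, lost leadership, or timeout
        return nil, err
    }
    // Response returns whatever the FSM's Apply returned for this entry.
    return future.Response(), nil
}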

An obvious question relates to the unbounded nature of a replicated log. Raft provides a mechanism by which the current state is snapshotted, and the log is compacted. Because of the FSM abstraction, restoring the state of the FSM must result in the same state as a replay of old logs. This allows Raft to capture the FSM state at a point in time, and then remove all the logs that were used to reach that state. This is performed automatically without user intervention, and prevents unbounded disk usage as well as minimizing time spent replaying logs.
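
For illustration only, a minimal FSM and snapshot pair satisfying those interfaces might look like the sketch below. The counter state is a stand-in for real application state, and the code assumes imports of encoding/binary, io, sync, and github.com/hashicorp/raft.

// A toy FSM: a single counter. Apply, Snapshot and Restore must be
// deterministic so that replaying the log and restoring a snapshot
// converge on the same state.
type counterFSM struct {
    mu    sync.Mutex
    count uint64
}

func (f *counterFSM) Apply(l *raft.Log) interface{} {
    f.mu.Lock()
    defer f.mu.Unlock()
    f.count++ // a real FSM would decode l.Data and mutate richer state
    return f.count
}

func (f *counterFSM) Snapshot() (raft.FSMSnapshot, error) {
    f.mu.Lock()
    defer f.mu.Unlock()
    return &counterSnapshot{count: f.count}, nil
}

func (f *counterFSM) Restore(rc io.ReadCloser) error {
    defer rc.Close()
    f.mu.Lock()
    defer f.mu.Unlock()
    return binary.Read(rc, binary.BigEndian, &f.count)
}

type counterSnapshot struct{ count uint64 }

func (s *counterSnapshot) Persist(sink raft.SnapshotSink) error {
    if err := binary.Write(sink, binary.BigEndian, s.count); err != nil {
        sink.Cancel()
        return err
    }
    return sink.Close()
}

func (s *counterSnapshot) Release() {}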

Lastly, there is the issue of updating the peer set when new servers are joining or existing servers are leaving. As long as a quorum of nodes is available, this is not an issue as Raft provides mechanisms to dynamically update the peer set. If a quorum of nodes is unavailable, then this becomes a very challenging issue. For example, suppose there are only 2 peers, A and B. The quorum size is also 2, meaning both nodes must agree to commit a log entry. If either A or B fails, it is now impossible to reach quorum. This means the cluster is unable to add, or remove a node, or commit any additional log entries. This results in unavailability. At this point, manual intervention would be required to remove either A or B, and to restart the remaining node in bootstrap mode.
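
With v1.0.0 and later of this library, those peer-set changes are driven through ID-based calls on the leader; a minimal sketch (the IDs, address, and timeout are placeholders):

// addNode asks the leader to add a new voting member; removeNode removes one.
// Both return futures whose Error() blocks until the change is committed or fails.
func addNode(r *raft.Raft, id, addr string) error {
    return r.AddVoter(raft.ServerID(id), raft.ServerAddress(addr), 0, 10*time.Second).Error()
}

func removeNode(r *raft.Raft, id string) error {
    return r.RemoveServer(raft.ServerID(id), 0, 10*time.Second).Error()
}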

A Raft cluster of 3 nodes can tolerate a single node failure, while a cluster of 5 can tolerate 2 node failures. The recommended configuration is to either run 3 or 5 raft servers. This maximizes availability without greatly sacrificing performance.

In terms of performance, Raft is comparable to Paxos. Assuming stable leadership, committing a log entry requires a single round trip to half of the cluster. Thus performance is bound by disk I/O and network latency.

Owner
HashiCorp
Consistent workflows to provision, secure, connect, and run any infrastructure for any application.
Comments
  • Rework goroutines and synchronization

    Rework goroutines and synchronization

    Today, the division of work and the synchronization between goroutines gets to be hard to follow in places. I think we can do better, to make the library more maintainable and eliminate potential race conditions from accidentally shared state. Ideally, it'll become more unit testable too.

    This commit includes a diagram and description of where I think we should go. I'm open to feedback on it. Some of it's probably underspecified, with details to be determined as we implement more; questions are fair game too.

    I held back on subdividing the main Raft module into a nonblocking goroutine and blocking helpers, but it's something we could consider. I haven't studied the code enough to know whether that'd be feasible or advantageous.

    The transition from here to there is going to take significant effort. Here are a few of the major differences:

    • Peer is structured completely differently from replication.go today.
    • Peer handles all communication including RequestVote, not just AppendEntries/InstallSnapshot as replication.go does today.
    • Fewer locks and shared state. commitment.go and raftstate.go remove locking/atomics, possibly merge into raft.go. Other goroutines don't get a handle to the Raft module's state.
    • Snapshots are created through a different flow.

    I started on the replication.go/peer.go changes, but it was before I had a good idea of where things were heading. I'll be happy to pick that up again later.

    /cc @superfell @cstlee @bmizerany @kr @slackpad @sean- hashicorp/raft#84

  • Cleanup Meta Ticket

    Cleanup Meta Ticket

    Here are the list of issues, grouped together if they might make sense as a single PR.

    State Races

    • [x] Raft state should use locks for any fields that are accessed together (e.g. index and term)

    Multi-Row Fetching

    • [ ] ~~Replace single row lookups with multi row lookups (LogStore / LogCache) (look at cases around log truncation)~~
    • [x] Verify the current term has not changed when preparing/processing the AppendEntries message #136

    Follower Replication:

    • [x] replicateTo should verify leadership is current during looping
    • [x] Check for any hot loops that do not break on stopCh

    Change Inflight Tracking

    • [x] Remove majorityQuorum
    • [x] Inflight tracker should map Node -> Last Commit Index (match index)
    • [x] Votes should be ignored from peers that are not part of peer set
    • [x] precommit may not be necessary with new inflight (likely will be cleaned up via #117)

    Improve Membership Tracking

    • [x] Peer changes should have separate channel and do not pipeline (we don't want more than one peer change in flight at a time) #117
    • [x] Peers.json should track index and any AddPeer or RemovePeer are ignored from older indexes - #117

    Crashes / Restart Issues

    • [ ] ~~Panic with old snapshots #85~~
    • [ ] ~~TrailingLogs set to 0 with restart bug #86~~

    New Tests

    • [x] Config change under loss of quorum: #127
    • Setup cluster with {A, B, C, D}
    • Assume leader is A
    • Partition {C, D}
    • Remove {B}
    • Test should fail to remove B (quorum cannot be reached)

    /cc: @superfell @ongardie @sean- @ryanuber @slackpad

  • Adds in-place upgrade and manual recovery support.

    Adds in-place upgrade and manual recovery support.

    This adds several important capabilities to help in upgrading to the new Raft protocol version:

    1. We can migrate an existing peers.json file, which is sometimes the source of truth for the old version of the library before this support was moved to be fully in snapshots + raft log as the official source.
    2. If we are using protocol version 0 where we don't support server IDs, operators can continue to use peers.json as an interface to manually recover from a loss of quorum.
    3. We left ourselves open for a more full-featured recovery manager by giving a new RecoverCluster interface access to a complete Configuration object to consume. This will allow us to manually pick which server is a voter for manual elections (set 1 to a voter and the rest to nonvoters, the 1 voter will elect itself), as well as basically any other configuration we want to set.
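
    A hedged sketch of what a recovery call under point 3 could look like. The IDs and addresses are placeholders, and conf, fsm, logs, stable, snaps, and trans are the same values you would pass when constructing the node.

    // Run offline on each remaining server before restarting it. Making exactly
    // one server a Voter and the rest Nonvoters lets that one server elect itself.
    recovery := raft.Configuration{
        Servers: []raft.Server{
            {Suffrage: raft.Voter, ID: "node-a", Address: "10.0.0.1:8300"},
            {Suffrage: raft.Nonvoter, ID: "node-b", Address: "10.0.0.2:8300"},
        },
    }
    if err := raft.RecoverCluster(conf, fsm, logs, stable, snaps, trans, recovery); err != nil {
        log.Fatalf("manual recovery failed: %v", err)
    }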

    This also gives a path for introducing Raft servers running the new version of the library into a cluster running the old code. Things would work like this:

    // These are the versions of the protocol (which includes RPC messages as
    // well as Raft-specific log entries) that this server can _understand_. Use
    // the ProtocolVersion member of the Config object to control the version of
    // the protocol to use when _speaking_ to other servers. This is not currently
    // written into snapshots so they are unversioned. Note that depending on the
    // protocol version being spoken, some otherwise understood RPC messages may be
    // refused. See isVersionCompatible for details of this logic.
    //
    // There are notes about the upgrade path in the description of the versions
    // below. If you are starting a fresh cluster then there's no reason not to
    // jump right to the latest protocol version. If you need to interoperate with
    // older, version 0 Raft servers you'll need to drive the cluster through the
    // different versions in order.
    //
    // The version details are complicated, but here's a summary of what's required
    // to get from an version 0 cluster to version 3:
    //
    // 1. In version N of your app that starts using the new Raft library with
    //    versioning, set ProtocolVersion to 1.
    // 2. Make version N+1 of your app require version N as a prerequisite (all
    //    servers must be upgraded). For version N+1 of your app set ProtocolVersion
    //    to 2.
    // 3. Similarly, make version N+2 of your app require version N+1 as a
    //    prerequisite. For version N+2 of your app, set ProtocolVersion to 3.
    //
    // During this upgrade, older cluster members will still have Server IDs equal
    // to their network addresses. To upgrade an older member and give it an ID, it
    // needs to leave the cluster and re-enter:
    //
    // 1. Remove the server from the cluster with RemoveServer, using its network
    //    address as its ServerID.
    // 2. Update the server's config to a better ID (restarting the server).
    // 3. Add the server back to the cluster with AddVoter, using its new ID.
    //
    // You can do this during the rolling upgrade from N+1 to N+2 of your app, or
    // as a rolling change at any time after the upgrade.
    //
    // Version History
    //
    // 0: Original Raft library before versioning was added. Servers running this
    //    version of the Raft library use AddPeerDeprecated/RemovePeerDeprecated
    //    for all configuration changes, and have no support for LogConfiguration.
    // 1: First versioned protocol, used to interoperate with old servers, and begin
    //    the migration path to newer versions of the protocol. Under this version
    //    all configuration changes are propagated using the now-deprecated
    //    RemovePeerDeprecated Raft log entry. This means that server IDs are always
    //    set to be the same as the server addresses (since the old log entry type
    //    cannot transmit an ID), and only AddPeer/RemovePeer APIs are supported.
    //    Servers running this version of the protocol can understand the new
    //    LogConfiguration Raft log entry but will never generate one so they can
    //    remain compatible with version 0 Raft servers in the cluster.
    // 2: Transitional protocol used when migrating an existing cluster to the new
    //    server ID system. Server IDs are still set to be the same as server
    //    addresses, but all configuration changes are propagated using the new
    //    LogConfiguration Raft log entry type, which can carry full ID information.
    //    This version supports the old AddPeer/RemovePeer APIs as well as the new
    //    ID-based AddVoter/RemoveServer APIs which should be used when adding
    //    version 3 servers to the cluster later. This version sheds all
    //    interoperability with version 0 servers, but can interoperate with newer
    //    Raft servers running with protocol version 1 since they can understand the
    //    new LogConfiguration Raft log entry, and this version can still understand
    //    their RemovePeerDeprecated Raft log entries. We need this protocol version
    //    as an intermediate step between 1 and 3 so that servers will propagate the
    //    ID information that will come from newly-added (or -rolled) servers using
    //    protocol version 3, but since they are still using their address-based IDs
    //    from the previous step they will still be able to track commitments and
    //    their own voting status properly. If we skipped this step, servers would
    //    be started with their new IDs, but they wouldn't see themselves in the old
    //    address-based configuration, so none of the servers would think they had a
    //    vote.
    // 3: Protocol adding full support for server IDs and new ID-based server APIs
    //    (AddVoter, AddNonvoter, etc.), old AddPeer/RemovePeer APIs are no longer
    //    supported. Version 2 servers should be swapped out by removing them from
    //    the cluster one-by-one and re-adding them with updated configuration for
    //    this protocol version, along with their server ID. The remove/add cycle
    //    is required to populate their server ID. Note that removing must be done
    //    by ID, which will be the old server's address.
    
    // These are versions of snapshots that this server can _understand_. Currently,
    // it is always assumed that this server generates the latest version, though
    // this may be changed in the future to include a configurable version.
    //
    // Version History
    //
    // 0: Original Raft library before versioning was added. The peers portion of
    //    these snapshots is encoded in the legacy format which requires decodePeers
    //    to parse. This version of snapshots should only be produced by the
    //    unversioned Raft library.
    // 1: New format which adds support for a full configuration structure and its
    //    associated log index, with support for server IDs and non-voting server
    //    modes. To ease upgrades, this also includes the legacy peers structure but
    //    that will never be used by servers that understand version 1 snapshots.
    //    Since the original Raft library didn't enforce any versioning, we must
    //    include the legacy peers structure for this version, but we can deprecate
    //    it in the next snapshot version.
    

    This isn't super great, but will give us a path to keep things compatible with existing clusters as we roll out the changes. We can make some higher-level tooling in Consul to help orchestrate this.
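
    As a concrete (hedged) illustration of the staged upgrade described above, each release of the application would pin the protocol version in its Config:

    conf := raft.DefaultConfig()
    // App version N (first release using the versioned library): speak protocol 1
    // so existing version 0 servers remain compatible.
    // App version N+1 (requires N everywhere): bump to 2.
    // App version N+2 (requires N+1 everywhere): bump to 3.
    conf.ProtocolVersion = 2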

  • [v2] Rejecting vote request... since we have a leader

    [v2] Rejecting vote request... since we have a leader

    I am using the v2-stage-one branch and while everything seems to work fine for the most part, I do have one issue:

    I have a cluster of 3 nodes. I take one node down gracefully (used consul as an example of leave/shutdown logic, and waiting for changes to propagate) and the cluster maintains itself at 2 nodes. If I then, however, try to restart the same node (with any combination of ServerID and addr:port), the new node sits there and requests a leader vote forever, with the other two nodes logging [WARN] raft: Rejecting vote request from ... since we already have a leader

    I used Consul as an example of the implementation, fwiw.

  • buffer applyCh with up to conf.MaxAppendEntries

    buffer applyCh with up to conf.MaxAppendEntries

    This change improves throughput in busy Raft clusters. By buffering messages, individual RPCs contain more Raft messages. In my tests, this improves throughput from about 4.5 kqps to about 5.5 kqps.

    As-is: n1-standard-8-c8deaa9d333f69fb56c8935036e7ca5c-no-buffer

    With my change: n1-standard-8-f2fed4ebe05df23eb322c805b503a144

    (Both tests were performed with 3 n1-standard-8 nodes on Google Compute Engine in the europe-west1-d region.)
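
    For reference, the batch size involved here is governed by the existing MaxAppendEntries field on the config; a hedged sketch of tuning it (the value shown is arbitrary):

    conf := raft.DefaultConfig()
    // Larger values let each AppendEntries RPC carry more log entries, trading a
    // little latency for throughput. The library rejects values it considers too
    // large (1024 at the time of writing), so stay within the documented bounds.
    conf.MaxAppendEntries = 256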

  • Library needs more active maintenance.

    Library needs more active maintenance.

    A bunch of useful PRs are just sitting unmerged, with no comments from maintainers about when they'll be merged.

    It's gotten to the point that I've been considering forking the library and merging them manually to get the benefit of everyone's work.

  • Fixes races with Raft configurations member, adds GetConfiguration accessor.

    Fixes races with Raft configurations member, adds GetConfiguration accessor.

    This adds GetConfiguration() which is item 7 from https://github.com/hashicorp/raft/issues/84#issuecomment-228928110. This also resolves item 12.

    While adding that I realized that we should move the "no snapshots during config changes" outside the FSM snapshot thread to fix racy access to the configurations and also because that's not really a concern of the FSM. That thread just exists to let the FSM do its thing, but the main snapshot thread is the one that should know about the configuration.

    ~~To add an external GetConfiguration() API I added an RW mutex which I'm not super thrilled about, but it lets us correctly manipulate this state from the outside, and among the different Raft threads.~~
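
    For reference, reading the membership through the new accessor looks roughly like this (r is a *raft.Raft; the logging is illustrative):

    future := r.GetConfiguration()
    if err := future.Error(); err != nil {
        log.Fatalf("could not read configuration: %v", err)
    }
    for _, srv := range future.Configuration().Servers {
        log.Printf("server id=%s addr=%s suffrage=%v", srv.ID, srv.Address, srv.Suffrage)
    }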

  • add FileStore (like InmemStore, but persistent)

    add FileStore (like InmemStore, but persistent)

    This is a trivial implementation of a persistent logstore. It mostly follows InmemStore, except it writes to files (one file per log entry, one file per stablestore key).
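
    For context, a store like this has to satisfy the library's LogStore and StableStore interfaces, reproduced here for reference (see the Godoc for the authoritative definitions):

    type LogStore interface {
        FirstIndex() (uint64, error)
        LastIndex() (uint64, error)
        GetLog(index uint64, log *Log) error
        StoreLog(log *Log) error
        StoreLogs(logs []*Log) error
        DeleteRange(min, max uint64) error
    }

    type StableStore interface {
        Set(key []byte, val []byte) error
        Get(key []byte) ([]byte, error)
        SetUint64(key []byte, val uint64) error
        GetUint64(key []byte) (uint64, error)
    }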

    As you write in the README, it may be desirable to avoid the cgo dependency which raft-mdb brings, which is why I wrote this ;).

    With regards to stability, I’m running this code in a pet project of mine without noticing any issues. Additionally, I’ve swapped the InmemStore for my FileStore in all *_test.go files in the raft repository and successfully ran the tests.

    With regards to performance, I’ve run a couple of benchmarks (end-to-end messages/s in the actual application I’m working on) using raft-mdb vs. this FileStore, with various little tweaks:

    All tests were run by measuring the messages/s over 13 runs, then averaging the results. The underlying storage is an INTEL SSDSC2BP48 (480G consumer SSD).

    raft-mdb+sync+cache:   557 msgs/s (recommended backend)
    filestore:             736 msgs/s (no fsync!)
    filestore+cache:      1131 msgs/s (no fsync!)
    filestore+sync:        418 msgs/s
    filestore+sync+cache:  516 msgs/s
    filestore+rename:      718 msgs/s (!)
    

    filestore+cache is FileStore, but with a proof-of-concept cache: every log entry is kept in memory and GetLog() just copies the log entry instead of reading from disk. This is obviously much faster, but a real cache would need to be developed. I’m thinking keeping config.MaxAppendEntries in a ringbuffer should fit the access pattern quite well.

    filestore+sync is FileStore, but with defer f.Sync() after defer f.Close() in StoreLogs().

    filestore+sync+cache is the combination of both of the above.

    filestore+rename is FileStore, but writing into a temporary file which is then renamed to its final path. I’m curious to hear what you have to say about that. The semantics this code guarantees are that a log entry is either fully present or not present at all (in case of power loss for example). AIUI, the raft protocol should be able to cope with this situation. With regards to performance, this outperforms the current raft-mdb (1.2x).

    Perhaps it would even make sense to replace InmemStore by FileStore entirely — you’re saying InmemStore should only be used for unit tests, and FileStore can do that, too. That way, people wouldn’t even have the chance to abuse InmemStore and use it in production :).

    But that can wait for follow-up commits. For now, I’m mostly interested to see whether you’d want to merge this? I don’t think every project (which doesn’t want to use raft-mdb) should need to implement this independently :).

  • Thread saturation metrics 📈

    Thread saturation metrics 📈

    Adds metrics suggested in #488, to record the percentage of time the main and FSM goroutines are busy with work vs available to accept new work, to give operators an idea of how close they are to hitting capacity limits.

    We update gauges (at most) once a second, possibly less if the goroutines are idle. This should be ok because it's unlikely that a goroutine would go from very high saturation to being completely idle; so at worst we'll leave the gauge on the previous (low) value for a while.

  • [FIXED] LogCache should not cache on StoreLogs error

    [FIXED] LogCache should not cache on StoreLogs error

    LogCache was caching logs and then invoking store.StoreLogs(). However, if StoreLogs() fails to store, LogCache.Get() could still return logs that had not been persisted into storage, which could lead to a node failing to restart with a "log not found" panic.

    This PR ensures that LogCache only caches logs if StoreLogs was successful.
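
    A hedged sketch of the ordering the fix enforces (the store, mu, and cache fields here are illustrative, not the library's actual internals):

    func (c *LogCache) StoreLogs(logs []*raft.Log) error {
        // Write through to the underlying store first; only populate the
        // in-memory cache once the logs are known to be durable.
        if err := c.store.StoreLogs(logs); err != nil {
            return err
        }
        c.mu.Lock()
        for _, l := range logs {
            c.cache[l.Index%uint64(len(c.cache))] = l // illustrative ring-buffer slot
        }
        c.mu.Unlock()
        return nil
    }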

    Resolves #429

    Signed-off-by: Ivan Kozlovic [email protected]

  • What happens when FSM.Apply fails?

    What happens when FSM.Apply fails?

    The function looks like Apply(*Log) interface{}. It does not support returning an error. What is the best practice for treating failure scenarios in this case? If Apply fails in your FSM implementation, what do you do? How do you notify Raft that this failed?
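
    One commonly used pattern (a sketch, not an official answer) is to return the error as the Apply response and have callers inspect it through the ApplyFuture. Here myFSM and applyCommand are hypothetical names for your own types and logic:

    // In the FSM: return the failure as the response value.
    func (f *myFSM) Apply(l *raft.Log) interface{} {
        if err := f.applyCommand(l.Data); err != nil { // applyCommand is your own logic
            return err
        }
        return nil
    }

    // At the call site: the future's Response surfaces whatever Apply returned.
    future := r.Apply(cmd, 10*time.Second)
    if err := future.Error(); err != nil {
        // Raft-level failure: not leader, timeout, shutdown, ...
        return err
    }
    if err, ok := future.Response().(error); ok && err != nil {
        // Application-level failure reported by the FSM.
        return err
    }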

  • New node not receiving raft.Configuration after leader calls AddVoter

    New node not receiving raft.Configuration after leader calls AddVoter

    Consider the following:

    Node A constructs a new Raft instance, calling BootstrapCluster to provide an initial server list consisting just of itself (no other nodes). The log output looks like this:

    raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:A Address:A}]"
    

    Some time passes and now I want to add Node B as a voter. B has never participated in the quorum and has no logs/snapshots/other persisted state - it's a clean slate.

    From what I've gathered, I need to call AddVoter on A. I do this and get the following log output:

    raft: updating configuration: command=AddVoter server-id=... server-addr=... servers="[{Suffrage:Voter ID:A Address:A}{Suffrage:Voter ID:B Address:B}]"                 
    raft: added peer, starting replication: peer=B
    

    It seems that the change is accepted. But now, AFAIK, I should not call BootstrapCluster on B, as it would conflict with the newly committed Configuration of A, which has the state we want (A and B in a happy little quorum).

    I'm using Libp2pTransport as a transport layer. I set up B with a raft.Transport that has A in its peerstore, and do the same with B in A's peerstore:

    hostA.Peerstore().AddAddrs(b.ID, b.Addrs, peerstore.PermanentAddrTTL)
    

    And then calling NewRaft() on B, I get the following on A:

    raft: failed to contact: server-id=B time=501.412479ms                              
    raft: failed to contact quorum of nodes, stepping down
    

    And on B:

    raft: initial configuration: index=0 servers=[]                                                                                        
    Assignment succeeded; type now  ELECTABLE
    

    If I instead try to also bootstrap B with both nodes in the servers part of Configuration, I get this on A:

    raft: appendEntries rejected, sending older logs: peer="{Voter B B}" next=2
    raft: pipelining replication: peer="{Voter B B}"
    

    And this on B:

    raft: entering follower state: follower="Node at B [Follower]" leader-address= leader-id=
    raft: failed to get previous log: previous-index=3 last-index=1 error="log not found" 
    

    Can you tell what's going wrong here?

  • How to bootstrap a raft cluster with a fixed node setting?

    How to bootstrap a raft cluster with a fixed node setting?

    Hi, All. I'm new to raft.

    In my use case, I use a fixed node setting in a config file:

    [raft]
    bind_address = "localhost:11000"
    nodes = [
        "localhost:11000",
        "localhost:11001",
    ]
    

    All raft nodes will have the same nodes config, so they can start voting as soon as they start running. So if I have three nodes (A, B, and C), their configs should all have the same nodes setting (i.e. nodes = ["A", "B", "C"]).

    The second time I bootstrapped the cluster, I got the "bootstrap only works on new clusters" error message. I know that bootstrapping should be done only once, but how can I support the fixed raft nodes config?

    For example, if I change the above config to

    [raft]
    bind_address = "localhost:11000"
    nodes = [
        "localhost:11000",
        "localhost:11002", # Changed
    ]
    

    On the next raft run, raft should ask 11000 and 11002 for data exchange, and 11001 should automatically be removed from raft.Configuration. You can think of the raft nodes config as being tied to the raft.Configuration.
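
    Not an official answer, but a common pattern is to bootstrap only when no prior Raft state exists, roughly as sketched below. Here nodesFromConfigFile is a placeholder for your parsed config, and conf, logs, stable, snaps, and trans are the values used to construct the node.

    // Only bootstrap when there is no prior Raft state on disk; on later runs the
    // persisted configuration (snapshots + log) is the source of truth, and
    // membership changes go through AddVoter/RemoveServer on the leader.
    hasState, err := raft.HasExistingState(logs, stable, snaps)
    if err != nil {
        log.Fatalf("checking raft state: %v", err)
    }
    if !hasState {
        var servers []raft.Server
        for _, addr := range nodesFromConfigFile { // e.g. ["localhost:11000", "localhost:11001"]
            servers = append(servers, raft.Server{
                ID:      raft.ServerID(addr),
                Address: raft.ServerAddress(addr),
            })
        }
        err = raft.BootstrapCluster(conf, logs, stable, snaps, trans, raft.Configuration{Servers: servers})
        if err != nil {
            log.Fatalf("bootstrap: %v", err)
        }
    }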

  • raft.GetConfiguration() - runtime error: invalid memory address or nil pointer dereference

    raft.GetConfiguration() - runtime error: invalid memory address or nil pointer dereference

    The raft.GetConfiguration() function throws a "runtime error: invalid memory address or nil pointer dereference" panic when called at server startup.

    2022/11/16 16:50:35 http: panic serving 127.0.0.1:PORT: runtime error: invalid memory address or nil pointer dereference
    goroutine 217 [running]:
    net/http.(*conn).serve.func1(0xc000502280)
    	/usr/lib/go/src/net/http/server.go:1800 +0x142
    panic(0x7f8c067fd940, 0x7f8c06e295e0)
    	/usr/lib/go/src/runtime/panic.go:975 +0x3f7
    github.com/hashicorp/raft.(*Raft).getLatestConfiguration(0x0, 0x1, 0xc0000a8ba0, 0xc00054d4d0)
    	<Path>/github.com/hashicorp/raft/raft.go:1825 +0x13
    github.com/hashicorp/raft.(*Raft).GetConfiguration(0x0, 0x65646f6e2f676e69, 0x7a73)
    

    Do we need a nil check for latestConfiguration here? https://github.com/hashicorp/raft/blob/6b4e32088e0bda22ea219fc89b0ee47f420e2b0b/raft.go#L1989

    It works fine after the server has booted up.

  • Raft pre-vote extension implementation

    Raft pre-vote extension implementation

    This is an idea to implement pre-vote without breaking backward compatibility I'm exploring.

    Pre-vote, described in the original Raft thesis, is an extension to Raft that helps reduce term inflation and cluster instability when a node is partitioned from the cluster for a while and then comes back online.

    In a normal election, when a node has no leader it transitions into the candidate state. In that state the node tries to win an election by incrementing its term and sending a vote request to the other nodes in the cluster.

    If the node wins the election, it becomes the leader and the cluster is stable again. But if the node does not win the election, say because it is not connected to a quorum of nodes in the cluster, it retries by incrementing its term again and running another round of election. That can happen many times, which creates term inflation.

    Once the node is reconnected to a quorum of nodes, its term is almost certainly higher than the other nodes' in the cluster, since it has been running rounds of elections non-stop while the rest of the cluster most likely stayed stable. This causes the node to force the current Raft leader to step down, and another round of election is run. That's not desirable behaviour, because the cluster was already stable and that node most likely doesn't have the latest log, so it won't be able to win the election anyway.

    In this PR, a new bool called preVote is introduced to both RequestVoteRequest and RequestVoteResponse. When a node transitions to the candidate state and pre-vote is activated in the config, it runs a round of pre-election by sending a RequestVoteRequest with preVote set to true and a term equal to its current term + 1, but without incrementing its current term.

    If the node that receives the request supports pre-vote and has pre-vote activated, it responds with a RequestVoteResponse with preVote set to true and grants its vote as in a normal election, but without incrementing its term.

    Otherwise, the node responds with a RequestVoteResponse with preVote set to false and grants its vote as it does for a normal election, including incrementing its term.

    The candidate node counts all the votes with preVote set to true (grantedPrevotes) and all the votes with preVote set to false (grantedVotes).

    If grantedPrevotes reaches the needed number of votes, the candidate runs a normal election.

    If grantedVotes reaches the needed number of votes, the candidate considers itself to have won the election and transitions to leader.

  • [COMPLIANCE] Update MPL-2.0 LICENSE

    [COMPLIANCE] Update MPL-2.0 LICENSE

    Hi there 👋

    This PR was auto-generated as part of an internal review of public repositories that are not in compliance with HashiCorp's licensing standards.

    Frequently Asked Questions

    Why am I getting this PR? This pull request was created because one or more of the following criteria was found:
    • This repo did not previously have a LICENSE file
    • A LICENSE file was present, but had a non-conforming name (e.g., license.txt)
    • A LICENSE file was present, but was missing an appropriate copyright statement

    More info is available in the RFC

    How do you determine the copyright date? The copyright date given in this PR is supposed to be the year the repository or project was created (whichever is older). If you believe the copyright date given in this PR is not valid, please reach out to:

    #proj-software-copyright

    I don't think this repo should be licensed under the terms of the Mozilla Public License 2.0. Who should I reach out to? If you believe this repository should not use an MPL 2.0 License, please reach out to [email protected]. Exemptions are considered on a case-by-case basis, but common reasons include if the project is co-managed by another entity that requires differing license terms, or if the project is part of an ecosystem that commonly uses a different license type (e.g., MIT or Apache 2.0).

    Please approve and merge this PR in a timely manner to keep this source code compliant with our OSS license agreement. If you have any questions or feedback, reach out to #proj-software-copyright.

    Thank you!


    Made with :heart: @HashiCorp

A naive implementation of Raft consensus algorithm.

This implementation is used to learn/understand the Raft consensus algorithm. The code implements the behaviors shown in Figure 2 of the Raft paper wi

Dec 3, 2021
This is my implementation of Raft consensus algorithm that I did for own learning.

This is my implementation of Raft consensus algorithm that I did for own learning. Please follow the link to learn more about raft consensus algorithm https://raft.github.io. And Soon, I will be developing same algorithm in Java as well

Jan 12, 2022
The TinyKV course builds a key-value storage system with the Raft consensus algorithm.

The TinyKV Course The TinyKV course builds a key-value storage system with the Raft consensus algorithm. It is inspired by MIT 6.824 and TiKV Project.

Nov 19, 2021
Raft: a consensus algorithm for managing a replicated log

Raft Consensus Algorithm Raft is a consensus algorithm for managing a replicated

Dec 20, 2021
Distributed disk storage database based on Raft and Redis protocol.

IceFireDB Distributed disk storage system based on Raft and RESP protocol. High performance Distributed consistency Reliable LSM disk storage Cold and

Dec 31, 2022
An implementation of a distributed KV store backed by Raft tolerant of node failures and network partitions 🚣

barge A simple implementation of a consistent, distributed Key:Value store which uses the Raft Concensus Algorithm. This project launches a cluster of

Nov 24, 2021
⟁ Tendermint Core (BFT Consensus) in Go

Tendermint Byzantine-Fault Tolerant State Machines. Or Blockchain, for short. Branch Tests Coverage Linting master Tendermint Core is Byzantine Fault

Dec 26, 2022
This is a comprehensive system that simulate multiple servers’ consensus behavior at local machine using multi-process deployment.

Raft simulator with Golang This project is a simulator for the Raft consensus protocol. It uses HTTP for inter-server communication, and a job schedul

Jan 30, 2022
A feature complete and high performance multi-group Raft library in Go.

Dragonboat - A Multi-Group Raft library in Go / 中文版 News 2021-01-20 Dragonboat v3.3 has been released, please check CHANGELOG for all changes. 2020-03

Dec 30, 2022
A distributed MySQL binlog storage system built on Raft

What is kingbus? 中文 Kingbus is a distributed MySQL binlog store based on raft. Kingbus can act as a slave to the real master and as a master to the sl

Dec 31, 2022
A linearizability distributed database by raft and wisckey.

AlfheimDB A linearizability distributed database by raft and wisckey, which supports redis client. Build This project build by mage, you will need ins

Jul 18, 2022
Easy to use Raft library to make your app distributed, highly available and fault-tolerant

An easy to use customizable library to make your Go application Distributed, Highly available, Fault Tolerant etc... using Hashicorp's Raft library wh

Nov 16, 2022
The pure golang implementation of nanomsg (version 1, frozen)

mangos NOTE: This is the legacy version of mangos (v1). Users are encouraged to use mangos v2 instead if possible. No further development is taking pl

Dec 7, 2022
A Golang implementation of the Umee network, a decentralized universal capital facility in the Cosmos ecosystem.

Umee A Golang implementation of the Umee network, a decentralized universal capital facility in the Cosmos ecosystem. Umee is a Universal Capital Faci

Jan 3, 2023
Golang implementation of distributed mutex on Azure lease blobs

Distributed Mutex on Azure Lease Blobs This package implements distributed lock available for multiple processes. Possible use-cases include exclusive

Jul 31, 2022
The Go language implementation of gRPC. HTTP/2 based RPC

gRPC-Go The Go implementation of gRPC: A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. For more information

Jan 7, 2023
A simple go implementation of json rpc 2.0 client over http

JSON-RPC 2.0 Client for golang A go implementation of an rpc client using json as data format over http. The implementation is based on the JSON-RPC 2

Dec 15, 2022
Simplified distributed locking implementation using Redis

redislock Simplified distributed locking implementation using Redis. For more information, please see examples. Examples import ( "fmt" "time"

Dec 24, 2022