Distributed Consensus and Raft - A Survey

Ziming Wang & Yihang Li | Jun 17, 2024

  • This was part of my project work for COMP90020 (Distributed Algorithms) at the University of Melbourne.
  • Due to limitations of my static site generator, the bibliography format in this blog is pretty weird and not aligned to a formal reference style.

0 - Abstract

Distributed consensus is one of the foundations of fault-tolerant distributed systems. This paper first provides an overview of the theoretical foundations of distributed consensus, including the Byzantine Generals Problem and the FLP impossibility result. It then discusses the Raft algorithm in depth, including its approach to leader election and log replication, as well as proposed enhancements that address its limitations. Finally, the paper conducts a comparative analysis of Paxos and Raft, focusing on their understandability, fault tolerance and performance.

Keywords: Distributed Consensus, State Machine Replication, Paxos, Raft

1 - Consensus

Consensus is a fundamental problem in distributed systems. It refers to the process by which the members of a group collaboratively agree on a common decision. Fault tolerance is a critical concern in distributed systems, and consensus algorithms are expected to survive failures to some extent in order to maintain availability 1.

The paper by Pease et al. 2, published in 1980, is one of the pioneering works on consensus algorithms. It describes and formally proves the problem of interactive consistency: all correct processes can agree on a common vector despite the failure of some processes, as long as the total number of processes is more than three times the number of faulty processes. Another foundation is the 1982 paper by Lamport et al. 3, which introduced the Byzantine Generals Problem. The original problem and its variations have greatly influenced subsequent research directions in consensus and fault-tolerant distributed systems.

Consensus algorithms and many relevant problems aim to satisfy three important properties 4:

  • Termination: The algorithm eventually terminates, and every correct process ends up with its agreement variable set.
  • Agreement: All correct processes agree on the same value.
  • Integrity (the exact definition varies): If all correct processes start by proposing the same value v, then they must eventually all agree on v.

1.1 - The Byzantine Problem Family

The Byzantine Generals problem proposed by Lamport et al. 3 in 1982 is one of the foundations of distributed consensus. It highlights the challenges of reaching consensus when some processes exhibit arbitrary behaviour. The original problem and its variations have been studied extensively and serve as the foundation for many failure models and consensus algorithms.

The original paper 3 describes a scenario in which a group of generals must reach a consensus decision to attack or retreat from a city. They can only communicate through messengers, and some of the generals may be traitors who send false information to their peers to disrupt the decision-making process. The goal is to find an algorithm that ensures the loyal generals reach a correct, consistent decision despite the disruption caused by false information. The authors proved that with unauthenticated message passing (i.e. oral messages carried by messengers), consensus can be achieved if and only if the number of traitors is less than one third of the total number of generals. However, if authentication is applied (i.e. unforgeable messages), the problem is solvable regardless of the number of traitors. The reasoning behind the Byzantine Generals Problem maps naturally onto distributed computing settings, where communication usually happens over a channel and faulty processes exhibiting arbitrary behaviour can cause severe damage, while a process can use authentication mechanisms such as digital signatures to verify the integrity of messages.

1.1.1 - Variations of the Byzantine Generals Problem

The Byzantine Generals problem has many variations. The Byzantine Generals/Broadcast problem and the Byzantine Agreement/Consensus problem are two closely related types 5. The original paper 3 by Lamport et al. mostly discussed a broadcast-based problem: there is a commander that broadcasts a message to all nodes. Hence, the integrity definition there focuses on all correct nodes agreeing on the commander's value whenever the commander is correct. In the Byzantine agreement/consensus problem, by contrast, there is no special commander, and integrity requires that if all correct nodes start with the same value, they all decide on that value.

In addition, the 1983 paper 6 by Lamport discussed the Weak Byzantine Generals Problem. In this variation, the correct processes are allowed to bypass the integrity requirement and agree on a value different from the general's initial value when some processes fail. This variation applies to distributed transaction scenarios, where all processes should abort if some of them fail, even if the commander's initial command was to commit. The 1986 paper by Dolev et al. 7 introduced another variation, Byzantine Generals with approximate agreement, where the processes start with arbitrary real values and only need to agree on approximately equal values. This variation is useful in practice for agreements that do not require exact accuracy, such as synchronizing clocks or calibrating sensors.

Many more variations have been developed for real-world scenarios, taking into account different requirements, system interaction models, and failure models.

1.2 - The FLP Impossibility and System Synchrony

Distributed systems can be classified into two main types: synchronous systems and asynchronous systems 4. In a synchronous distributed system, there are known bounds on the process communication delay, the drift of processes' local clocks, and the process execution time. An asynchronous distributed system, in contrast, has no bounds on any of these three values.

Fischer et al. 8 presented an important theorem in 1985, known as the FLP impossibility result. The work proved that no deterministic algorithm can guarantee consensus in an asynchronous distributed system if even a single process may crash: even with reliable message channels and only non-Byzantine (crash) failures, one process stopping at an unpredictable moment can prevent any consensus algorithm from terminating. Despite the fact that a perfect consensus algorithm cannot exist in asynchronous distributed systems, the authors argued that the conditions under which the FLP impossibility result holds are quite strict and rarely encountered in practice. In most real-world distributed systems, certain assumptions can be made about the system's behaviour to refine the interaction model. Therefore, researchers began to investigate weaker synchrony assumptions that would still allow for the development of fault-tolerant consensus algorithms 9.

Notable works include the 1987 paper by Dolev et al. 10 and the 1988 paper by Dwork et al. 11. In 10, the authors investigated five critical parameters: process synchrony, communication synchrony, message ordering synchrony, transmission type (broadcast or point-to-point), and the atomicity of message send/receive, analysing their impact on consensus fault tolerance. They proved that consensus under failures is possible for several combinations of these parameters. Building on the ideas from 10, another paper 11 formally introduced the concept of partial synchrony and reported the effect of varying degrees of partial synchrony on the fault tolerance of consensus algorithms. Both papers laid theoretical foundations for later consensus algorithms and refinements of weaker synchrony guarantees.

In addition, Chandra and Toueg 12 proposed a framework that uses failure detectors to augment consensus under weaker synchrony assumptions. Building upon the works of 10 and 11, they investigated the role of failure detectors when partial synchrony assumptions are incorporated into the asynchronous model. By defining eight classes of failure detectors with different completeness and accuracy properties, the authors showed that consensus can be solved in weakly synchronous systems using sufficiently capable failure detectors.

For a more practical example, the 1999 paper by Castro and Liskov 13 introduces a state machine replication algorithm known as PBFT for achieving Byzantine fault tolerance (BFT). The authors pointed out that previous algorithms relied on the synchronous system assumption, which can be exploited by external attacks such as denial-of-service in the real world. Instead, they circumvented the FLP impossibility result by making several weak synchrony assumptions. For example, the algorithm assumes that the message transmission delay does not grow indefinitely, so that the liveness of the system can be guaranteed and clients eventually receive responses to their requests.

2 - State Machine Replication and Paxos

2.1 - State Machine Replication

Consensus algorithms have many applications, one of which is building replicated state machines. State machine replication ensures that all nodes within a cluster maintain the same state by executing the same sequence of commands, thereby achieving replication. The concept of state machine replication was first proposed by Lamport 14 in 1978. Since then, such algorithms have been widely adopted in many domains to improve availability and fault tolerance. For example, many databases provide a replication cluster, where the master node coordinates the replication process and ensures that all replica nodes stay synchronized. In this setup, the cluster can handle highly concurrent reads by using load balancers to distribute incoming requests across multiple nodes. Furthermore, if the master node fails, one of the replica nodes can seamlessly take over as the new master.
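To make the idea concrete, here is a minimal sketch of the property state machine replication relies on: a deterministic state machine applied to the same ordered log of commands produces the same state on every replica. The names (command, kvStateMachine, Apply) are illustrative only and not taken from any particular system.

```go
// A minimal sketch of state machine replication: every replica applies the
// same ordered log of deterministic commands, so all replicas converge to
// the same state. Illustrative names, not from any real system.
package main

import "fmt"

type command struct {
	key, value string
}

type kvStateMachine struct {
	store map[string]string
}

// Apply is deterministic: the same command always has the same effect.
func (m *kvStateMachine) Apply(c command) {
	m.store[c.key] = c.value
}

func main() {
	log := []command{{"x", "1"}, {"y", "2"}, {"x", "3"}}

	// Two replicas applying the identical log in the identical order
	// end up with identical state.
	a := &kvStateMachine{store: map[string]string{}}
	b := &kvStateMachine{store: map[string]string{}}
	for _, c := range log {
		a.Apply(c)
		b.Apply(c)
	}
	fmt.Println(a.store, b.store) // map[x:3 y:2] map[x:3 y:2]
}
```

The role of a consensus algorithm is precisely to make every replica see the same ordered log, even in the presence of failures.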

2.2 - Paxos

One of the most renowned state machine replication algorithms is Paxos, proposed by Lamport 15 in 1998. The Paxos family of algorithms has been extensively modified and has long been the mainstream choice. Numerous revisions to the original Paxos have been proposed to address its limitations and improve performance. One notable revision is Multi-Paxos 16, which extends basic Paxos to agree on a sequence of values rather than a single one, reducing overhead. Another is Fast Paxos 17, which reduces latency by allowing acceptors to accept values in a single round, without a prepare phase.

3 - Raft In-depth

3.1 - The Invention of Raft

Although Paxos has been formally proved safe, an important issue with it is that it is notoriously difficult to understand 18. Many studies have tried to simplify and explain Paxos in a more accessible format 19 20 21. The invention of Raft opened another window in the field of consensus algorithms. Raft 22 was designed by Ongaro and Ousterhout in 2014 to serve as an understandable alternative to Paxos and has gained significant popularity since.

3.2 - How Raft Works

Raft breaks the consensus problem down into two sub-problems: leader election and log replication. Leader election elects a node as the leader, which is responsible for handling client requests, distributing log entries to other nodes, and deciding when to commit a log entry; an entry is committed once it has been replicated on a majority of nodes, and only committed entries are applied to the state machine. Log replication describes how the leader supervises the distribution of log entries and enforces consistency within the cluster. Raft is, in essence, a state machine in which every node transitions between three states: Follower, Candidate, and Leader.
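The sketch below captures these three states and the main transitions between them as described in the following subsections. The trigger functions (electionTimeout, wonElection, sawHigherTerm) are illustrative names, not taken from any particular implementation.

```go
// A sketch of Raft's three node states and the transitions between them.
// The transition triggers are simplified; names are illustrative only.
package main

import "fmt"

type State int

const (
	Follower State = iota
	Candidate
	Leader
)

type node struct {
	state State
	term  int
}

// electionTimeout: a node that hears no valid leader for too long starts
// a new election for a higher term.
func (n *node) electionTimeout() {
	n.state = Candidate
	n.term++
}

// wonElection: a candidate that gathers votes from a majority becomes leader.
func (n *node) wonElection() {
	if n.state == Candidate {
		n.state = Leader
	}
}

// sawHigherTerm: any node that observes a higher term steps down to follower.
func (n *node) sawHigherTerm(term int) {
	if term > n.term {
		n.term = term
		n.state = Follower
	}
}

func main() {
	n := &node{state: Follower, term: 0}
	n.electionTimeout()
	n.wonElection()
	fmt.Println(n.state == Leader, n.term) // true 1
}
```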

3.2.1 - Follower

Nodes always start as followers. A follower has two tasks: passively follow the leader's coordination, and start a new election to replace the current leader once it appears to be down. The follower accepts new entries by receiving the leader's AppendEntries RPC, which contains the latest log entries (possibly empty if there are none) and other relevant indicators. The follower uses this information to check whether there are new entries to append, or whether its own entries are out of sync. The AppendEntries RPC also acts as a heartbeat: a follower resets its election timer whenever it hears a valid heartbeat, and transitions to a candidate if the timer expires.
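The sketch below shows a follower's heartbeat timer: each heartbeat resets a randomized election timeout, and if the timer fires first the follower would become a candidate. The specific durations (a 150–300 ms timeout and 50 ms heartbeats) are assumptions for illustration, loosely following the ranges suggested in the Raft paper.

```go
// A sketch of a follower's election timer: heartbeats reset a randomized
// timeout; if no heartbeat arrives in time, the follower starts an election.
// Durations and channel plumbing are illustrative assumptions.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// randomizedTimeout picks a timeout in [150ms, 300ms) so that followers
// rarely time out at exactly the same moment.
func randomizedTimeout() time.Duration {
	return 150*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
}

func main() {
	heartbeats := make(chan struct{})

	// Simulated leader: sends three heartbeats 50ms apart, then stops.
	go func() {
		for i := 0; i < 3; i++ {
			time.Sleep(50 * time.Millisecond)
			heartbeats <- struct{}{}
		}
	}()

	timeout := randomizedTimeout()
	for {
		select {
		case <-heartbeats:
			// Valid heartbeat: stay follower and restart the timeout.
			timeout = randomizedTimeout()
		case <-time.After(timeout):
			// No heartbeat in time: transition to candidate.
			fmt.Println("election timeout: becoming candidate")
			return
		}
	}
}
```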

3.2.2 - Candidate

When a follower times out, it transitions to the candidate state, increments its term, and sends a RequestVote RPC to every other node. A recipient responds positively or negatively based on the following criteria: the candidate's term number (which must not be smaller than the recipient's current term), whether the recipient has already voted for another candidate in this term, and whether the candidate's log is at least as up-to-date as the recipient's. A candidate promotes itself to leader once it gathers positive responses from a majority of nodes, and then sends AppendEntries RPCs to every other node to announce its leadership.
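The following sketch shows the vote-granting check a recipient could apply to a RequestVote RPC: reject stale terms, refuse to vote twice in the same term, and require the candidate's log to be at least as up-to-date (compared first by last log term, then by log length). Field and type names are illustrative, not from any particular implementation.

```go
// A sketch of the RequestVote handling criteria described above.
// Field names are illustrative only.
package main

import "fmt"

type requestVote struct {
	term         int // candidate's term
	candidateID  int
	lastLogIndex int // index of candidate's last log entry
	lastLogTerm  int // term of candidate's last log entry
}

type voter struct {
	currentTerm  int
	votedFor     int // -1 means no vote cast in currentTerm
	lastLogIndex int
	lastLogTerm  int
}

func (v *voter) grantVote(req requestVote) bool {
	if req.term < v.currentTerm {
		return false // stale candidate
	}
	if req.term > v.currentTerm {
		v.currentTerm = req.term
		v.votedFor = -1
	}
	if v.votedFor != -1 && v.votedFor != req.candidateID {
		return false // already voted for another candidate this term
	}
	// Up-to-date check: compare last log terms, then log lengths.
	upToDate := req.lastLogTerm > v.lastLogTerm ||
		(req.lastLogTerm == v.lastLogTerm && req.lastLogIndex >= v.lastLogIndex)
	if !upToDate {
		return false
	}
	v.votedFor = req.candidateID
	return true
}

func main() {
	v := &voter{currentTerm: 2, votedFor: -1, lastLogIndex: 5, lastLogTerm: 2}
	fmt.Println(v.grantVote(requestVote{term: 3, candidateID: 1, lastLogIndex: 5, lastLogTerm: 2})) // true
	fmt.Println(v.grantVote(requestVote{term: 3, candidateID: 2, lastLogIndex: 9, lastLogTerm: 2})) // false: already voted this term
}
```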

3.2.3 - Leader

The leader is responsible for handling client requests; if a non-leader node receives a client request, it redirects the client to the leader. The leader also orchestrates log replication. When the leader receives a new client request, it appends the corresponding entry to its log and distributes it to all followers. Once a majority of nodes hold an entry, the leader commits it and tells the others that they may commit up to that entry.
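A simplified sketch of the commit rule follows: the leader tracks how far each follower has replicated (a matchIndex-style array) and commits up to the highest index held by a majority. The additional restriction from the Raft paper that the committed entry must belong to the leader's current term is omitted here for brevity; names are illustrative.

```go
// A sketch of commit-index advancement: an entry is committed once it is
// replicated on a majority of nodes, counting the leader itself.
// The current-term restriction from the Raft paper is omitted.
package main

import (
	"fmt"
	"sort"
)

// advanceCommitIndex returns the highest log index replicated on a majority,
// given the leader's own last index and each follower's replicated index.
func advanceCommitIndex(leaderLastIndex int, matchIndex []int) int {
	indexes := append([]int{leaderLastIndex}, matchIndex...)
	sort.Sort(sort.Reverse(sort.IntSlice(indexes)))
	// In descending order, the value at position len/2 (0-based) is held
	// by at least a majority of the nodes.
	return indexes[len(indexes)/2]
}

func main() {
	// Cluster of 5: the leader holds entries up to index 7; the four
	// followers have replicated up to indexes 7, 5, 3 and 2.
	fmt.Println(advanceCommitIndex(7, []int{7, 5, 3, 2})) // 5
}
```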

3.3 - Raft Revisions

Although Raft is a well-designed algorithm, its applicability is limited in environments where security and efficiency are critical.

Byzantine fault tolerance was not a design goal of Raft. In situations where mutual trust cannot be assumed, adversarial or faulty nodes can compromise an entire Raft cluster, either as followers or as leaders:

  • Problem 1: In Raft, any node can initiate an election, so a malicious node can easily starve the whole cluster by initiating new elections endlessly.
  • Problem 2: Only the leader interacts with clients. A Byzantine leader can forge client requests and deceive other nodes, or reply to clients with false information.
  • Problem 3: The current leader is demoted to a follower upon receiving a RequestVote RPC with a higher term number. Without a way for nodes to verify whether an election was initiated for a valid reason, malicious nodes can become leader by announcing inflated term numbers.

Efficiency is another area for improvement. As the number of nodes increases, the communication volume and the chance of a split vote during the election stage also increase. In the worst case, the cluster is never led by any leader because of constant re-elections. These problems may not seem serious with fewer nodes, but they bottleneck the algorithm's efficiency as the cluster scales up.

3.3.1 - Tangaroa: a Byzantine Fault Tolerant Raft

Designed by Copeland and Zhong 23, Tangaroa addresses the Byzantine fault tolerance problems described above by leveraging digital signatures, unique identifiers, and hashing. A BFT Raft cluster that can tolerate \(f\) Byzantine nodes must have at least \(3f+1\) nodes, where each node knows the public key of the client and of every other node, and the client likewise knows every node's public key. Nodes and clients always sign their messages with their private keys, and a recipient discards any message that does not carry a valid signature. Specifically, Tangaroa prevents Byzantine faults by strengthening Raft's two sub-components: leader election and log replication.
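The sketch below illustrates the message signing Tangaroa relies on, using Ed25519 as one possible signature scheme (an assumption for illustration; the scheme and message format are not prescribed by the description above). A recipient only accepts a message whose signature verifies against the sender's known public key.

```go
// A minimal sketch of signed messaging: the recipient discards any message
// whose signature does not verify against the sender's public key.
// Ed25519 and the message format are illustrative assumptions.
package main

import (
	"crypto/ed25519"
	"fmt"
)

func main() {
	// Each node/client owns a key pair; public keys are known to everyone.
	pub, priv, err := ed25519.GenerateKey(nil)
	if err != nil {
		panic(err)
	}

	msg := []byte("AppendEntries term=3 leader=2")
	sig := ed25519.Sign(priv, msg)

	// A recipient only accepts the message if the signature verifies.
	fmt.Println(ed25519.Verify(pub, msg, sig))                      // true
	fmt.Println(ed25519.Verify(pub, []byte("forged message"), sig)) // false
}
```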

The initiation of a leader election is identical to the original Raft algorithm: a follower starts an election, transitions to the candidate state, and sends out RequestVote RPCs. The modifications mainly concern the recipients of the RequestVote RPCs. After receiving a vote request, a follower does not end its current term immediately; it keeps listening to heartbeats from the current leader and only responds to the vote request after the leader has timed out. This mechanism, called Lazy Voting, prevents malicious nodes from starving the system by constantly starting new elections. Once the recipient confirms the leader's timeout, it responds to the vote request with its digital signature attached. After gathering enough votes, the newly elected leader includes the signed votes of its supporting peers in its heartbeat messages to convince followers of its authority. If such a leader is later found to be behaving suspiciously, the client can trigger a new election by broadcasting UpdateLeader RPCs to all nodes.

Extra steps are involved in log replication to prevent a malicious leader from forging client requests. The client sends requests with a digital signature, guaranteeing authenticity and integrity, and a unique identifier, preventing faulty nodes from duplicating existing entries. When the leader receives a request, it sends out signed heartbeats together with all votes granted to it during the election phase to convince followers of its authority. After receiving a heartbeat, a follower validates: (a) that the heartbeat comes from a valid leader (verified by the signed votes), and (b) that the new entry is valid (verified by the signed request). Followers only replicate entries that satisfy both checks. A technique called incremental hashing is used to verify new entries: every log entry carries a hash value computed from the entry itself and the hash value of the previous entry, so the incremental hash at index \(i\) recursively depends on the hash at index \(i-1\). As a result, the recipient of a new entry can detect whether the leader has any missing or faulty entries simply by comparing incremental hash values.
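The sketch below shows the incremental hashing idea: the hash at index \(i\) is computed over the hash at index \(i-1\) plus the entry at \(i\), so two nodes with identical log prefixes have identical hashes and any divergence is visible from the hash alone. SHA-256 and the entry encoding are assumed choices for illustration.

```go
// A sketch of incremental (chained) hashing over a log: hash[i] depends on
// hash[i-1] and entry i, so comparing hashes detects divergent logs.
// SHA-256 and the string encoding are illustrative assumptions.
package main

import (
	"crypto/sha256"
	"fmt"
)

// chainHashes returns the incremental hash at each log index.
func chainHashes(entries []string) [][32]byte {
	hashes := make([][32]byte, len(entries))
	prev := [32]byte{} // zero hash before the first entry
	for i, e := range entries {
		hashes[i] = sha256.Sum256(append(prev[:], []byte(e)...))
		prev = hashes[i]
	}
	return hashes
}

func main() {
	a := chainHashes([]string{"set x=1", "set y=2", "set x=3"})
	b := chainHashes([]string{"set x=1", "set y=2", "set x=9"}) // diverges at index 2

	// Comparing hashes at an index is enough to detect the divergence.
	fmt.Println(a[1] == b[1]) // true: identical prefix
	fmt.Println(a[2] == b[2]) // false: logs differ from index 2 onward
}
```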

3.3.2 - Improving Raft’s efficiency

A number of improvements to Raft's efficiency have been proposed; this section provides a high-level overview of how some designs tackle the problem.

Kim et al. 24 aim to address the network split problem in private blockchains with unstable network conditions (during an election the cluster has no leader and is effectively unavailable, which is sometimes referred to as a network split), a situation that significantly decreases the blockchain's throughput and performance. The paper proposes a method that uses federated learning to improve the leader election mechanism: network stability is taken into account when electing leaders, minimizing the chance of network splits.

Wang et al. 25 proposed a K-bucket-based Raft-like algorithm to maximize the efficiency of the leader election and consensus process, which degrades as the number of nodes increases. The proposed algorithm, KRaft, integrates the Kademlia protocol and its K-buckets into Raft. Kademlia uses an XOR metric to measure the logical distance between any two nodes, and every node maintains a K-bucket routing table containing information about other nodes at varying distances.
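For reference, the XOR metric itself is very simple, as the sketch below shows: the distance between two node IDs is their bitwise XOR interpreted as an integer, and that distance determines which K-bucket a peer falls into. The 64-bit IDs are an assumption for brevity (Kademlia normally uses 160-bit IDs).

```go
// A sketch of Kademlia's XOR metric: the logical distance between two node
// IDs is their XOR, read as an unsigned integer. 64-bit IDs are assumed
// here for brevity.
package main

import "fmt"

// xorDistance returns the Kademlia-style distance between two node IDs.
func xorDistance(a, b uint64) uint64 {
	return a ^ b
}

func main() {
	self := uint64(0b1011)
	peers := []uint64{0b1001, 0b0011, 0b1111}
	for _, p := range peers {
		// A smaller XOR distance means a longer shared ID prefix, which
		// determines the K-bucket the peer belongs to.
		fmt.Printf("peer %04b distance %d\n", p, xorDistance(self, p))
	}
}
```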

Lastly, Wang et al. 26 proposed a Raft variant designed specifically for highly adversarial environments with strict real-time requirements, taking both Byzantine fault tolerance and efficiency into account. The algorithm, hhRaft, tackles security problems by introducing monitor nodes that supervise the election and log replication processes; the monitor nodes also help reduce the likelihood of network splits and election delays.

4 - Paxos and Raft: A Comparative Analysis

In this section, we compare various aspects of Paxos and Raft. Both algorithms have many revisions; we mainly compare Paxos' most popular variant, Multi-Paxos, with the original Raft, and we follow the definitions used in a comparative study 27.

4.1 - Leader Election and Understandability

One of the most significant understandability differences between Raft and the Paxos family is how elections are conducted 27. In a state machine replication algorithm, multiple nodes may compete to become the leader at the same time. It is important to guarantee Election Safety: at most one candidate can win the election in a given term. It is acceptable that no candidate wins an election due to a split vote or other issues, with the decision pushed to subsequent elections; however, such repeated elections should be as rare as possible to minimize downtime.

In Paxos, leader election is less straightforward. To prevent a split vote, the candidate with the highest term number becomes the leader, and a server \(s\) in a cluster of \(n\) nodes can only stand as a candidate in term \(t\) if \(t \bmod n = s\). Therefore, there can only be one candidate per term, which ensures election safety. However, Paxos does not require the new leader to hold all up-to-date log entries when it starts the election. Instead, each node includes its log entries in its reply to the election messages, allowing the new leader to bring its log up to date on the fly.
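The round-robin candidacy rule above is small enough to sketch directly: in a cluster of \(n\) servers, server \(s\) may only campaign in terms \(t\) where \(t \bmod n = s\), so at most one server campaigns per term. The function and constant names are illustrative.

```go
// A sketch of the round-robin candidacy rule: at most one server may
// campaign in any given term.
package main

import "fmt"

func canBeCandidate(server, term, clusterSize int) bool {
	return term%clusterSize == server
}

func main() {
	const n = 3
	for term := 0; term < 6; term++ {
		for s := 0; s < n; s++ {
			if canBeCandidate(s, term, n) {
				fmt.Printf("term %d: server %d may campaign\n", term, s)
			}
		}
	}
}
```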

In Raft, leader election is handled in a more understandable way. It uses a majority voting strategy to prevent a term from having more than one leader. Compared to Paxos, this strategy is more straightforward but can suffer from repeated split votes, causing elections to repeat over and over. Therefore, Raft has each node use a randomized timeout value, making it unlikely that multiple nodes start an election simultaneously. Although this approach seems unstable intuitively, it has proven surprisingly efficient 22 27. Another notable design choice is that Raft enforces strong leadership and only allows a candidate with an up-to-date log to become the leader; the leader then re-transmits any missing entries to its followers. This simplifies the message flow during the election stage.

4.2 - Correctness and Fault-Tolerance

Raft and Paxos are similar in the assumptions they make and the environments they target. Both assume an asynchronous system with non-Byzantine (crash) failures. Both guarantee safety as long as a majority of nodes are available, i.e. they tolerate the failure of fewer than half of the nodes. Both also give up some degree of liveness to circumvent the FLP impossibility result.

4.3 - Performance

The authors of Raft stated that Raft has approximately the same efficiency as (Multi-)Paxos. One study 28 showed that Raft and Paxos clusters take roughly the same amount of time to establish consensus under various cluster setups. Another study 29 demonstrated that Raft exhibits relatively stable consensus times across parameter changes, while Paxos is more sensitive to changes in network latency, which increase its consensus time. However, it is worth noting that only a few studies focus on comparing the performance of Paxos and Raft, and the two examples mentioned here may not be conclusive; this could be a future research direction for consensus algorithms.

5 - Conclusion

In this paper, we first introduced the foundations of consensus: the Byzantine Generals Problem and the FLP impossibility result. We then discussed two state machine replication algorithms, Paxos and Raft. We examined Raft in depth, covering how it works, how it can be improved in terms of security and performance, and how it differs from Paxos in leader election.

References


  1. Bhardwaj, Rashmi, and Debabrata Datta. 2020. “Consensus Algorithm.” In, 91–107. https://doi.org/10.1007/978-3-030-38677-1_5↩︎

  2. Pease, M., R. Shostak, and L. Lamport. 1980. “Reaching Agreement in the Presence of Faults.” J. ACM 27 (2): 228–34. https://doi.org/10.1145/322186.322188↩︎

  3. Lamport, Leslie, Robert Shostak, and Marshall Pease. 1982. “The Byzantine Generals Problem.” ACM Trans. Program. Lang. Syst. 4 (3): 382–401. https://doi.org/10.1145/357172.357176↩︎ ↩︎ ↩︎ ↩︎

  4. Coulouris, George, Jean Dollimore, Tim Kindberg, and Gordon Blair. 2011. Distributed Systems: Concepts and Design. 5th ed. USA: Addison-Wesley Publishing Company. ↩︎ ↩︎

  5. Considine, Jeffrey, Matthias Fitzi, Matthew Franklin, Leonid A. Levin, Ueli Maurer, and David Metcalf. 2005. “Byzantine Agreement Given Partial Broadcast.” Journal of Cryptology 18 (3): 191–217. ↩︎

  6. Lamport, L. 1983. “The Weak Byzantine Generals Problem.” J. ACM 30 (3): 668–76. https://doi.org/10.1145/2402.322398↩︎

  7. Dolev, Danny, Nancy A. Lynch, Shlomit S. Pinter, Eugene W. Stark, and William E. Weihl. 1986. “Reaching Approximate Agreement in the Presence of Faults.” J. ACM 33 (3): 499–516. https://doi.org/10.1145/5925.5931↩︎

  8. Fischer, Michael J., Nancy A. Lynch, and Michael S. Paterson. 1985. “Impossibility of Distributed Consensus with One Faulty Process.” J. ACM 32 (2): 374–82. https://doi.org/10.1145/3149.214121↩︎

  9. Howard, Heidi. 2019. “Distributed Consensus Revised.” https://doi.org/10.17863/CAM.38840↩︎

  10. Dolev, Danny, Cynthia Dwork, and Larry Stockmeyer. 1987. “On the Minimal Synchronism Needed for Distributed Consensus.” J. ACM 34 (1): 77–97. https://doi.org/10.1145/7531.7533↩︎ ↩︎ ↩︎ ↩︎

  11. Dwork, Cynthia, Nancy Lynch, and Larry Stockmeyer. 1988. “Consensus in the Presence of Partial Synchrony.” J. ACM 35 (2): 288–323. https://doi.org/10.1145/42282.42283↩︎ ↩︎ ↩︎

  12. Chandra, Tushar Deepak, and Sam Toueg. 1996. “Unreliable Failure Detectors for Reliable Distributed Systems.” J. ACM 43 (2): 225–67. https://doi.org/10.1145/226643.226647↩︎

  13. Castro, Miguel, and Barbara Liskov. 1999. “Practical Byzantine Fault Tolerance.” In Proceedings of the Third Symposium on Operating Systems Design and Implementation, 173–86. OSDI ‘99. USA: USENIX Association. ↩︎

  14. Lamport, Leslie. 1978. “Time, Clocks, and the Ordering of Events in a Distributed System.” Commun. ACM 21 (7): 558–65. https://doi.org/10.1145/359545.359563↩︎

  15. Lamport, Leslie. 1998. “The Part-Time Parliament.” ACM Trans. Comput. Syst. 16 (2): 133–69. https://doi.org/10.1145/279227.279229↩︎

  16. Du, Hao, and David J. St. Hilaire. 2009. “Multi-Paxos : An Implementation and Evaluation.” In. https://api.semanticscholar.org/CorpusID:13002642↩︎

  17. Lamport, Leslie. 2006. “Fast Paxos.” Distributed Computing 19: 79–103. ↩︎

  18. Van Renesse, Robbert, and Deniz Altinbuken. 2015. “Paxos Made Moderately Complex.” ACM Comput. Surv. 47 (3). https://doi.org/10.1145/2673577↩︎

  19. Lamport, Leslie. 2001. “Paxos Made Simple.” ACM SIGACT News 32 (4): 51–58. ↩︎

  20. Boichat, Romain, Partha Dutta, Svend Frølund, and Rachid Guerraoui. 2003. “Deconstructing Paxos.” SIGACT News 34 (1): 47–67. https://doi.org/10.1145/637437.637447↩︎

  21. Lampson, Butler. 2001. “The ABCD’s of Paxos.” In Proceedings of the Twentieth Annual ACM Symposium on Principles of Distributed Computing, 13. PODC ‘01. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/383962.383969↩︎

  22. Ongaro, Diego, and John Ousterhout. 2014. “In Search of an Understandable Consensus Algorithm.” In Proceedings of the 2014 USENIX Annual Technical Conference, 305–20. USENIX ATC ‘14. USA: USENIX Association. ↩︎ ↩︎

  23. Copeland, Christopher, and Hongxia Zhong. 2016. “Tangaroa: A Byzantine Fault Tolerant Raft.” Tech. rep. ↩︎

  24. Kim, Donghee, Inshil Doh, and Kijoon Chae. 2021. “Improved Raft Algorithm Exploiting Federated Learning for Private Blockchain Performance Enhancement.” In 2021 International Conference on Information Networking (ICOIN), 828–32. https://doi.org/10.1109/ICOIN50884.2021.9333932↩︎

  25. Wang, Rihong, Lifeng Zhang, Quanqing Xu, and Hang Zhou. 2019. “K-Bucket Based Raft-Like Consensus Algorithm for Permissioned Blockchain.” In 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), 996–99. https://doi.org/10.1109/ICPADS47876.2019.00152↩︎

  26. Wang, Yuchen, Shuang Li, Lei Xu, and Lizhen Xu. 2021. “Improved Raft Consensus Algorithm in High Real-Time and Highly Adversarial Environment.” In Web Information Systems and Applications, edited by Chunxiao Xing, Xiaoming Fu, Yong Zhang, Guigang Zhang, and Chaolemen Borjigin, 718–26. Cham: Springer International Publishing. ↩︎

  27. Howard, Heidi, and Richard Mortier. 2020. “Paxos vs Raft: Have We Reached Consensus on Distributed Consensus?” In Proceedings of the 7th Workshop on Principles and Practice of Consistency for Distributed Data. PaPoC ‘20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3380787.3393681↩︎ ↩︎ ↩︎

  28. Van Dame, Kelsi Rado, Thomas Bronson Bergmann, Mohamed Aichouri, and Maria Pantoja. 2022. “A Comparative Study of Consensus Algorithms for Distributed Systems.” In High Performance Computing, edited by Isidoro Gitler, Carlos Jaime Barrios Hernández, and Esteban Meneses, 120–30. Cham: Springer International Publishing. ↩︎

  29. Hidayat, Siswandi Agung, Wahyu Juniardi, Ali Akbar Khatami, and Riri Fitri Sari. 2022. “Performance Comparison and Analysis of Paxos, Raft and PBFT Using NS3.” In 2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), 304–10. https://doi.org/10.1109/IoTaIS56727.2022.9975938↩︎