Paper Review: RingBFT: Resilient Consensus over Sharded Ring Topology
This post is a summary of the RingBFT paper.
Blockchain sharding systems handle two types of transactions, classified by how many shards a transaction touches: single-shard and cross-shard. A single-shard transaction modifies the state of replicas in only one shard (a group of replicas, typically in the same geographical location). A cross-shard transaction modifies the state of replicas in more than one shard, i.e., in groups of replicas in different geographical locations.
The figure below demonstrates a blockchain with no sharding.
The figure below demonstrates a single-shard blockchain, where each transaction is handled by a single shard.
The figure below demonstrates a cross-shard blockchain, where a transaction is handled by more than one shard.
As you can see, the designs differ, and these differences affect system performance. Single-shard systems are known to perform well and have a large body of research behind them.
Cross-shard systems remain an open challenge because of the communication required between replicas in different shards. The RingBFT paper proposes a solution for cross-shard transactions that improves system performance.
RingBFT is built on a ring topology and assumes that the PBFT protocol runs inside each shard. Each shard has an identifier (id). The client knows which shards its request will modify; for example, a request C may modify shards A, B, and D.
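As a minimal sketch of the routing rule (assuming shard ids are comparable integers; `initiator_shard` is a hypothetical name, not the paper's API), a cross-shard request is first submitted to the involved shard with the lowest id:

```python
# Hypothetical sketch: a cross-shard request is submitted to the involved
# shard with the lowest id, which acts as the initiator in this post's example.
def initiator_shard(involved_shard_ids):
    """Return the id of the shard that first processes the request."""
    return min(involved_shard_ids)

# A request touching shards A=1, B=2, D=4 is first submitted to shard 1 (A).
print(initiator_shard([1, 2, 4]))  # -> 1
```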
In the single-shard case, the client submits request C to shard A. On the normal path, the primary receives the request, runs PBFT with all replicas in shard A, and finally replies to the client. Done!
In the failure case for a single shard, the client sends request C to the primary of shard A and starts a timeout, but the timeout expires before the client gets a reply. The client then broadcasts C to all replicas. Upon receiving the request from the client, each replica forwards C to the primary. If the replicas determine that the primary is Byzantine, they trigger the view-change protocol to replace the primary and commit C.
In the cross-shard case, where request C modifies shards A, B, and D, the client submits C to shard A, which has the lowest id. On the normal path, the primary receives the request and runs PBFT with all replicas in shard A. Then each replica in A forwards the result to the next shard (B). Note that replicas carry matching ids across shards, for instance Shard A (1, 2, 3) and Shard B (1, 2, 3), and a replica in A communicates with the replica in B that has the same id (A_id == B_id).
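A minimal sketch of this id-matched forwarding (the function name is mine, not the paper's): each replica forwards only to its same-id peer in the next shard, so every replica has exactly one forwarding partner per neighboring shard.

```python
# Hypothetical sketch: replica i in shard A forwards only to the replica
# with the same id in the next shard (A_id == B_id).
def forward_target(replica_id, next_shard_replica_ids):
    """Return the same-id peer in the next shard, if it exists."""
    return replica_id if replica_id in next_shard_replica_ids else None

print(forward_target(2, [1, 2, 3]))  # -> 2 (replica 2 of A talks to replica 2 of B)
```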
Upon receiving the forwarded messages in shard B, each replica of B broadcasts them to all replicas of B. After receiving the first forwarded message, each replica starts a timeout for a remote view change. If a replica receives f+1 matching forwarded messages, it stops the remote view-change timeout. Otherwise, each replica of B complains to the replica of A with the same id. If the replicas in A receive f+1 such messages from replicas in B, they run a local view change and forward the messages again.
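The f+1 threshold above can be sketched as follows (a simplification under the standard assumption of at most f Byzantine replicas per shard; the names are mine): a replica stops its remote view-change timer once it holds matching forwarded messages from f+1 distinct senders, because at least one of them must be non-faulty.

```python
# Hypothetical sketch of the f+1 threshold in the receiving shard.
def enough_forwards(sender_ids, f):
    """True once matching messages from at least f+1 distinct senders arrived."""
    return len(set(sender_ids)) >= f + 1

f = 1  # per-shard fault bound, with n = 3f + 1 replicas per shard
print(enough_forwards([0, 2], f))  # -> True: two distinct senders
print(enough_forwards([0, 0], f))  # -> False: duplicates from one sender
```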
Finally, the last shard, D, forwards the messages back to A, which is considered the primary shard for the client's transaction, and A replies to the client. If each shard's execution depends on the previous shard, another rotation around the ring is needed to carry the dependencies and execute locally.
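The ring traversal described above can be sketched as a visit order (assuming, as in this post's example, that involved shards are visited in increasing id order; `ring_order` is a hypothetical name): the last involved shard forwards back to the lowest-id initiator, closing the ring.

```python
# Hypothetical sketch: visit involved shards in increasing id order, then
# return to the lowest-id (initiator) shard, which replies to the client.
def ring_order(involved_shard_ids):
    ring = sorted(involved_shard_ids)
    return ring + [ring[0]]  # closing hop returns to the initiator

print(ring_order([1, 2, 4]))  # -> [1, 2, 4, 1]
```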
To illustrate the timing assumptions: there is a local timer, a transmit timer, and a remote timer. Each timer handles a case that ensures the liveness (continuation) of the protocol. The local timer is for replacing the primary within a single shard. The transmit timer ensures that the next shard receives the messages forwarded by the previous shard. The remote timer ensures that enough nodes in the previous shard forwarded the messages to the next shard.
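A compact way to summarize the three timers is to map each one to its expiry action (the action strings paraphrase this post; the names are not the paper's API):

```python
# Hypothetical sketch of the three RingBFT timers described above and the
# recovery action each triggers when it expires.
TIMER_ON_EXPIRY = {
    "local":    "replace this shard's primary via local view change",
    "transmit": "retransmit the forwarded messages to the next shard",
    "remote":   "complain to the previous shard so it view-changes and re-forwards",
}

def on_expiry(timer_name):
    return TIMER_ON_EXPIRY[timer_name]

print(on_expiry("transmit"))
```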
For protocol correctness, I will mention only the key points to keep things simple. Proposition 6.1 says that no two non-faulty replicas in a shard commit different transactions at position k, under the assumption n > 3f. A deadlock would mean two shards waiting on each other; RingBFT guarantees that for two shards S and U and two conflicting transactions T1 and T2, T1 and T2 will not be ordered differently across S and U. For safety, RingBFT guarantees the same order both inside a shard and across shards. Liveness is guaranteed under the timing assumptions mentioned earlier in this post. Note that every shard has n = 3f + 1 replicas, where f is the maximum number of Byzantine replicas.
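The per-shard sizing is the standard BFT arithmetic: n = 3f + 1 replicas tolerate f Byzantine ones, and f + 1 matching messages guarantee that at least one came from a non-faulty replica. A quick check:

```python
# Standard BFT sizing used per shard in this post: n = 3f + 1.
def shard_size(f):
    """Minimum replicas needed to tolerate f Byzantine replicas."""
    return 3 * f + 1

print(shard_size(1))  # -> 4: a shard of 4 replicas tolerates 1 fault
print(shard_size(2))  # -> 7: a shard of 7 replicas tolerates 2 faults
```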
RingBFT was evaluated on the RESILIENTDB framework, deployed across fifteen regions on five continents. In the figure below, the parameters are: up to 15 shards, up to 28 replicas per shard, cross-shard transaction percentages up to 100%, batch size up to 5k, and up to 20k clients.