This repo provides a minimal implementation of the s2PAC consensus protocol. The codebase is designed to be small, efficient, and easy to benchmark and modify. It is not meant to run in production, but it uses real cryptography (kyber), networking (native), and storage (nutsdb).
s2PAC is a consensus protocol; this repo implements and benchmarks three variants of it: s2PAC_Lean, s2PAC_Big, and s2PAC-Big-DAG.
Wukong is written in Golang, but all benchmarking scripts are written in Python and run with Fabric. To deploy and benchmark a testbed of 4 nodes on your local machine, clone the repo and install the Python dependencies:
git clone https://github.com/ac-dcz/s2pac
cd s2pac/benchmark
pip install -r requirements.txt
You also need to install tmux (which runs all nodes and clients in the background). Finally, run a local benchmark using Fabric:
fab local
This command may take a long time the first time you run it (compiling the Golang code may be slow), and you can customize a number of benchmark parameters in fabfile.py. When the benchmark terminates, it displays a summary of the execution similar to the ones below.
- s2PAC_Lean
-----------------------------------------
SUMMARY:
-----------------------------------------
+ CONFIG:
Protocol: 2pac_lean
DDOS attack: False
Committee size: 4 nodes
Input rate: 3,200 tx/s
Transaction size: 250 B
Batch size: 200 tx/Batch
Faults: 0 nodes
Execution time: 30 s
+ RESULTS:
Consensus TPS: 3,062 tx/s
Consensus latency: 167 ms
End-to-end TPS: 3,057 tx/s
End-to-end latency: 221 ms
-----------------------------------------
- s2PAC_Big
-----------------------------------------
SUMMARY:
-----------------------------------------
+ CONFIG:
Protocol: 2pac_big
DDOS attack: False
Committee size: 4 nodes
Input rate: 3,200 tx/s
Transaction size: 250 B
Batch size: 200 tx/Batch
Faults: 0 nodes
Execution time: 30 s
+ RESULTS:
Consensus TPS: 3,179 tx/s
Consensus latency: 155 ms
End-to-end TPS: 3,175 tx/s
End-to-end latency: 207 ms
-----------------------------------------
- s2PAC-Big-DAG
-----------------------------------------
SUMMARY:
-----------------------------------------
+ CONFIG:
Protocol: 2pac_big_dag
DDOS attack: False
Committee size: 4 nodes
Input rate: 3,200 tx/s
Transaction size: 250 B
Batch size: 200 tx/Batch
Faults: 0 nodes
Execution time: 30 s
+ RESULTS:
Consensus TPS: 11,458 tx/s
Consensus latency: 233 ms
End-to-end TPS: 11,444 tx/s
End-to-end latency: 283 ms
-----------------------------------------
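As noted above, the benchmark parameters live in fabfile.py. As a rough sketch, a parameter block for the local task might look like the following, with field names mirroring the CONFIG sections of the summaries above (the bench_params dict and its exact keys are assumptions, not the repo's actual code):

from fabric import task

@task
def local(ctx):
    # Illustrative parameter block; the field names mirror the CONFIG
    # section printed in the benchmark summaries above.
    bench_params = {
        'protocol': '2pac_lean',  # or '2pac_big', '2pac_big_dag'
        'ddos': False,            # whether to simulate a DDOS attack
        'nodes': 4,               # committee size
        'rate': 3_200,            # input rate (tx/s)
        'tx_size': 250,           # transaction size (bytes)
        'batch_size': 200,        # transactions per batch
        'faults': 0,              # number of faulty nodes
        'duration': 30,           # execution time (seconds)
    }
    ...

Adjust these values and re-run fab local to explore other configurations.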
The following steps explain how to run benchmarks on Alibaba Cloud across multiple data centers (WAN).
1. Set up your Aliyun credentials
Set up your Aliyun credentials to enable programmatic access to your account from your local machine. These credentials will authorize your machine to create, delete, and edit instances on your Aliyun account. First, find your AccessKey ID and AccessKey Secret. Then create a file ~/.aliyun/access.json with the following content:
{
    "AccessKey ID": "your accessKey ID",
    "AccessKey Secret": "your accessKey Secret"
}
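The benchmark scripts can then read this file whenever they need to talk to Aliyun. A minimal sketch of such a loader (the load_aliyun_credentials helper is illustrative and not part of the repo):

import json
from pathlib import Path

def load_aliyun_credentials(path="~/.aliyun/access.json"):
    # Read the AccessKey pair created above; the key names match the JSON example.
    with open(Path(path).expanduser()) as f:
        creds = json.load(f)
    return creds["AccessKey ID"], creds["AccessKey Secret"]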
2. Add your SSH public key to your Aliyun account
You must now add your SSH public key to your Aliyun account. This operation is manual and needs to be repeated for each Aliyun region that you plan to use. Upon importing your key, Aliyun requires you to choose a 'name' for it; ensure you set the same name in all Aliyun regions. This SSH key will be used by the Python scripts to execute commands and upload/download files to your Aliyun instances. If you don't have an SSH key, you can create one using ssh-keygen:
ssh-keygen -f ~/.ssh/Aliyun
3. Configure the testbed
The file settings.json located in s2pac/benchmark contains all the configuration parameters of the testbed to deploy. Its content looks as follows:
{
    "key": {
        "name": "wuKong",
        "path": "/root/.ssh/id_rsa",
        "accesskey": "/root/.aliyun/access.json"
    },
    "ports": {
        "consensus": 8000
    },
    "instances": {
        "type": "ecs.g6e.xlarge",
        "regions": [
            "eu-central-1",
            "ap-northeast-2",
            "ap-southeast-1",
            "us-east-1"
        ]
    }
}
The first block (key) contains information regarding your SSH key and Access Key:
"key": {
    "name": "wuKong",
    "path": "/root/.ssh/id_rsa",
    "accesskey": "/root/.aliyun/access.json"
}
The second block (ports) specifies the TCP ports to use:
"ports": {
    "consensus": 8000
}
The last block (instances) specifies the Aliyun instance type and the Aliyun regions to use:
"instances": {
    "type": "ecs.g6e.xlarge",
    "regions": [
        "eu-central-1",
        "ap-northeast-2",
        "ap-southeast-1",
        "us-east-1"
    ]
}
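For reference, here is a hedged sketch of how these three blocks could be parsed into a flat settings object (the Settings dataclass and load_settings helper below are illustrative; the repo's actual loader may differ):

import json
from dataclasses import dataclass

@dataclass
class Settings:
    key_name: str
    key_path: str
    access_key: str
    consensus_port: int
    instance_type: str
    regions: list

def load_settings(path="settings.json"):
    # Map the JSON blocks (key, ports, instances) onto a flat settings object.
    with open(path) as f:
        data = json.load(f)
    return Settings(
        key_name=data["key"]["name"],
        key_path=data["key"]["path"],
        access_key=data["key"]["accesskey"],
        consensus_port=data["ports"]["consensus"],
        instance_type=data["instances"]["type"],
        regions=data["instances"]["regions"],
    )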
4. Create a testbed
The Aliyun instances are orchestrated with Fabric from the file fabfile.py (located in s2pac/benchmark); you can list all available commands with fab --list. The command fab create creates new Aliyun instances; open fabfile.py and locate the create task:
@task
def create(ctx, nodes=2):
    ...
The parameter nodes determines how many instances to create in each Aliyun region. That is, if you specified 4 Aliyun regions as in the example of step 3, setting nodes=2 creates a total of 8 machines:
$ fab create
Creating 8 instances |██████████████████████████████| 100.0%
Waiting for all instances to boot...
Successfully created 8 new instances
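The total is simply the per-region count multiplied by the number of configured regions, as a quick illustration with the regions from step 3 shows:

# nodes=2 instances in each of the 4 regions from step 3 -> 8 machines in total
regions = ["eu-central-1", "ap-northeast-2", "ap-southeast-1", "us-east-1"]
nodes_per_region = 2
print(f"Creating {nodes_per_region * len(regions)} instances")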
You can then install Golang on the remote instances with fab install:
$ fab install
Installing rust and cloning the repo...
Initialized testbed of 10 nodes
Next, you should upload the executable file to the remote instances:
$ fab uploadexec
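A minimal sketch of what such an upload step could look like with Fabric's Connection API (the binary name, remote path, user, and host list are assumptions; the repo's actual uploadexec task may differ):

import os
from fabric import Connection

def upload_executable(hosts, key_path="~/.ssh/Aliyun", binary="./node"):
    # Copy the locally built binary to every remote instance and make it executable.
    key = os.path.expanduser(key_path)
    for host in hosts:
        c = Connection(host, user="root", connect_kwargs={"key_filename": key})
        c.put(binary, remote="/root/node")
        c.run("chmod +x /root/node")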
5. Run a benchmark
Once the testbed is set up and the executable uploaded, run a remote benchmark:
$ fab remote