Warning: This module relates to NATS Streaming (STAN), which has been deprecated and replaced by NATS JetStream. We do not yet have a benchmark driver for JetStream.
This folder houses the assets necessary to run benchmarks for NATS Streaming. In order to run these benchmarks, you'll need to:
- Create the necessary local artifacts
- Stand up a NATS cluster on Amazon Web Services (which includes a client host for running the benchmarks)
- SSH into the client host
- Run the benchmarks from the client host
In order to create the local artifacts necessary to run the NATS benchmarks in AWS, you'll need to have Maven installed. Once Maven's installed, you can clone the repository and build the necessary artifacts:
$ git clone https://github.com/openmessaging/benchmark.git
$ cd benchmark
$ mvn install
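If you want to speed up the build, Maven's standard `-DskipTests` flag (a general Maven option, not specific to this project) skips the test suite:
$ mvn install -DskipTests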
In order to create a NATS cluster on AWS, you'll need to have the following installed:
- Terraform
- The terraform-inventory plugin for Terraform
- Ansible
In addition, you will need to:
- Create an AWS account (or use an existing account)
- Install the `aws` CLI tool
- Configure the `aws` CLI tool (an example follows this list)
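The `aws` CLI is configured interactively; running `aws configure` prompts for your access key ID, secret access key, default region, and output format:
$ aws configure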
Once those conditions are in place, you'll need to create an SSH key pair at `~/.ssh/nats_streaming_aws` (private key) and `~/.ssh/nats_streaming_aws.pub` (public key), respectively:
$ ssh-keygen -f ~/.ssh/nats_streaming_aws
When prompted to enter a passphrase, simply hit Enter twice. Then, make sure that the keys have been created:
$ ls ~/.ssh/nats_streaming_aws*
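If the pair was created successfully, both files should be listed, along these lines (the home directory shown is a placeholder):
/home/you/.ssh/nats_streaming_aws  /home/you/.ssh/nats_streaming_aws.pub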
With SSH keys in place, you can create the necessary AWS resources using a single Terraform command:
$ cd driver-nats-streaming/deploy
$ terraform init
$ terraform apply
That will install the following EC2 instances (plus some other resources, such as a Virtual Private Cloud (VPC)):
| Resource | Description | Count |
|---|---|---|
| NATS instances | The VMs on which the NATS brokers will run | 3 |
| Client instances | The VMs from which the benchmarking suite itself will be run | 4 |
| Prometheus instance | The VM on which metrics services will be run | 1 |
When you run `terraform apply`, you will be prompted to type `yes`. Type `yes` to continue with the installation or anything else to quit.
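If you're scripting the deployment, Terraform's standard `-auto-approve` flag skips the confirmation prompt:
$ terraform apply -auto-approve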
Once the installation is complete, you will see a confirmation message listing the resources that have been installed.
There's a handful of configurable parameters related to the Terraform deployment that you can alter by modifying the defaults in the `terraform.tfvars` file (an example override follows the table below).
| Variable | Description | Default |
|---|---|---|
| `region` | The AWS region in which the NATS cluster will be deployed | `us-west-2` |
| `az` | The availability zone in which the NATS cluster will be deployed | `us-west-2a` |
| `public_key_path` | The path to the SSH public key that you've generated | `~/.ssh/rabbitmq_aws.pub` |
| `ami` | The Amazon Machine Image (AMI) to be used by the cluster's machines | `ami-9fa343e7` |
| `instance_types` | The EC2 instance types used by the various components | `i3.4xlarge` (NATS brokers), `c4.8xlarge` (benchmarking clients) |
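A minimal `terraform.tfvars` override might look like this (the values are illustrative, not recommendations):
region          = "us-east-1"
az              = "us-east-1a"
public_key_path = "~/.ssh/nats_streaming_aws.pub"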
If you modify the `public_key_path`, make sure that you point to the appropriate SSH key path when running the Ansible playbook.
With the appropriate infrastructure in place, you can install and start the NATS Streaming cluster using Ansible with just one command. Note that the `TF_STATE` environment variable must point to the directory in which the Terraform state file is located:
$ TF_STATE=. ansible-playbook \
--user ec2-user \
--inventory `which terraform-inventory` \
deploy.yaml
If you're using an SSH private key path different from `~/.ssh/nats_streaming_aws`, you can specify that path using the `--private-key` flag, for example `--private-key=~/.ssh/my_key`.
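Putting those flags together, the full invocation looks like this (the key path here is illustrative):
$ TF_STATE=. ansible-playbook \
  --user ec2-user \
  --inventory `which terraform-inventory` \
  --private-key=~/.ssh/my_key \
  deploy.yaml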
In the output produced by Terraform, there's a `client_ssh_host` variable that provides the IP address of the client EC2 host from which benchmarks can be run.
You can SSH into that host using this command:
$ ssh -i ~/.ssh/nats_streaming_aws ec2-user@$(terraform output client_ssh_host)
Once you've successfully SSHed into the client host, you can run all available benchmark workloads like this:
$ cd /opt/benchmark
$ sudo bin/benchmark --drivers driver-nats-streaming/nats-streaming.yaml workloads/*.yaml
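Full workload runs can take a long time; if you're working over a plain SSH session, standard shell tooling such as `nohup` will keep the run alive if your connection drops (general advice, not a project requirement):
$ sudo nohup bin/benchmark --drivers driver-nats-streaming/nats-streaming.yaml workloads/*.yaml > benchmark.log 2>&1 &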
You can also run specific workloads in the `workloads` folder. Here's an example:
$ sudo bin/benchmark --drivers driver-nats-streaming/nats-streaming.yaml workloads/1-topic-1-partitions-1kb.yaml
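Each run writes its results to a JSON file in the working directory. You can copy those files back to your local machine with standard `scp`, run from the `deploy` directory so that `terraform output` resolves:
$ scp -i ~/.ssh/nats_streaming_aws "ec2-user@$(terraform output client_ssh_host):/opt/benchmark/*.json" .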
The `prometheus-nats-exporter` service is installed, Prometheus runs on a standalone instance, and Node Exporter is installed on all brokers to allow the collection of system metrics.
Prometheus exposes a public endpoint at `http://${prometheus_host}:9090`.
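As a quick smoke test, you can query Prometheus's standard HTTP API (the `/api/v1/query` endpoint and the `up` metric are stock Prometheus, not project-specific):
$ curl "http://${prometheus_host}:9090/api/v1/query?query=up"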
Grafana and a set of standard dashboards are installed alongside Prometheus. These are exposed on a public endpoint at `http://${prometheus_host}:3000`. Credentials are `admin`/`admin`. Dashboards included: