FireFly Performance CLI is an HTTP load testing tool that generates a constant request rate against a FireFly network and measures performance. It is used to build confidence that FireFly can perform under normal conditions for an extended period of time.
- Broadcasts (`POST /messages/broadcasts`)
- Private Messaging (`POST /messages/private`)
- Mint Tokens (`POST /tokens/mint`)
  - Fungible vs. Non-Fungible Token Toggle
- Blobs
- Contract Invocation (`POST /contracts/invoke`)
  - Ethereum vs. Fabric
The `ffperf` CLI needs building before you can use it. Run `make install` in the root directory to build and install the `ffperf` command.
The test configuration is structured around running ffperf as either a single process or in a distributed fashion as
multiple processes.
The tool has two basic modes of operation:

- Run against a local FireFly stack
  - In this mode the `ffperf` tool loads information about the FireFly endpoint(s) to test by reading from a FireFly `stack.json` file on the local system. The location of the `stack.json` file is configured in the `instances.yaml` file by setting the `stackJSONPath` option.
- Run against a remote FireFly node
  - In this mode the `ffperf` tool connects to a FireFly instance running on a different system. Since there won't be a FireFly `stack.json` on the system where `ffperf` is running, the nodes to test must be configured in the `instances.yaml` file by setting the `nodes` option.
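As a sketch, the two modes correspond to two ways of filling in the same `instances.yaml`. The `stackJSONPath` and `nodes` option names come from this document; the fields under each node entry are illustrative assumptions, so check the bundled example files for the authoritative schema:

```yaml
# Local mode: point ffperf at the stack.json produced by the ff CLI
stackJSONPath: /home/user/.firefly/stacks/dev/stack.json

# Remote mode: there is no local stack.json, so describe the node(s) directly.
# The endpoint field name below is an assumption -- see the example files.
nodes:
  - name: remote-node-0
    apiEndpoint: https://firefly.example.com
```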
See the Getting Started guide for help running tests against a local stack.
In the test configuration you define one or more test instances for a single `ffperf` process to run. An instance then
describes running one or more test cases with a dedicated number of goroutine workers against a sender org and
a recipient org. The test configuration consumes a file reference to the stack JSON configuration produced by the
`ff` CLI (or can be defined manually) to understand the network topology, so that
senders and recipients are simply referenced by their indices within the stack.
As a result, running the CLI consists of providing an `instances.yaml` file describing the test configuration
and an instance index or name indicating which instance the process should run:

```
ffperf run -c /path/to/instances.yaml -i 0
```

See `example-instances.yaml` for examples of how to define multiple instances
and multiple test cases per instance with all the various options.
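A minimal local-stack instance might look like the following sketch. The `stackJSONPath` option and the sender/recipient index semantics come from this document; the remaining field names (`instances`, `tests`, `workers`, the `msg_broadcast` test name) are assumptions modeled on the bundled `example-instances.yaml`, which is the authoritative reference:

```yaml
stackJSONPath: /home/user/.firefly/stacks/dev/stack.json

instances:
  - name: broadcast-test   # run with "ffperf run -i 0" or "-n broadcast-test"
    sender: 0              # index of the sending org within stack.json
    recipient: 1           # index of the recipient org within stack.json
    length: 3h             # maximum duration of the test
    tests:
      - name: msg_broadcast   # test-case and worker field names are illustrative
        workers: 10
```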
See the Getting Started with Remote Nodes guide for help running tests against a remote FireFly node.
In the test configuration you define one or more test instances for a single ffperf process to run. An instance then
describes running one or more test cases with a dedicated number of goroutine workers. Instead of setting a sender org and
recipient org (because there is no local FireFly stack.json to read) the instance must be configured to use a Node that has
been defined in instances.yaml.
Currently the types of test that can be run against a remote node are limited to those that only invoke a single endpoint. This makes
it most suitable for the `token_mint`, `custom_ethereum_contract` and `custom_fabric_contract` test types, since these don't need
responses to be received from other members of the FireFly network.
To authenticate against a node endpoint, you can provide either of the following credentials in `instances.yaml` under each node entry:

- bearer token - set the access token as the `authToken` value
- basic auth - set the username and password as the `authUsername` and `authPassword` values

`authToken` takes precedence over the `authUsername` and `authPassword` values.
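The `authToken`, `authUsername` and `authPassword` options above could be set per node like this (the endpoint field name is an assumption; the credential values are placeholders):

```yaml
nodes:
  - name: remote-node-0
    apiEndpoint: https://firefly-a.example.com   # endpoint field name is illustrative
    authToken: "<bearer-token>"                  # takes precedence over basic auth
  - name: remote-node-1
    apiEndpoint: https://firefly-b.example.com
    authUsername: "perf-user"                    # basic auth, used when authToken is absent
    authPassword: "<password>"
```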
As a result, running the CLI consists of providing an `instances.yaml` file describing the test configuration
and an instance index or name indicating which instance the process should run:

```
ffperf run -c /path/to/instances.yaml -i 0
```

See `example-remote-node-instances-fungible.yaml` and `example-remote-node-instances-nonfungible.yaml` for examples of how to define nodes manually
and configure test instances to use them.
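Putting the pieces together, a remote-node test instance could be sketched as follows. The `nodes` option and the `token_mint` test type come from this document; the way an instance references a node and the test-case field names are assumptions, so treat the bundled remote-node example files as authoritative:

```yaml
nodes:
  - name: remote-node-0
    apiEndpoint: https://firefly.example.com   # endpoint field name is illustrative
    authToken: "<bearer-token>"

instances:
  - name: mint-test
    node: remote-node-0      # key name for referencing a node is an assumption
    maxActions: 10000
    tests:
      - name: token_mint     # single-endpoint test type suited to remote nodes
        workers: 5
```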
```
Executes an instance within a performance test suite to generate synthetic load across multiple FireFly nodes within a network

Usage:
  ffperf run [flags]

Flags:
  -c, --config string          Path to performance config that describes the network and test instances
  -d, --daemon                 Run in long-lived, daemon mode. Any provided test length is ignored.
      --delinquent string      Action to take when delinquent messages are detected. Valid options: [exit log] (default "exit")
  -h, --help                   help for run
  -i, --instance-idx int       Index of the instance within performance config to run against the network (default -1)
  -n, --instance-name string   Instance within performance config to run against the network
```
The `ffperf` tool registers the following metrics for Prometheus to consume:

- `ffperf_runner_received_events_total`
- `ffperf_runner_incomplete_events_total`
- `ffperf_runner_sent_mints_total`
- `ffperf_runner_sent_mint_errors_total`
- `ffperf_running_mint_token_balance` (gauge)
- `ffperf_runner_deliquent_msgs_total`
- `ffperf_runner_perf_test_duration_seconds`
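To scrape these metrics, a Prometheus configuration along these lines could be used. This is a generic scrape-config sketch, not taken from this project: the `ffperf` metrics port shown is an assumption, so substitute whatever address your deployment exposes:

```yaml
scrape_configs:
  - job_name: ffperf
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]   # ffperf metrics address: assumption, check your setup
```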
The ffperf tool is designed to let you run various styles of test. The default behaviour for a local stack will exercise a local FireFly stack for 500 hours or until an error occurs. The prep.sh script will help you create and run this comprehensive test to validate a local installation of FireFly.
There are various options for creating your own customized tests. A full list of configuration options can be seen at conf.go but some useful options are outlined below:
- Setting a maximum number of test actions
  - See the `maxActions` attribute (defaults to `0`, i.e. unlimited).
  - Once `maxActions` test actions (e.g. token mints) have taken place the test will shut down.
- Ending the test when an error occurs
  - See the `delinquentAction` attribute (defaults to `exit`).
  - A value of `exit` causes the test to end if an error occurs. Set it to `log` to simply log the error and continue the test.
- Setting the maximum duration of the test
  - See the `length` attribute.
  - Setting a test instance's `length` attribute to a time duration (e.g. `3h`) will cause the test to run for that long or until an error occurs (see `delinquentAction`).
  - Note this setting is ignored if the test is run in daemon mode (running the `ffperf` command with `-d` or `--daemon`, or setting the global `daemon` value to `true` in the `instances.yaml` file). In daemon mode the test will run until `maxActions` has been reached, or until an error has occurred and `delinquentAction` is set to `exit`.
- Ramping up the rate of test actions (e.g. token mints)
  - See the `startRate`, `endRate` and `rateRampUpTime` attributes of a test instance.
  - All values default to `0`, which has the effect of not limiting the rate of the test.
  - The test will allow at most `startRate` actions to happen per second. Over the period of `rateRampUpTime` seconds the allowed rate will increase linearly until `endRate` actions per second is reached. At this point the test will continue at `endRate` actions per second until the test finishes.
  - If `startRate` is the only value that is set, the test will run at that rate for the entire test.
- Waiting for mint transactions to be confirmed before doing the next one
  - See `skipMintConfirmations` (defaults to `false`).
  - When set to `false`, each worker routine will perform its action (e.g. minting a token) and wait for confirmation of that event before doing its next action.
- Setting the features of a token being tested
  - See the `supportsData` and `supportsURI` attributes of a test instance.
  - `supportsData` defaults to `true` since the sample token contract used by FireFly supports minting tokens with data. When set to `true` the message included in the mint transaction will include the ID of the worker routine, which is used to correlate received confirmation events.
  - `supportsURI` defaults to `true` for non-fungible tokens. This attribute is ignored for fungible token tests. If set to `true` the ID of a worker routine will be set in the URI and used to correlate received confirmation events.
  - If neither attribute is set to `true`, any received confirmation events cannot be correlated with mint transactions. In this case the test behaves as if `skipMintConfirmations` is set to `true`.
- Waiting at the end of the test for the minted token balance of the `mintRecipient` address to equal the expected value
  - Since a test might be run several times with the same address, the test gets the balance at the beginning of the test and then again at the end. The difference is expected to equal the value of `maxActions`.
  - To enable this check, set the `maxTokenBalanceWait` token option to the length of time to wait for the balance to be reached. If `maxTokenBalanceWait` is not set the test will not check balances.
- Having a worker loop submit more than one action per loop by setting `actionsPerLoop` for the test
  - This can be helpful when you want to scale the number of actions done in parallel without having to scale the number of workers. The default value is `1` for this attribute. If setting it to a value greater than `1`, it is recommended to set `skipMintConfirmations` to `false`.
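The attribute names above can be collected into a single sketch of a customized test instance. The names themselves come from this document, but the exact nesting (top level vs. per-instance vs. `tokenOptions`) is a best guess; `conf.go` is the authoritative reference:

```yaml
daemon: false                   # global; true makes the test ignore "length"
instances:
  - name: customized-mint-test
    maxActions: 100000          # stop after 100k actions (0 = unlimited)
    delinquentAction: exit      # or "log" to continue past errors
    length: 3h                  # ignored in daemon mode
    startRate: 50               # actions/sec at test start (0 = unlimited)
    endRate: 200                # actions/sec after ramp-up
    rateRampUpTime: 600         # seconds to ramp linearly from startRate to endRate
    skipMintConfirmations: false
    actionsPerLoop: 1
    tokenOptions:
      supportsData: true
      supportsURI: true
      maxTokenBalanceWait: 5m   # enables the end-of-test balance check
```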
See the ffperf Helm chart for running multiple instances of ffperf using Kubernetes.