Conversation
        attribute.Int64("chain_id", w.chainID),
        statusAttrFromError(_err)),
    )
}(time.Now())
This might have an issue: we will see large variance based on whether slow=true or false. When slow=true, we wait for the tx to be committed in a block, whereas slow=false only waits for the tx to enter the mempool. We currently use slow=true for easier nonce management, but I'm curious about @stevenlanders's thoughts on how we might want to account for this factor in this metric, since in one case it measures the round-trip time from tx send until full execution, while with slow=false it is just the time until mempool inclusion.
good point, we could simply tag the metrics by slow to differentiate?
Expose the worker load metrics via OTEL and add a Prometheus exporter to the CLI. The metrics exposed provide:

* Send latency histogram
* Receipt latency histogram
* Worker queue length

Note that the latency histograms can be used to measure the rate of requests per second, among other throughput analyses. The metrics are tagged by chain ID, endpoint, and worker ID to facilitate drilldown in results.
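On deriving throughput from a latency histogram: a Prometheus histogram carries a cumulative sample count alongside its buckets, so request rate falls out as the count delta over the scrape interval. A small sketch with illustrative numbers (not taken from this PR):

```go
package main

import "fmt"

// requestRate derives requests per second from two cumulative histogram
// counts taken intervalSec seconds apart: rate = Δcount / Δseconds.
func requestRate(countPrev, countNow uint64, intervalSec float64) float64 {
	return float64(countNow-countPrev) / intervalSec
}

func main() {
	// e.g. the send-latency histogram count went from 1200 to 1800 over 30s
	fmt.Println(requestRate(1200, 1800, 30)) // prints "20"
}
```

In PromQL the equivalent is `rate(<histogram>_count[1m])`, which is why no separate request counter is needed here.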
f1efcaf to df1b795
stevenlanders left a comment
lgtm - i think it's okay not to worry about slow vs. not within the sender. This thing should be pretty dumb and just report what it sees.