[DOC] Update math in PQ size example #16411

Open · wants to merge 3 commits into base: main
19 changes: 11 additions & 8 deletions docs/static/persistent-queues.asciidoc
@@ -102,6 +102,14 @@ Required Queue Capacity = (Bytes Received Per Hour * Tolerated Hours of Downtime
------
<1> To start, you can set the `Multiplication Factor` to `1.10`, and then refine it for specific data types as indicated in the tables below.

*Example*

Let's consider a {ls} instance that receives 1000 EPS and each event is 1KB (1024 bytes),
which is 1000 KB per second or 3.5GB (1*1000*3600 KB) every hour.
In order to tolerate a downstream component being unavailable
for 12h without {ls} exerting back-pressure upstream, the persistent queue's
`max_bytes` would have to be set to 3.5*12*1.10 = 46.2GB, or about 50GB.
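The arithmetic in the example above can be sketched as a short calculation. This is an illustrative sketch, not part of the docs change; the inputs (1000 EPS, 1 KB events, 12 h downtime, 1.10 factor) all come from the example, and the helper name is hypothetical:

```python
def required_queue_bytes(eps, event_bytes, downtime_hours, factor):
    """Required Queue Capacity = Bytes Received Per Hour
    * Tolerated Hours of Downtime * Multiplication Factor."""
    bytes_per_hour = eps * event_bytes * 3600
    return bytes_per_hour * downtime_hours * factor

# Example inputs: 1000 EPS, 1 KB (1024-byte) events, 12 h, factor 1.10
total = required_queue_bytes(1000, 1024, 12, 1.10)
print(f"{total / 1024**3:.1f} GiB")  # prints "45.3 GiB"
```

Using binary units throughout gives about 45.3 GiB; the docs round the hourly rate to 3.5 GB first, landing on 46.2 GB. Either way, rounding `max_bytes` up to about 50 GB leaves headroom.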

[[sizing-by-type]]
====== Queue size by data type

@@ -115,7 +123,7 @@ These tables show examples of overhead by event type and how that affects the mu
[cols="<h,<,<m,<m,<m",options="header",]
|=======================================================================
| Plaintext size (bytes) | Serialized {ls} event size (bytes) | Overhead (bytes) | Overhead (%) | Multiplication Factor
| 11 | 213 | 202 | 1836% | 19.4
| 11 | 213 | 202 | 1836% | 19.36
| 1212 | 1416 | 204 | 17% | 1.17
| 10240 | 10452 | 212 | 2% | 1.02
|=======================================================================
@@ -126,16 +134,11 @@ These tables show examples of overhead by event type and how that affects the mu
| JSON document size (bytes) | Serialized {ls} event size (bytes) | Overhead (bytes) | Overhead (%) | Multiplication Factor
| 947 | 1133 | 186 | 20% | 1.20
| 2707 | 3206 | 499 | 18% | 1.18
| 6751 | 7388 | 637 | 9% | 1.9
| 58901 | 59693 | 792 | 1% | 1.1
| 6751 | 7388 | 637 | 9% | 1.09
| 58901 | 59693 | 792 | 1% | 1.01
|=======================================================================
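The corrected factors in these tables can be sanity-checked with a quick sketch, assuming (as the rows suggest) that the multiplication factor is the serialized event size divided by the original payload size, and the overhead percentage is the overhead divided by the payload size. The sizes below are taken from the tables; nothing else is from the docs:

```python
# (plaintext_or_json_bytes, serialized_event_bytes) pairs from the tables
rows = [(11, 213), (1212, 1416), (10240, 10452), (6751, 7388), (58901, 59693)]

for plain, serialized in rows:
    overhead = serialized - plain
    overhead_pct = f"{overhead / plain:.0%}"      # e.g. "1836%" for the 11-byte row
    factor = round(serialized / plain, 2)         # e.g. 1.17 for the 1212-byte row
    print(plain, serialized, overhead, overhead_pct, factor)
```

Run against the table data, this reproduces the percentages and the two-decimal factors in the corrected rows (1.17, 1.02, 1.09, 1.01), which is why the small-payload factor works out to 19.36 rather than the overhead ratio 18.36.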

*Example*

Let's consider a {ls} instance that receives 1000 EPS and each event is 1KB,
or 3.5GB every hour. In order to tolerate a downstream component being unavailable
for 12h without {ls} exerting back-pressure upstream, the persistent queue's
`max_bytes` would have to be set to 3.6*12*1.10 = 47.25GB, or about 50GB.

[[pq-lower-max_bytes]]
===== Smaller queue size