[Suggestion] Tuning stream read-ahead #14877

Unanswered

the-mikedavis asked this question in Ideas
            Replies: 1 comment
Yeah, we already recommend Linux readahead tuning in the Quorum Queues guide, and the whole point there is that there may be times to increase this value, such as in environments that use SSDs.
RabbitMQ series: 4.2.x
Operating system (distribution) used: Linux
How is RabbitMQ deployed? Other

What would you like to suggest for a future version of RabbitMQ?
The read-ahead improvement mentioned in the recent Delivery Optimization for RabbitMQ Streams improves consumer throughput when stream data ends up in smaller chunks (i.e., it was published at a lower throughput). The constant for the read-ahead buffer size is currently fixed at 4096 bytes. I wonder if we should make this configurable, have it tune dynamically based on the size of the chunks the reader encounters, or increase the default?
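One way the dynamic option could work is to grow the read-ahead toward a small multiple of the chunk sizes the reader actually observes, clamped between a floor and a cap. A minimal Python sketch (only the 4096-byte floor comes from the discussion; the cap, the multiplier, and the `next_readahead` helper are hypothetical, not RabbitMQ code):

```python
# Hypothetical sketch of dynamic read-ahead sizing. Only the 4096-byte
# floor is the real current constant; the cap and multiplier are assumed.
MIN_READAHEAD = 4096     # current fixed constant in the stream reader
MAX_READAHEAD = 65536    # assumed upper bound for this sketch

def next_readahead(observed_chunk_size: int) -> int:
    # Aim to cover several chunks per read, clamped to sane bounds.
    target = observed_chunk_size * 4
    return max(MIN_READAHEAD, min(MAX_READAHEAD, target))

print(next_readahead(500))     # tiny chunks: stays at the 4096 floor
print(next_readahead(10000))   # mid-size chunks: 40000
print(next_readahead(100000))  # huge chunks: clamped to 65536
```

With something like this, low-throughput streams full of small chunks would keep today's behavior, while readers that see larger chunks would automatically batch more data per read.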
Consider a simple scenario like this:

```shell
make run-broker TEST_TMPDIR=/path/to/a/real/disk
stream-perf-test --producers 1 --consumers 1 --size 1000 --rate 300000
```

The producer and consumer keep up with one another and overall latency is consistent and low.
Instead, if we use a lower fixed throughput and also enable TLS for the streaming endpoint, with a `rabbitmq.conf` in the root of the server repo:

```shell
make run-broker TEST_TMPDIR=/path/to/a/real/disk RABBITMQ_CONFIG_FILE=rabbitmq.conf
```

This scenario enters a sweet spot (at least on my machine - you may need to tune `--rate` higher or lower to reproduce) where the consumption rate is consistently slower than the publish rate, so latency gradually rises. Eventually the consumer can fall behind by so much that its position is collected by retention. This doesn't seem to happen with unencrypted connections, so I assume that what's happening here is that we're drowning in syscalls from reading and then writing these smaller chunks to the socket, plus the encryption overhead. Tuning the read-ahead constant up to 32768 bytes lets the consumer keep up with the producer. So I wonder if this read-ahead limit constant should be configurable so that it can be set higher when serving TLS connections? Or can we tune the current value to something reasonable that avoids this sweet spot?
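To see why a larger read-ahead buffer cuts the read count so dramatically when chunks are small, here is a self-contained Python analogy (not RabbitMQ's Erlang code; `io.BufferedReader` stands in for the stream reader, and each raw read stands in for one read syscall):

```python
import io
import os
import tempfile

# Analogy only: count how many raw reads it takes to consume a 1 MB file
# in 1000-byte "chunks" under the two read-ahead sizes from above.
class CountingRaw(io.RawIOBase):
    def __init__(self, raw):
        self.raw = raw
        self.reads = 0

    def readinto(self, b):
        self.reads += 1              # stands in for one read syscall
        return self.raw.readinto(b)

    def readable(self):
        return True

    def close(self):
        self.raw.close()
        super().close()

def consume(path, readahead):
    raw = CountingRaw(open(path, "rb", buffering=0))
    buf = io.BufferedReader(raw, buffer_size=readahead)
    while buf.read(1000):            # consumer pulls 1000-byte chunks
        pass
    buf.close()
    return raw.reads

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(1_000_000))
os.close(fd)

small = consume(path, 4096)          # the current fixed constant
large = consume(path, 32768)         # the value that fixed the TLS scenario
os.remove(path)
print(small, large)                  # the larger buffer issues ~8x fewer reads
```

The 8x reduction in reads is exactly the ratio of the two buffer sizes; when every chunk written to a TLS socket also carries per-write encryption overhead, that multiplier compounds.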