hotfix(mashape-analytics) improve ALF buffer under load #757
Merged
Commits (5):
- 87e15dd hotfix(mashape-analytics) improve ALF buffer under load (thibaultcha)
- bd3035e specs(mashape-analytics) unit test for sending_queue_size (thibaultcha)
- 83e9034 refactor(mashape-analytics) some polishing and renamings (thibaultcha)
- 5c26056 chore(mashape-analytics) bump agent to 1.1.0 (thibaultcha)
- 3cafb45 perf(alf_serializer) cache global functions and version bump (thibaultcha)
```diff
@@ -1,12 +1,13 @@
 return {
   fields = {
-    service_token = { type = "string", required = true },
-    environment = { type = "string" },
-    batch_size = { type = "number", default = 100 },
-    log_body = { type = "boolean", default = false },
-    delay = { type = "number", default = 2 },
-    host = { required = true, type = "string", default = "socket.analytics.mashape.com" },
-    port = { required = true, type = "number", default = 80 },
-    path = { required = true, type = "string", default = "/1.0.0/batch" }
+    service_token = {type = "string", required = true},
+    environment = {type = "string"},
+    batch_size = {type = "number", default = 100},
+    log_body = {type = "boolean", default = false},
+    delay = {type = "number", default = 2},
+    sending_queue_size = {type = "number", default = 10}, -- in mb
+    host = {required = true, type = "string", default = "socket.analytics.mashape.com"},
+    port = {required = true, type = "number", default = 80},
+    path = {required = true, type = "string", default = "/1.0.0/batch"}
   }
 }
```
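The new `sending_queue_size` field caps the plugin's in-memory sending queue in megabytes, with defaults applied when the user omits a field. A minimal sketch of how a schema like this one could apply defaults and type checks, written in Python for illustration (the field names mirror the diff; the `validate` helper and everything else is hypothetical, not Kong's actual schema machinery):

```python
# Hypothetical sketch of default-filling and type-checking for a config
# schema shaped like the one in the diff above. Field names come from the
# diff; the validation logic itself is illustrative only.
SCHEMA = {
    "service_token": {"type": str, "required": True},
    "environment": {"type": str},
    "batch_size": {"type": int, "default": 100},
    "log_body": {"type": bool, "default": False},
    "delay": {"type": int, "default": 2},
    "sending_queue_size": {"type": int, "default": 10},  # in MB
    "host": {"required": True, "type": str, "default": "socket.analytics.mashape.com"},
    "port": {"required": True, "type": int, "default": 80},
    "path": {"required": True, "type": str, "default": "/1.0.0/batch"},
}

def validate(conf):
    out = dict(conf)
    for name, rules in SCHEMA.items():
        # fill in the default when the field is absent
        if name not in out and "default" in rules:
            out[name] = rules["default"]
        # required fields with no default must be supplied by the user
        if rules.get("required") and name not in out:
            raise ValueError("missing required field: " + name)
        # enforce the declared type
        if name in out and not isinstance(out[name], rules["type"]):
            raise TypeError(name + " must be " + rules["type"].__name__)
    return out
```

For example, `validate({"service_token": "abc123"})` yields a config with `sending_queue_size` set to its default of 10 (MB), while `validate({})` raises for the missing `service_token`.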
**cb372 commented:**

Is this change from 1MB to 500MB correct? It seems awfully large. This is the maximum payload size that the socket server will accept in a single request, right? Given that `max_queue_size` defaults to 10MB, and the queue holds multiple batches, I would expect this value to be < 10MB.
**thibaultcha replied:**

This value is correct. This constant will be renamed; its actual purpose is to mirror a constant present in the Galileo collector: the maximum payload size the Galileo collector accepts in a single HTTP request. It used to be 1MB and is now 500MB. I believe this value should mirror the Galileo restrictions for the case where someone really knows what they are doing and wishes to send extremely large batches of ALFs (possibly containing very large request bodies). This value is hard-coded and cannot be bypassed. Any ALF beyond that limit will be dropped, as it is too big by itself, and if a batch reaches that limit it will be put in the sending queue directly, whether it has reached the configured `batch_size` or not. `batch_size` (number of ALFs) and `sending_queue_size` (MB) are the values one should tweak to enforce a low memory footprint. I could see a third setting being useful: like `batch_size`, but in terms of bytes, for the case where one's ALFs are much bigger than intended. But it would make the plugin more complicated to understand and configure (it already is with `sending_queue_size`), so I am not sure about that yet.
**thibaultcha replied:**

Hi @cb372, yes, it's very large, but it isn't meant as a way to reject very large batches; it's mostly a failsafe in case a client gets stuck and forgets to close the TCP socket, or something like that. The collector handles even extremely large batches just fine. If a payload reaches a size that big without being a bad socket, it's often due to huge bodies, and we don't want to drop those.