The fact that this increase is causing issues for our testing environment might be a sign that it should be announced as a breaking change in the release notes, at least for anybody relying on the current default resource limits.
In our testing environment we deploy 21 Elasticsearch pods (7 scenarios × 3 nodes) per cluster, and we can assume the pods never require more than their resource requests during the tests.
This change causes the resources needed for Elasticsearch pods to grow from 2.1 CPU to 21 CPU in our testing environment.
IMHO standard usage is closer to 3 long-running Elasticsearch pods per cluster consuming their CPU and memory limits, so I'm not sure the impact should be considered a breaking change. WDYT?
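For context, the arithmetic above implies the default CPU request moved from 100m to 1 CPU per pod (21 pods × 100m = 2.1 CPU before, 21 pods × 1 CPU = 21 CPU after). A minimal sketch of what the chart's default values might look like after the change; the key layout follows the standard Kubernetes resources schema, and the concrete numbers are inferred from this discussion rather than taken from the diff:

```yaml
# Hypothetical excerpt of the chart's default values after this change.
# The 1000m figure is inferred from the 2.1 CPU -> 21 CPU observation
# above (21 pods x 1 CPU = 21 CPU), not copied from the actual PR.
resources:
  requests:
    cpu: "1000m"  # previously assumed to be "100m" (21 pods x 100m = 2.1 CPU)
```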
The impact was big enough to cause issues in our testing environment because of resource capacity, and a 10x increase in the CPU requested by default could cause issues for others too. For clusters with autoscaling enabled, a version bump might suddenly leave them running 10 times as many Kubernetes nodes.
I don't think the change is big enough to avoid doing it, but I do think it would be nice to give all users a heads-up, including documentation for how to keep the existing configuration (see the sketch below).
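For anyone who wants to keep the previous behaviour after upgrading, overriding the default in a custom values file should be enough. A minimal sketch, assuming the pre-change request of 100m inferred above; the file name and values are illustrative:

```yaml
# old-resources.yaml: pin the CPU request back to the inferred pre-change
# default so the upgrade does not multiply the cluster's CPU footprint.
resources:
  requests:
    cpu: "100m"
```

Applied with something like `helm upgrade elasticsearch elastic/elasticsearch -f old-resources.yaml`.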
That sounds good to me. I'll merge the PR and add a note in the next release's changelog about this change and how to keep the existing config.
@Crazybus notice added in 293f0ec
Perfect, thank you!