This repository has been archived by the owner on May 16, 2023. It is now read-only.

[elasticsearch] set cpu request = cpu limit #458

Merged 2 commits on Feb 5, 2020
[meta] use custom vm with more cpu
After increasing the Elasticsearch CPU request in c566822, n1-standard-8 machines no longer provide enough CPU for the tests.
jmlrt committed Feb 3, 2020
commit 8ff6eaeff5f16f80dcb442e8cd1b1065991a68bc
2 changes: 1 addition & 1 deletion helpers/terraform/variables.tf
@@ -39,7 +39,7 @@ variable "additional_zones" {

variable "machine_type" {
description = "Machine type for the kubernetes nodes"
-  default     = "n1-standard-8"
+  default     = "custom-10-30720"
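For context, GCE custom machine types encode their shape directly in the name: `custom-<vCPUs>-<memory in MB>`. A small sketch decoding the two machine types in this diff (the n1-standard-8 size is hardcoded here for illustration, not fetched from the GCE API):

```python
def machine_shape(machine_type: str) -> tuple[int, int]:
    """Return (vCPUs, memory_MB) for a GCE machine type name."""
    # Predefined sizes hardcoded for this example only.
    predefined = {"n1-standard-8": (8, 30720)}  # 8 vCPUs, 30 GB
    if machine_type.startswith("custom-"):
        # Custom types are named "custom-<vCPUs>-<memory_MB>".
        _, cpus, mem_mb = machine_type.split("-")
        return int(cpus), int(mem_mb)
    return predefined[machine_type]

print(machine_shape("n1-standard-8"))    # (8, 30720)
print(machine_shape("custom-10-30720"))  # (10, 30720)
```

So the switch keeps the same 30 GB of memory per node but adds two vCPUs.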
Contributor
The fact that this increase is causing issues for our testing environment might be a sign that this should be announced as a breaking change in the release notes. At least it's breaking for anybody using the current default resource limits.

Member Author
In our testing environment we deploy 21 Elasticsearch pods (7 scenarios x 3 nodes) per cluster, and we can assume the pods never require more resources than their CPU requests during the tests.

This change causes the CPU requested by Elasticsearch pods in our testing environment to grow from 2.1 CPU to 21 CPU.

IMHO standard usage is closer to 3 long-running Elasticsearch pods per cluster, consuming up to their CPU and memory limits, so I'm not sure the impact should be considered a breaking change. WDYT?
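The arithmetic above can be sketched as follows, assuming the chart's previous default CPU request was 100m and the new request equals the 1000m CPU limit (values inferred from the figures quoted, not stated in this thread):

```python
pods = 7 * 3  # 7 test scenarios x 3 Elasticsearch nodes = 21 pods per cluster

old_request_m = 100   # previous default CPU request, in millicores (assumed)
new_request_m = 1000  # new request = CPU limit, in millicores (assumed)

old_total = pods * old_request_m / 1000  # total CPUs requested before
new_total = pods * new_request_m / 1000  # total CPUs requested after

print(f"before: {old_total} CPU, after: {new_total} CPU")
# before: 2.1 CPU, after: 21.0 CPU
```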

Contributor

The impact was big enough to cause us issues in our testing environment because of resource capacity. A 10x increase in the number of CPUs requested by default could cause issues for others too. Clusters with autoscaling enabled might suddenly end up running 10 times the number of Kubernetes nodes after a version bump.

I don't think the change is big enough to avoid doing it, but I do think it would be nice to give all users a heads up including documentation for how to keep the existing configuration.
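The autoscaling concern can be made concrete with a rough capacity sketch, using the 21-pod test environment and 8-vCPU n1-standard-8 nodes (system-pod overhead and memory pressure are ignored here for simplicity):

```python
import math

node_cpus = 8  # vCPUs per n1-standard-8 node (allocatable overhead ignored)
pods = 21      # pod count from the test environment above

for request_cpu in (0.1, 1.0):  # assumed old and new per-pod CPU requests
    total = pods * request_cpu
    nodes = math.ceil(total / node_cpus)  # minimum nodes an autoscaler needs
    print(f"request {request_cpu} CPU/pod -> {nodes} node(s)")
```

Even in this simplified model the same workload jumps from fitting on 1 node to needing 3.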

Member Author

That sounds good to me. I'll merge the PR and add a note to the next release's changelog about this change and how to keep the existing config.

Member Author

@Crazybus notice added in 293f0ec

Contributor

Perfect, thank you!

}

variable "network" {