[Helm] Chart Component Configuration Isolation#2472
Open
hemanthsavasere wants to merge 2 commits into apache:main from
Conversation
Hi @swuferhong, can you please review the pull request? Thanks!
Purpose
Linked issue: close #2191
This PR introduces component-specific configuration sections for the Fluss Helm chart, enabling users to independently configure replicas and health probes for coordinator and tablet servers.
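The new top-level sections could look roughly like this in `values.yaml` (a sketch: only the `replicas`, `livenessProbe`, and `readinessProbe` keys are stated in this PR; the individual probe fields and their values below are illustrative assumptions):

```yaml
# Illustrative values.yaml excerpt; probe sub-fields and
# numbers are assumptions, not the exact chart contents.
coordinatorServer:
  replicas: 1
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5

tabletServer:
  replicas: 3
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    initialDelaySeconds: 10
    periodSeconds: 5
```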
Previously, the coordinator server template incorrectly referenced `.Values.resources.tabletServer` instead of `.Values.resources.coordinatorServer`.

Changes
Added two new top-level configuration sections: `coordinatorServer` and `tabletServer`. Also updated the `resources` section to use explicit empty defaults instead of being commented out.

Coordinator server:
- Replaced `replicas: 1` with `{{ .Values.coordinatorServer.replicas }}`
- Added `.Values.coordinatorServer.livenessProbe.*` references
- Added `.Values.coordinatorServer.readinessProbe.*` references
- Fixed `.Values.resources.tabletServer` to `.Values.resources.coordinatorServer`

Tablet server:
- Replaced `replicas: 3` with `{{ .Values.tabletServer.replicas }}`
- Added `.Values.tabletServer.livenessProbe.*` references
- Added `.Values.tabletServer.readinessProbe.*` references

Tests
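In the deployment templates, the hard-coded values are replaced with template references along these lines (a sketch of the standard Helm templating pattern, not the exact diff; the container name and field layout are assumptions):

```yaml
# Illustrative coordinator-server workload excerpt.
spec:
  replicas: {{ .Values.coordinatorServer.replicas }}
  template:
    spec:
      containers:
        - name: coordinator-server
          resources:
            # Bug fix: previously this pulled from .Values.resources.tabletServer
            {{- toYaml .Values.resources.coordinatorServer | nindent 12 }}
          livenessProbe:
            initialDelaySeconds: {{ .Values.coordinatorServer.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.coordinatorServer.livenessProbe.periodSeconds }}
```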
The following tests were successful:
- Deploy with different replica counts per component - verified by deploying each component with a different replica count.
- Configure different probe settings per component - verified that each component has an independent probe configuration.
- Resource bug fix - verified that the coordinator now correctly uses `resources.coordinatorServer`.
- Pod health and startup - all components started successfully, registered with ZooKeeper, and the coordinator detected both tablet servers.
- Replica scaling - verified that the tablet servers scale independently.
- Independent resource modification - verified that each component can be updated with different resources independently.
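For the scaling and resource tests, an override file of roughly this shape could be passed via `helm upgrade -f overrides.yaml` (the resource quantities here are illustrative, not values used in the actual tests):

```yaml
# overrides.yaml - illustrative per-component overrides
tabletServer:
  replicas: 5
resources:
  coordinatorServer:
    requests:
      cpu: 500m
      memory: 1Gi
  tabletServer:
    requests:
      cpu: "1"
      memory: 2Gi
```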
Documentation