Cloud 87 #211
base: main
Conversation
"benchmark": { | ||
Description: "The configurations required to benchmark RonDB.", | ||
Type: schema.TypeList, | ||
Optional: true, | ||
Computed: true, | ||
ForceNew: true, | ||
ForceNew: true, // should not be necessary; works fine if benchmarking==false --> benchmarking==true |
Then it can be removed. Also, you would need to remove ForceNew on almost all RonDB attributes to allow reconfiguration.
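For illustration, a minimal sketch of what that means, assuming the terraform-plugin-sdk v2 schema package (the attribute name and description are made up, not this provider's real schema): dropping ForceNew is what lets Terraform plan an in-place Update (a reconfiguration) instead of destroying and recreating the cluster.

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// benchmarkLikeSchema is a hedged sketch, not the provider's real schema:
// omitting ForceNew is what allows a change to flow into resourceClusterUpdate
// (reconfiguration) rather than forcing a destroy/recreate of the cluster.
func benchmarkLikeSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"instance_type": {
			Description: "Illustrative RonDB attribute.",
			Type:        schema.TypeString,
			Optional:    true,
			// ForceNew: true, // removed: changes are now handled as an update
		},
	}
}
```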
/*
	Consider running this together with RonDB changes, since it will affect the RonDB version as well.
	Otherwise there is a risk that we might exchange all VMs for a new RonDB version (which means
	backing up RonDB and restoring it, which are both lengthy processes) and then exchange all VMs
	again because we have new desired VM types. We could instead have just requested the new VM types
	with the new cluster / RonDB version.

	This would also have to be changed in the UI though, so it is not an easy task.
*/
I wouldn't put it this way, tbh. Cluster upgrade is an involved process and we shouldn't add yet another invariant to the procedure. Also, changing the RonDB version is going to be an experimental feature for the time being, and that shouldn't happen on the fly.
@@ -1385,10 +1410,13 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int
	}
}

// TODO: How will the user know that perhaps not all of their changes have been applied?
We never return before the upgrade/rollback procedures are complete; the resource-waiting procedures (resourceClusterWaitForRunningAfterUpgrade or resourceClusterWaitForStopping) take care of that.
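For context, that kind of waiting step typically looks something like the sketch below. This is a hedged illustration only: the real resourceClusterWaitFor* helpers may differ, and the state names and the getState callback are assumptions, not the provider's actual API.

```go
package provider

import (
	"context"
	"time"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// waitForClusterState is a hedged sketch of the polling the resourceClusterWaitFor*
// helpers perform; getState stands in for a real API call that fetches the cluster
// and returns its current status string.
func waitForClusterState(ctx context.Context,
	getState func(ctx context.Context) (string, error),
	pending, target []string, timeout time.Duration) error {
	stateConf := &resource.StateChangeConf{
		Pending: pending, // e.g. "updating", "reconfiguring" (assumed state names)
		Target:  target,  // e.g. "running"
		Timeout: timeout,
		Refresh: func() (interface{}, string, error) {
			state, err := getState(ctx)
			if err != nil {
				return nil, "", err
			}
			return state, state, nil
		},
	}
	_, err := stateConf.WaitForStateContext(ctx)
	return err
}
```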
	return resourceClusterRead(ctx, d, meta)
}

if d.HasChange("head.0.instance_type") {
	// TODO: What about changes in disk_size & ha_enabled?
These attributes are immutable; they shouldn't change after cluster creation, at least for the time being.
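One way to make that explicit is sketched below (hedged: the attribute paths are taken from the TODO in the diff, and the error wording and helper name are made up). In practice, keeping ForceNew on those attributes achieves the same effect by forcing a replacement plan instead of an update.

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// checkImmutableAttributes is a hedged sketch of rejecting updates to attributes
// that are meant to stay fixed after cluster creation.
func checkImmutableAttributes(d *schema.ResourceData) diag.Diagnostics {
	for _, attr := range []string{"head.0.disk_size", "ha_enabled"} {
		if d.HasChange(attr) {
			return diag.Errorf("%s cannot be changed after cluster creation", attr)
		}
	}
	return nil
}
```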
@@ -1397,38 +1425,19 @@ func resourceClusterUpdate(ctx context.Context, d *schema.ResourceData, meta int
	}
}

if d.HasChange("rondb.0.data_nodes.0.instance_type") {
We should still allow offline upscaling, at least for the time being, since reconfiguration is still experimental. If it doesn't conflict with reconfiguration, we should keep it rather than remove it, since we might still have this use case.
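A hedged sketch of what keeping both paths could look like in resourceClusterUpdate is shown below; the two callbacks and the attribute paths are placeholders, not the provider's real functions or schema.

```go
package provider

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// applyRonDBDataNodeChanges is a hedged sketch of keeping both code paths:
// the existing offline-upscaling path for a plain instance_type change, and
// the experimental reconfiguration path for other RonDB changes.
func applyRonDBDataNodeChanges(ctx context.Context, d *schema.ResourceData,
	offlineUpscale, reconfigure func(context.Context) error) error {
	if d.HasChange("rondb.0.data_nodes.0.instance_type") {
		// Well-tested path: exchange the data node VMs offline.
		return offlineUpscale(ctx)
	}
	if d.HasChange("rondb") {
		// Experimental path: online reconfiguration of RonDB.
		return reconfigure(ctx)
	}
	return nil
}
```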
/*
	using d.GetChange() can be helpful to access single fields;
	might be easier to just let the backend handle the details
	here though
*/
ronDBConfig, err := createAndValidateRonDB(d)
That would need more tests to make sure we don't miss anything here.
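For reference, the d.GetChange() approach mentioned in the diff comment looks roughly like this (a hedged sketch; the attribute path and value type are illustrative, not necessarily how this provider stores them):

```go
package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// dataNodeCountChange is a hedged sketch of reading the old and new values of a
// single field via d.GetChange(), instead of rebuilding and revalidating the
// whole RonDB block with createAndValidateRonDB.
func dataNodeCountChange(d *schema.ResourceData) (oldCount, newCount int) {
	oldRaw, newRaw := d.GetChange("rondb.0.data_nodes.0.count")
	return oldRaw.(int), newRaw.(int)
}
```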
err = api.ReconfigureRonDB(ctx, client, clusterId, *ronDBConfig)
if err != nil {
You would need to do the same as what we do for upgrade: wait until the reconfiguration process is complete (resourceClusterWaitForRunningAfterUpgrade).
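In other words, the suggested sequencing is roughly the following sketch (hedged: the two callbacks are placeholders standing in for api.ReconfigureRonDB and the real waiting helper):

```go
package provider

import (
	"context"
	"time"
)

// reconfigureAndWait is a hedged sketch of the suggested flow: trigger the
// reconfiguration, then block until the cluster is running again before
// returning, mirroring what the upgrade path does with its waiting helper.
func reconfigureAndWait(ctx context.Context,
	reconfigure func(context.Context) error,
	waitForRunning func(context.Context, time.Duration) error,
	timeout time.Duration) error {
	if err := reconfigure(ctx); err != nil {
		return err
	}
	// Only return once the reconfiguration has completed (or failed/timed out).
	return waitForRunning(ctx, timeout)
}
```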
Make sure there is no duplicate PR for this issue
Please check if the PR meets the following requirements
https://hopsworks.atlassian.net/browse/CLOUD-87