✨ Add EMLB deletion #771
Conversation
Signed-off-by: Chris Privitere <23177737+cprivitere@users.noreply.github.com>
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cprivitere. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Signed-off-by: Chris Privitere <23177737+cprivitere@users.noreply.github.com>
internal/emlb/emlb.go (outdated)

		return err
	}

	resp, err := e.deleteListenerPort(ctx, lbPort.GetId())
IIRC, the LB ports are automatically deleted when the LB is deleted. The only time we need to explicitly delete the port is if we're keeping the LB and need to change which ports are open.
Confirmed. Going to remove this.
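For context, a minimal sketch of what the simplified teardown could look like once the explicit listener-port deletion is dropped. The `deleteLoadBalancerOnly` name, the `deleteLoadBalancer` helper's signature, and the 404 handling are assumptions for illustration (relying on the file's existing `context` and `net/http` imports), not the exact code in this PR:

```go
// Sketch only: the EMLB receiver, the deleteLoadBalancer helper, and the 404
// handling are illustrative assumptions, not the PR's exact code.
func (e *EMLB) deleteLoadBalancerOnly(ctx context.Context, lbID string) error {
	// Listener ports are removed automatically when the load balancer itself
	// is deleted, so there is no separate deleteListenerPort call here.
	resp, err := e.deleteLoadBalancer(ctx, lbID)
	if err != nil {
		// Treat an already-deleted load balancer as success.
		if resp != nil && resp.StatusCode == http.StatusNotFound {
			return nil
		}
		return err
	}
	return nil
}
```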
internal/emlb/emlb.go (outdated)

	// TODO: Validate pool is empty

	log.Info("Deleting EMLB Pool", "Cluster Metro", e.metro, "Cluster Name", clusterName, "Project ID", e.projectID, "Pool ID", lbPoolID)

	resp, err = e.deletePool(ctx, lbPool.GetId())
I'm not familiar with where this fits into the overall cluster lifecycle. Is there any risk that cluster-api could empty and delete the pool because we're making some kind of cluster change (recreating control plane nodes or something)? Even if there is that risk, does it really matter, or would cluster-api just create a new pool when a new control plane node came up?
This is called on node deletion. We have a pool per node, so it's not a problem currently.
Ah, OK. If there's a pool per node then I think this is similar to the LB deletion, and deleting the pool will automatically delete the origins inside the pool.
Can confirm this works here too: deleting the pool also deletes the origins inside it. My original idea was to delete all the components explicitly so that if things started failing, as little as possible would get deleted, but given the one-to-one nature, I guess that doesn't make much sense.
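A minimal sketch of what the per-node pool cleanup could look like when it relies on the pool deletion cascading to its origins. The `deleteNodePool` name, the `deletePool` helper's signature, the `ctrl.LoggerFrom` logger, and the 404 handling are assumptions for illustration, not the PR's exact code:

```go
// Sketch only: the EMLB receiver, the deletePool helper, the logger setup,
// and the 404 handling are illustrative assumptions, not the PR's exact code.
func (e *EMLB) deleteNodePool(ctx context.Context, clusterName, lbPoolID string) error {
	log := ctrl.LoggerFrom(ctx) // assumes the controller-runtime logger is on the context

	// There is one pool per node, and deleting the pool also removes the
	// origins inside it, so no per-origin deletion loop is needed.
	log.Info("Deleting EMLB Pool", "Cluster Name", clusterName, "Project ID", e.projectID, "Pool ID", lbPoolID)

	resp, err := e.deletePool(ctx, lbPoolID)
	if err != nil {
		// Treat an already-deleted pool as success.
		if resp != nil && resp.StatusCode == http.StatusNotFound {
			return nil
		}
		return err
	}
	return nil
}
```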
Signed-off-by: Chris Privitere <23177737+cprivitere@users.noreply.github.com>
Co-authored-by: Marques Johansson <mjohansson@equinix.com>
/lgtm
Signed-off-by: Chris Privitere <23177737+cprivitere@users.noreply.github.com>
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #