mgd: cleanup routes from deleted bgp peer #360
Conversation
Force-pushed from 4c4a202 to 18355eb
I am running a4x2 with:
I ran a slightly modified version of the test flow from my comments in #349.

Test run 1:
Initial state on g3 (serves requests from the Oxide CLI) before bgp peer delete (local-pref already set to 50):
Initial state reported by CLI output before bgp peer delete:
Delete the bgp peer.
Resulting state on g3 after bgp peer delete (note the presence of a leftover route):
Resulting state reported by CLI output after bgp peer delete:
Now add the BGP peer back without local-pref.
Resulting state on g3 after bgp peer set (looks good, doesn't have the leftover route anymore):
Resulting state reported by CLI output after bgp peer set:
Continuation of the previous comment.

Test run 2:
Initial state on g3 (serves requests from the Oxide CLI) before bgp peer delete (local-pref already set to 50):
Initial state reported by CLI output before bgp peer delete:
Delete the bgp peer.
Resulting state on g3 after bgp peer delete (this time it looks OK, no leftover routes):
Resulting state reported by CLI output after bgp peer delete:
Now add the BGP peer back without local-pref.
Resulting state on g3 after bgp peer set (looks good):
Resulting state reported by CLI output after bgp peer set:
Thanks for running this through your testing @elaine-oxide! If you're satisfied with the results of Test run 2, then so am I. @rcgoodfellow, this is ready for review.
Continuation of my previous comment.

Test run 3:
Note: the purpose of this third run is a sanity check, to confirm that we can ignore the leftover routes I saw in Test run 1. We can safely ignore them: had I re-run the second command a second (or third) time in that run, I would have gotten the same results as in Test run 3.

Initial state on g3 (serves requests from the Oxide CLI) before bgp peer delete (local-pref already set to 50):
Initial state reported by CLI output before bgp peer delete:
Delete the bgp peer.
Resulting state on g3 after bgp peer delete (ran these commands very soon after the delete; the new information propagates after a second or two):
Resulting state reported by CLI output after bgp peer delete:
Now add the BGP peer back without local-pref.
Resulting state on g3 after bgp peer set (ran these commands very soon after the peer set; the new information propagates after a second or two):
Resulting state reported by CLI output after bgp peer set:
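The timing observation above (state converging "after a second or two") is the kind of thing an automated version of this test flow would need to account for. As a hedged illustration only, not part of this PR, a test might poll with a deadline rather than assert immediately after the peer delete; the helper and the `leftover_routes` stand-in below are hypothetical:

```rust
use std::time::{Duration, Instant};

/// Poll `check` until it returns true or `deadline` elapses.
/// Returns true if the condition was observed before the deadline.
fn wait_for<F>(mut check: F, deadline: Duration, interval: Duration) -> bool
where
    F: FnMut() -> bool,
{
    let start = Instant::now();
    loop {
        if check() {
            return true;
        }
        if start.elapsed() >= deadline {
            return false;
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    // Hypothetical stand-in for querying route state on g3; a real test
    // would ask the admin API or CLI for routes from the deleted peer.
    let leftover_routes = || 0usize;

    // After deleting the BGP peer, give the cleanup a few seconds to
    // propagate before declaring any remaining routes "leftover".
    let converged = wait_for(
        || leftover_routes() == 0,
        Duration::from_secs(10),
        Duration::from_millis(500),
    );
    assert!(converged, "routes from the deleted peer were not cleaned up");
}
```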
Fixes: #349
Signed-off-by: Trey Aspelund <trey@oxidecomputer.com>
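For context on the behavior the test runs exercise, here is a minimal sketch of the idea in the PR title: when a BGP peer is deleted, the routes learned from that peer should be withdrawn too. This is an illustration under assumed types (`Rib`, `PeerId`, `Prefix` are hypothetical), not maghemite's actual data structures or API:

```rust
use std::collections::{BTreeMap, BTreeSet};
use std::net::IpAddr;

// Hypothetical identifiers; maghemite's real RIB and peer types differ.
type PeerId = IpAddr;
type Prefix = String;

#[derive(Default)]
struct Rib {
    // prefix -> set of peers that announced it
    routes: BTreeMap<Prefix, BTreeSet<PeerId>>,
}

impl Rib {
    fn add_route(&mut self, prefix: Prefix, peer: PeerId) {
        self.routes.entry(prefix).or_default().insert(peer);
    }

    /// Called when a BGP peer is deleted: drop every path learned from
    /// that peer, and remove prefixes left with no remaining path.
    fn remove_peer(&mut self, peer: &PeerId) {
        for paths in self.routes.values_mut() {
            paths.remove(peer);
        }
        self.routes.retain(|_, paths| !paths.is_empty());
    }
}

fn main() {
    let peer: PeerId = "203.0.113.1".parse().unwrap();
    let mut rib = Rib::default();
    rib.add_route("198.51.100.0/24".into(), peer);
    rib.remove_peer(&peer);
    assert!(rib.routes.is_empty(), "no leftover routes after peer delete");
}
```

The leftover route observed in Test run 1 corresponds to the "remove" step not yet having taken effect when the state was queried.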
Force-pushed from 18355eb to 549492a
rcgoodfellow left a comment
LGTM
This is kind of gross. Omicron is currently out of sync with maghemite; see #6693 and the two maghemite PRs that require synchronization: oxidecomputer/maghemite#359 and oxidecomputer/maghemite#360. I'd like to make forward progress with the hyper v1 migration, and that's blocked on updating omicron's dependency on maghemite, which pulls in old hyper (reqwest, progenitor, etc.). This gets pulled into other repos via omicron-common.

It's worth noting that this circular arrangement seems lousy. The "vassal crates" (crucible, propolis, maghemite, and dendrite to a lesser degree) depend on omicron-common, but omicron-common pulls in maghemite's `mg-admin-client`, which in turn pulls in... lots... including progenitor, hyper, reqwest, etc.

My goal here is to take a temporary dependency on a branch of maghemite (oxidecomputer/maghemite#378); once #6693 is integrated, we can pin the dependency to maghemite's HEAD rev.
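The "temporary branch now, pinned rev later" arrangement described above is ordinary Cargo git-dependency plumbing. A hedged sketch of the two states follows; the branch name and rev are placeholders, not the actual values used:

```toml
[dependencies]
# Temporary: track a branch of maghemite while the coordinated changes land.
# (branch name is a placeholder)
mg-admin-client = { git = "https://github.com/oxidecomputer/maghemite", branch = "some-temporary-branch" }

# Later, once #6693 integrates: pin to a specific commit on maghemite's
# default branch instead. (rev is a placeholder)
# mg-admin-client = { git = "https://github.com/oxidecomputer/maghemite", rev = "0123456789abcdef0123456789abcdef01234567" }
```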