
nodeadm: block until daemon status changes are reflected #1965

Merged: 4 commits into awslabs:main on Sep 17, 2024

Conversation

ndbaker1 (Member) commented Sep 15, 2024:

Issue #, if available:

A more general attempt than #1942 at solving the related issue.

Description of changes:

Revamp the retrier helper and block in the daemon EnsureRunning implementations to guarantee the service is up before continuing with the rest of the bootstrap. This better matches the intent of an EnsureRunning API, which should only return once the process is actually running rather than merely started. A sketch of the blocking flow follows the bullet below.

  • Fixed a bug in the systemd implementation of GetDaemonStatus: the previous code switched on the string representation of the property's dbus variant rather than its underlying value, so the status never matched.
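
For illustration, a minimal sketch of that blocking behavior against the go-systemd dbus API. The function name, poll interval, and timeout below are assumptions made for this example, not the PR's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	sd "github.com/coreos/go-systemd/v22/dbus"
)

// ensureRunning starts a systemd unit and then blocks, polling its ActiveState
// property, until systemd reports "active" or the context expires. Illustrative
// only; nodeadm's real EnsureRunning/waitForStatus may differ.
func ensureRunning(ctx context.Context, conn *sd.Conn, unit string) error {
	if _, err := conn.StartUnitContext(ctx, unit, "replace", nil); err != nil {
		return err
	}
	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()
	for {
		prop, err := conn.GetUnitPropertyContext(ctx, unit, "ActiveState")
		if err != nil {
			return err
		}
		// Unwrap the dbus variant's underlying value (see the GetDaemonStatus
		// fix discussed below) and stop once the unit is actually active.
		if state, ok := prop.Value.Value().(string); ok && state == "active" {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	conn, err := sd.NewSystemConnectionContext(ctx)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("containerd active:", ensureRunning(ctx, conn, "containerd.service"))
}

Under these assumptions, the call only returns once systemd reports the unit active, which is what lets later bootstrap steps assume containerd and kubelet are reachable.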

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Testing Done

Built a dev AMI and verified completion of the bootstrap. Also logged the GetDaemonStatus calls on a dev AMI and recorded roughly 10-second transitions from activating to active for both containerd and kubelet.

This increases the total time-to-node-join metric, but it should reduce retries in other parts of the bootstrap that require connectivity to containerd or other daemons.

nodeadm test logs
[root@ip-172-31-15-250 ~]# journalctl -u nodeadm-run
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal systemd[1]: Starting nodeadm-run.service - EKS Nodeadm Run...
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.7596016,"caller":"init/init.go:54","msg":"Checking user is root.."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.7599344,"caller":"init/init.go:62","msg":"Loading configuration..","configSource":"imds://user-data"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.7624507,"caller":"init/init.go:71","msg":"Loaded configuration","config":{"metadata":{"creationTimestamp":null},"spec":{"cluster":{"name":"my-cluster","apiServerEndpoint":">
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.7627125,"caller":"init/init.go:73","msg":"Enriching configuration.."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.762764,"caller":"init/init.go:153","msg":"Fetching instance details.."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: SDK 2024/09/16 20:07:51 DEBUG attempting waiter request, attempt count: 1
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9515529,"caller":"init/init.go:166","msg":"Instance details populated","details":{"id":"i-0eac6143cb37792cb","region":"us-west-2","type":"m1.small","availabilityZone":"us-w>
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9515986,"caller":"init/init.go:167","msg":"Fetching default options..."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9525166,"caller":"init/init.go:175","msg":"Default options populated","defaults":{"sandboxImage":"602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.5"}}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9526398,"caller":"init/init.go:78","msg":"Validating configuration.."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9526534,"caller":"init/init.go:83","msg":"Creating daemon manager.."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9537528,"caller":"init/init.go:117","msg":"Setting up system aspects..."}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9538164,"caller":"init/init.go:120","msg":"Setting up system aspect..","name":"local-disk"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.95388,"caller":"system/local_disk.go:26","msg":"Not configuring local disks!"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9539227,"caller":"init/init.go:124","msg":"Set up system aspect","name":"local-disk"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9539704,"caller":"init/init.go:120","msg":"Setting up system aspect..","name":"networking"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9540231,"caller":"system/networking.go:79","msg":"writing eks_primary_eni_only network configuration"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.965532,"caller":"init/init.go:124","msg":"Set up system aspect","name":"networking"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517271.9656186,"caller":"init/init.go:133","msg":"Ensuring daemon is running..","name":"containerd"}
Sep 16 20:07:51 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:52 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:52 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:52 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:52 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:53 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:53 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:53 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:54 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:54 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:54 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:54 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:55 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:55 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:55 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:55 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:56 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:56 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:56 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:56 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:57 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:57 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:57 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:57 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:58 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:58 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:58 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:58 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:59 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:59 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:59 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:07:59 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:00 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:00 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:00 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:00 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of containerd.service : "activating"
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517281.810569,"caller":"init/init.go:137","msg":"Daemon is running","name":"containerd"}
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517281.810625,"caller":"init/init.go:139","msg":"Running post-launch tasks..","name":"containerd"}
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517281.810633,"caller":"containerd/sandbox.go:21","msg":"Looking up current sandbox image in containerd config.."}
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517281.9384398,"caller":"containerd/sandbox.go:33","msg":"Found sandbox image","image":"602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.5"}
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517281.948726,"caller":"containerd/sandbox.go:35","msg":"Fetching ECR authorization token.."}
Sep 16 20:08:01 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517281.9796603,"caller":"containerd/sandbox.go:49","msg":"Pulling sandbox image..","image":"602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.5"}
Sep 16 20:08:02 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517282.8603086,"caller":"containerd/sandbox.go:54","msg":"Finished pulling sandbox image","image-ref":"sha256:6996f8da07bd405c6f82a549ef041deda57d1d658ec20a78584f9f436c9a3bb7"}
Sep 16 20:08:02 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517282.8603635,"caller":"init/init.go:143","msg":"Finished post-launch tasks","name":"containerd"}
Sep 16 20:08:02 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517282.8603747,"caller":"init/init.go:133","msg":"Ensuring daemon is running..","name":"kubelet"}
Sep 16 20:08:02 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:03 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:03 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:03 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:03 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:04 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:04 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:04 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:04 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:05 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:05 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:05 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:05 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:06 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:06 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:06 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:06 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:07 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:07 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:07 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:07 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:08 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:08 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:08 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:08 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:09 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:09 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:09 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:09 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:10 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:10 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:10 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:10 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:11 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:11 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:11 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:11 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:12 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:12 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:12 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:12 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:13 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:13 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:13 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: status of kubelet.service : "activating"
Sep 16 20:08:14 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517294.0026217,"caller":"init/init.go:137","msg":"Daemon is running","name":"kubelet"}
Sep 16 20:08:14 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517294.002651,"caller":"init/init.go:139","msg":"Running post-launch tasks..","name":"kubelet"}
Sep 16 20:08:14 ip-172-31-15-250.us-west-2.compute.internal nodeadm[2131]: {"level":"info","ts":1726517294.0026739,"caller":"init/init.go:143","msg":"Finished post-launch tasks","name":"kubelet"}

See this guide for recommended testing for PRs. Some tests may not apply. Completing tests and providing additional validation steps are not required, but they are recommended and may reduce review time and time to merge.

ndbaker1 (Member, Author) commented:

/ci
+workflow:os_distros al2023

Contributor commented:

@ndbaker1 roger that! I've dispatched a workflow. 👍

Contributor commented:

@ndbaker1 the workflow that you requested has completed. 🎉

AMI variant     Build        Test
1.23 / al2023   success ✅   failure ❌
1.24 / al2023   success ✅   failure ❌
1.25 / al2023   success ✅   failure ❌
1.26 / al2023   success ✅   failure ❌
1.27 / al2023   success ✅   failure ❌
1.28 / al2023   success ✅   failure ❌
1.29 / al2023   success ✅   failure ❌
1.30 / al2023   success ✅   failure ❌

ndbaker1 (Member, Author) commented:

/ci
+workflow:os_distros al2023

Contributor commented:

@ndbaker1 roger that! I've dispatched a workflow. 👍

Contributor commented:

@ndbaker1 the workflow that you requested has completed. 🎉

AMI variant     Build        Test
1.23 / al2023   success ✅   success ✅
1.24 / al2023   success ✅   success ✅
1.25 / al2023   success ✅   success ✅
1.26 / al2023   success ✅   success ✅
1.27 / al2023   success ✅   success ✅
1.28 / al2023   success ✅   success ✅
1.29 / al2023   success ✅   success ✅
1.30 / al2023   success ✅   success ✅

ndbaker1 marked this pull request as ready for review September 16, 2024 22:09

Resolved review threads (outdated): nodeadm/internal/daemon/interface.go, nodeadm/internal/kubelet/daemon.go
@@ -55,7 +55,7 @@ func (m *systemdDaemonManager) GetDaemonStatus(name string) (DaemonStatus, error
 	if err != nil {
 		return DaemonStatusUnknown, err
 	}
-	switch status.Value.String() {
+	switch status.Value.Value().(string) {
Member commented:

Not understanding this change. If our existing code was mishandling this, wouldn't we always get DaemonStatusUnknown back?

ndbaker1 (Member, Author) commented Sep 16, 2024:

We didn't have any callers of GetDaemonStatus before this, but when I tried the original code I did just get unknown for everything, so it just never worked 🫠
Per the comments in the overview, the previous implementation switched on the variant's string representation rather than its underlying value, which never matches a plain status string.
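
For context, a tiny standalone example of that difference using only the godbus/dbus/v5 Variant API (not nodeadm's code):

package main

import (
	"fmt"

	"github.com/godbus/dbus/v5"
)

func main() {
	// A variant like the one carried by systemd's ActiveState property.
	v := dbus.MakeVariant("active")

	// String() returns a printable representation of the variant rather than
	// the bare property value, so it never compares equal to "active".
	fmt.Printf("String(): %s -> matches: %v\n", v.String(), v.String() == "active")

	// Value() returns the underlying Go value; the type assertion recovers
	// the plain string that a status switch can match on.
	state, _ := v.Value().(string)
	fmt.Printf("Value():  %s -> matches: %v\n", state, state == "active")
}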

Member commented:

Gotcha gotcha, didn't see that this func was unused previously 👍

@@ -102,3 +110,19 @@ func (m *systemdDaemonManager) Close() {
 func getServiceUnitName(name string) string {
 	return fmt.Sprintf("%s.service", name)
 }
+
+func (m *systemdDaemonManager) waitForStatus(ctx context.Context, unitName string, targetStatus DaemonStatus) error {
Member commented:

much cleaner, thx

cartermckinnon (Member) left a comment:

LGTM. @ndbaker1 can you make sure a fresh CI run goes green before you merge?

ndbaker1 (Member, Author) commented:

/ci
+workflow:os_distros al2023

Contributor commented:

@ndbaker1 roger that! I've dispatched a workflow. 👍

Contributor commented:

@ndbaker1 the workflow that you requested has completed. 🎉

AMI variant     Build        Test
1.23 / al2023   success ✅   failure ❌
1.24 / al2023   success ✅   failure ❌
1.25 / al2023   success ✅   failure ❌
1.26 / al2023   success ✅   failure ❌
1.27 / al2023   success ✅   failure ❌
1.28 / al2023   success ✅   failure ❌
1.29 / al2023   success ✅   failure ❌
1.30 / al2023   success ✅   failure ❌

ndbaker1 (Member, Author) commented:

should be the last one 🤞

/ci
+workflow:os_distros al2023

Contributor commented:

@ndbaker1 roger that! I've dispatched a workflow. 👍

cartermckinnon (Member) commented:

fyi @ndbaker1 you should be able to just use slash ci, it'll target al2023 if the only change is in nodeadm 👍

Contributor commented:

@ndbaker1 the workflow that you requested has completed. 🎉

AMI variant     Build        Test
1.23 / al2023   success ✅   success ✅
1.24 / al2023   success ✅   success ✅
1.25 / al2023   success ✅   success ✅
1.26 / al2023   success ✅   success ✅
1.27 / al2023   success ✅   success ✅
1.28 / al2023   success ✅   success ✅
1.29 / al2023   success ✅   success ✅
1.30 / al2023   success ✅   success ✅

ndbaker1 changed the title from "nodeadm: add retry until daemon is detected running" to "nodeadm: block until daemon status changes are reflected" on Sep 17, 2024
ndbaker1 merged commit a680e63 into awslabs:main on Sep 17, 2024
9 of 10 checks passed
ndbaker1 deleted the running branch on September 17, 2024 05:28