1. What kops version are you running? The command kops version will display
this information.
Client version: 1.29.2 (git-v1.29.2)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
Server Version: v1.29.7
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
We are configuring a local file asset repository; however, we are running into an issue when trying to update the cluster.
We tried to work around #16759 by specifying fileRepository as an S3 URL (even though the docs suggest this should not work) and, to our surprise, kOps accepted it and allowed us to apply it to the cluster.
However, upon rolling the first control-plane node, the node did not come online and the update failed (somewhat expected).
1. Enable fileRepository in the Cluster spec (using an S3 URL as shown below)
2. Copy the file assets: kops get assets --copy
3. Update the cluster: kops update cluster
4. Roll an instance group: kops rolling-update cluster
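The manifest fragment referenced in step 1 was not included in the report; the following is a reconstruction of the relevant Cluster spec section, assuming the fileRepository prefix implied by the nodeup URL in the logs below:

```yaml
# Sketch only, not the reporter's actual manifest:
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
spec:
  assets:
    # S3 scheme: accepted by kOps validation, but unusable by curl on the node
    fileRepository: s3://example-k8s-assets/kops/
```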
5. What happened after the commands executed?
New node fails to join the cluster and cluster validation fails.
Upon SSH'ing into the new node and checking the logs via journalctl -u cloud-final.service we see:
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == nodeup node config starting ==
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == Downloading nodeup with hash 73c2808ac814787ccca9671678d2919fc9023322c3b834ea47b3cef01d6841cb from s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup ==
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == Downloading s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup using curl -f --compressed -Lo nodeup --connect-timeout 20 --retry 6 --retry-delay 10 ==
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: curl: (1) Protocol "s3" not supported or disabled in libcurl
Aug 18 23:09:04 i-09ec9f0eec1f3fd13 cloud-init[1272]: == Failed to download s3://example-k8s-assets/kops/binaries/kops/1.29.2/linux/amd64/nodeup using curl -f --compressed -Lo nodeup --connect-timeout 20 --retry 6 --retry-delay 10 ==
6. What did you expect to happen?
I expected validation to reject an S3 URL in fileRepository before we could apply the changes to the cluster, rather than having new nodes fail to start.
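Since nodeup fetches file assets with curl, which only speaks the protocols compiled into libcurl, the scheme could in principle be checked at kops update time. A minimal sketch of the kind of check the reporter expected (this is a hypothetical validator, not kOps source code):

```python
from urllib.parse import urlparse

# Assumption: nodes download assets with plain curl, so fileRepository
# must use a scheme curl can fetch; restrict to HTTP(S) here.
ALLOWED_SCHEMES = {"http", "https"}

def validate_file_repository(url: str) -> None:
    """Reject fileRepository URLs whose scheme curl cannot download."""
    scheme = urlparse(url).scheme
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(
            f"unsupported fileRepository scheme {scheme!r}: "
            "nodes fetch assets with curl, which cannot download this URL"
        )

validate_file_repository("https://assets.example.com/kops/")  # passes
# validate_file_repository("s3://example-k8s-assets/kops/")   # would raise ValueError
```

With a check like this, kops update cluster would fail fast instead of producing nodes that break during bootstrap.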
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know?
/kind bug