Upgrade to v18.4 failed, cannot install both kubelet-1.18.4-1.x86_64 and kubelet-1.18.4-0.x86_64 #3044
/sig Release
This helps me as a short-term solution, but it does not fix the actual package conflict.
@StefanSa -- What happens when you completely remove
(Related to kubernetes/kubernetes#92242.) |
Due to a bug in k8s there is a conflict issue during provisioning. https://github.com/kubernetes/kubernetes/issues/92463 Signed-off-by: Or Shoval <oshoval@redhat.com>
@justaugustus
As already mentioned, we don't see that on our nodes, which are based on CentOS 7. Currently we only see this problem on the master, which is based on CentOS 8.
* Update k8s 1.18 install to overcome conflict errors. Due to a bug in k8s there is a conflict issue during provisioning. https://github.com/kubernetes/kubernetes/issues/92463 Signed-off-by: Or Shoval <oshoval@redhat.com>
* Update k8s 1.17 install to overcome conflict errors. Due to a bug in k8s there is a conflict issue during provisioning. https://github.com/kubernetes/kubernetes/issues/92463 Signed-off-by: Or Shoval <oshoval@redhat.com>
* Update K8s 1.17 and 1.18 hashes. Signed-off-by: Or Shoval <oshoval@redhat.com>
There does still seem to be an issue on CentOS 8. I had the same issue when testing with 1.17.8, 1.18.1, 1.18.3, and 1.18.5.
I still had the same issue with the 1.18.6 to 1.19.0 upgrade. I use CentOS 8.
I added an exclude line to my Kubernetes yum repo config, then installed with the excludes disabled. I hope it can help you.
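For anyone searching later, the exclude workaround usually looks like the following sketch (the repo file path and baseurl are assumptions based on the default install instructions and may differ on your system):

```ini
# /etc/yum.repos.d/kubernetes.repo (sketch)
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
# Keep a plain 'yum update' from touching the Kubernetes packages,
# which avoids the kubelet version conflict described in this issue:
exclude=kubelet kubeadm kubectl
```

Upgrades are then done explicitly, lifting the excludes only for that one transaction, e.g. `yum install --disableexcludes=kubernetes kubelet-1.18.4 kubeadm-1.18.4 kubectl-1.18.4`.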
@huataihuang What repository url do you have? I've been using https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 (which is what the kubernetes website tells me to use) and it doesn't seem to have kubelet-1.19 in it...
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is my repository URL as well.
I followed the Kubernetes website guide to deploy, and I added the exclude config based on my own experience; maybe you just haven't hit this version conflict.
/assign @saschagrunert
Sascha -- can you take a look or assign to someone on @kubernetes/release-engineering to investigate? The following issues are similar classes of issue for apt/yum:
/help
@saschagrunert: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
It did help, quite a lot. Thanks for this. I am adding this to my Ansible Kubernetes deployment playbooks. Sweet and short. Thanks a ton!
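A hypothetical Ansible fragment along those lines (the task names, repo file path, and target version are assumptions for illustration, not taken from this thread):

```yaml
# Hypothetical sketch: pin the exclude workaround into the repo file,
# then upgrade with the excludes lifted for this transaction only.
- name: Exclude Kubernetes packages from plain 'yum update'
  ansible.builtin.lineinfile:
    path: /etc/yum.repos.d/kubernetes.repo
    line: exclude=kubelet kubeadm kubectl

- name: Upgrade Kubernetes packages explicitly
  ansible.builtin.yum:
    name:
      - kubelet-1.18.4
      - kubeadm-1.18.4
      - kubectl-1.18.4
    state: present
    disable_excludes: kubernetes
```

The `disable_excludes` parameter of the yum module mirrors `--disableexcludes=kubernetes` on the command line.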
This is it thanks !!! |
I have the same issue with AlmaLinux 9.1:
The workaround with the exclude fixes it:
Same question on CentOS 9 Stream:

root in ~
≥ uname -a
Linux master 5.14.0-307.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Wed May 3 06:16:28 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

root in ~
≥ yum upgrade
Last metadata expiration check: 0:06:42 ago on Wed 10 May 2023 08:38:13.
Error:
 Problem: cannot install both kubelet-1.18.4-0.x86_64 and kubelet-1.27.1-0.x86_64
  - cannot install the best update candidate for package kubernetes-cni-1.2.0-0.x86_64
  - cannot install the best update candidate for package kubelet-1.27.1-0.x86_64
(try adding '--allowerasing' to the command line to replace conflicting packages, '--skip-broken' to skip uninstallable packages, or '--nobest' to use packages other than the best candidates)
The last supported version is 1.24 (https://kubernetes.io/releases/). There is no supported upgrade path from 1.18 to any of the supported versions: https://kubernetes.io/releases/version-skew-policy/#supported-component-upgrade-order
/close
@aojea: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@aojea I think there's a misunderstanding. This bug applies to the latest versions even though the error message presented incorrectly mentions 1.18. The problem is that running "yum upgrade" will attempt to install 1.18 even if you are using a much newer version already. |
Yes, the bug is not related to whether 1.18 is supported or not. It still happens when trying to update to latest Kubernetes. Please reopen. PS: actually removing unsupported Kubernetes versions from the main yum repo may help. Another option is to have separate repositories for each of the major releases. /reopen |
@vrusinov: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@aojea please reopen. The title of this issue is incorrect, this happens when trying to update to any latest version. Even from 1.25.x to 1.26.x. |
All the snippets in this issue are with 1.18; can you paste a snippet with those versions?
Installing 1.26 ... from 1.18: anything that is not on a supported version (latest is 1.24) and within the supported skew (n-2 versions with respect to the control plane) is not supported. #3044
@aojea No, the issue occurs no matter what the original and target versions are. No matter what version you are upgrading from or to, 1.18 will incorrectly get marked for installation. The issue does not require 1.18 to be the original or target version in the upgrade. Even if you are upgrading from 1.25 to 1.26 for example, it will still mark 1.18 to be installed if you use "yum update". |
lol, I didn't understand that, sorry
/reopen
Do we have 1.18 hardcoded somewhere?
@aojea: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the appropriate label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think that the rpm sources are here: https://github.com/kubernetes/release/tree/master/packages/rpm
/transfer kubernetes/release
@aojea: Something went wrong or the destination repo kubernetes/kubernetes/release does not exist. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/transfer release |
I could be wrong here, but I don't think it is an issue with the RPMs themselves, but rather with how the dependency tree is generated for Kubernetes' yum repository. There is probably an easy way to check: just remove the 1.16.x, 1.17.x, and 1.18.x RPMs from the repository and regenerate the yum metadata. Those old versions should not be used anyway.
Can confirm that the behavior suspected by @ViliusS above is indeed correct. I set up a local copy of the Kubernetes repository with only the latest packages. As demonstrated by the output below, when the repository only contains the 1.27.x packages, there is no dependency issue.
Also, using the official repository that contains all versions, I am able to reproduce the behavior initially reported in this issue.
As the new repositories do not include Kubernetes 1.16, 1.17, or 1.18, this issue can be closed.
Still facing something similar in 2024. I used a wildcard exclude to ignore all 1.1x versions.
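The wildcard variant can be expressed in the repo definition like this (a sketch; the exact pattern and package names are assumptions you should adjust to your setup):

```ini
# /etc/yum.repos.d/kubernetes.repo (fragment, sketch)
# Hide all 1.1x-era builds from the resolver so 'yum update'
# no longer tries to pull the old kubelet back in:
exclude=kubelet-1.1* kubeadm-1.1* kubectl-1.1*
```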
@solutionstack I don't expect any further change to the repo content here. If you didn't notice, the repos were moved last year to a different location, with individual versioned repos, and the old content was frozen.
What happened:
On the master (CentOS 8) there is the following error message when trying to update the packages. We don't see this problem on the nodes (CentOS 7). Likewise, we did not see this error on the master (CentOS 8) during the last update, to v1.18.3.

How to reproduce it (as minimally and precisely as possible):
yum update / dnf update

Environment:
- Kubernetes version (use kubectl version):
- OS (e.g.: cat /etc/os-release): CentOS 8