Offline Installation with secure file repo and registry #10294

Open
mrmcmuffinz opened this issue Jul 17, 2023 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@mrmcmuffinz

mrmcmuffinz commented Jul 17, 2023

Objective:

I'm trying to install k8s using kubespray with offline mode.

Observations:
I have been able to bootstrap a cluster in offline mode, but not without some issues, and I'm seeking guidance on how to properly implement some of the "hacks" I put in place.

Questions:
1. How do you properly set the username and password for "{{ file_repos }}" used to download the binaries?

What did I do to work around this?
I redefined this section https://github.com/kubernetes-sigs/kubespray/blob/release-2.21/roles/download/defaults/main.yml#L1985-L1996 in my inventory and set the values for username and password. I suspect this is not the right way to do it, and I'd like guidance on what the right way would be.

Suggestion:
Update the offline-installation docs to explain how to properly set up authentication for the file repo (a sketch of one possible approach follows).
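
For what it's worth, a less invasive workaround than copying the whole defaults block: Kubespray's offline sample inventory is driven by the files_repo variable (see docs/offline-environment.md), and if the internal file server accepts HTTP basic auth, the credentials could be embedded in that URL. This is only a sketch under that assumption; the host and the repo_* variables are placeholders, not existing Kubespray variables.

```yaml
# group_vars/all/offline.yml (sketch; files.internal.example and repo_* are placeholders)
repo_username: "myuser"
repo_password: "{{ vault_repo_password }}"   # keep the secret in Ansible Vault

# Embed basic-auth credentials in the mirror URL; the *_download_url variables in the
# offline sample all build on files_repo, so they would inherit the credentials.
files_repo: "https://{{ repo_username }}:{{ repo_password }}@files.internal.example/kubespray"
```

Whether the user:password-in-URL form works end to end depends on the mirror and on how the download tasks fetch files, so treat this as a starting point rather than a supported feature.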

2. I have a similar situation with the container images downloaded for offline use. I have mirrored all of them into a secure private registry on-prem. The problem is that this registry does not accept unauthenticated/anonymous requests, and on top of that, depending on the container runtime you use (in my case the default, containerd), the CLI is not installed on the cluster nodes by default either. How do you solve this chicken-and-egg problem? I don't see any configuration or variables for nerdctl that would let the playbook log in before it attempts to pull the images from the secure registry. In https://github.com/kubernetes-sigs/kubespray/blob/release-2.21/roles/download/tasks/download_container.yml#L56 I don't see any code that performs a nerdctl login, and I don't see it in https://github.com/kubernetes-sigs/kubespray/blob/release-2.21/roles/download/tasks/prep_download.yml either, which is a bit odd.

What did I do to work around this?
This one felt particularly egregious, but since it's my dev environment I did it once to understand how everything fits together. After the Kubespray playbook failed and I figured out that I had to log in to my private registry, I SSH'ed into each of the three nodes in my cluster, ran a manual nerdctl login, and re-ran the Kubespray playbook. While this works, it does not scale, and I don't think it's a good idea for me to have to write my own playbook to do the login after the fact (a sketch of what such a one-off play might look like follows).
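
For illustration, the manual logins could at least be scripted as a small one-off play run before cluster.yml. This is only a sketch: registry.internal.example and the registry_* variables are placeholders, and it assumes containerd with nerdctl already present on the nodes.

```yaml
# One-off play (sketch): log every node into the private registry with nerdctl.
# The host name and the registry_* variables are placeholders, not Kubespray variables.
- hosts: k8s_cluster
  become: true
  tasks:
    - name: Log in to the private registry with nerdctl
      ansible.builtin.command: >
        nerdctl login registry.internal.example
        --username {{ registry_username }}
        --password-stdin
      args:
        stdin: "{{ registry_password }}"
      changed_when: false
      no_log: true
```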

Suggestion:

  1. I think this task https://github.com/kubernetes-sigs/kubespray/blob/release-2.21/roles/download/tasks/main.yml#L19 could be split into two: one loop for binaries and another for images, with an optional login task for the offline secure registry in between. This would also have to take the container runtime and its CLI into account. What I don't know is whether nerdctl can use the /etc/containerd.conf config file for logging into the registry; I looked into this initially and couldn't find anything. (A sketch of what that optional step might look like follows.)
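
To sketch what that optional login step might look like: nerdctl generally reads Docker-style credentials from ~/.docker/config.json rather than from containerd's own config, so the task could simply write that file on each node before the image loop runs. The offline_registry_* variables below are placeholders, not existing Kubespray variables.

```yaml
# Sketch of the proposed optional login step (offline_registry_* vars are placeholders).
- name: Ensure /root/.docker exists
  ansible.builtin.file:
    path: /root/.docker
    state: directory
    mode: "0700"
  when: offline_registry_username is defined

- name: Write Docker-style registry credentials for nerdctl
  ansible.builtin.copy:
    dest: /root/.docker/config.json
    mode: "0600"
    content: |
      {
        "auths": {
          "{{ offline_registry_host }}": {
            "auth": "{{ (offline_registry_username ~ ':' ~ offline_registry_password) | b64encode }}"
          }
        }
      }
  when: offline_registry_username is defined
  no_log: true
```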

Thank you for reading my giant wall of text; I hope I was able to explain myself clearly, and I look forward to your guidance.

Thanks,

-MrMcMuffinz.

@mrmcmuffinz mrmcmuffinz added the kind/support Categorizes issue or PR as a support question. label Jul 17, 2023
@ErikJiang
Member

For containerd login authentication, see containerd_registry_auth; alternatively, you can add insecure registries via containerd_insecure_registries.

file_repos doesn't seem to provide an authentication parameter at the moment, so if you're in an offline environment (using MinIO as an example), setting the relevant bucket policy to public is recommended.
😀
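
For readers landing here later, a minimal inventory sketch of the two variables mentioned above; the registry host and credentials are placeholders, and the exact variable shape should be checked against docs/containerd.md of the Kubespray release in use.

```yaml
# containerd pull-time authentication (host and credentials are placeholders)
containerd_registry_auth:
  - registry: registry.internal.example:5000
    username: myuser
    password: "{{ vault_registry_password }}"

# only needed if the registry is plain HTTP or uses a certificate the nodes don't trust
containerd_insecure_registries:
  "registry.internal.example:5000": "http://registry.internal.example:5000"
```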

@hamedsol

My main restriction is just an internet proxy in my dev environment. Do you know where I can set the proxy in the playbooks so the downloads get through the firewall?
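
For reference, Kubespray already exposes proxy settings in the sample group_vars/all/all.yml; a minimal sketch, with the proxy address and bypass list as placeholders:

```yaml
# group_vars/all/all.yml (proxy address and bypass entries are placeholders)
http_proxy: "http://proxy.internal.example:3128"
https_proxy: "http://proxy.internal.example:3128"
# hosts/networks that must bypass the proxy, in addition to what Kubespray adds itself
additional_no_proxy: "registry.internal.example,.internal.example"
```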

@VannTen
Contributor

VannTen commented Feb 9, 2024

/remove-kind support
/kind feature
(which would be to support authenticated access during download for offline mode)

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/support Categorizes issue or PR as a support question. labels Feb 9, 2024
@cello86

cello86 commented Apr 8, 2024

Is there any way to configure a PAT (personal access token) for the ghcr.io registry used to download the Kubernetes images?
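
One hedged possibility, assuming containerd as the runtime: ghcr.io generally accepts a personal access token as the registry password, so it could be supplied through the same containerd_registry_auth variable mentioned earlier (username and token below are placeholders).

```yaml
containerd_registry_auth:
  - registry: ghcr.io
    username: my-github-user
    password: "{{ vault_ghcr_pat }}"   # a GitHub personal access token, kept in Ansible Vault
```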

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 6, 2024
@VannTen
Contributor

VannTen commented Aug 27, 2024

/remove-lifecycle rotten
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Aug 27, 2024