Import gke-certificates-controller from k8s.io/kubernetes #1
Conversation
This adds a new stand-alone certificates controller for use on GKE. It allows calling GKE to sign certificates instead of requiring the CA private key locally. It does not aim for 100% feature parity with kube-controller-manager yet; for instance, leader election support is omitted.
On errors, the GKE signing API can respond with a JSON body that contains an error message explaining the failure. If we're able to extract it, we use that message when reporting the error instead of the generic error returned by the webhook library. We also always add an event to the CSR object on signing errors.
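For illustration, here is a minimal Go sketch of the error-extraction idea described above. The `gkeErrorBody` shape, its field names, and the `errorMessageFrom` helper are assumptions made for this sketch, not the actual GKE API response schema or the controller's code:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// gkeErrorBody is a hypothetical shape for a JSON error response such as
// {"error": {"message": "..."}}.
type gkeErrorBody struct {
	Error struct {
		Message string `json:"message"`
	} `json:"error"`
}

// errorMessageFrom tries to pull a human-readable message out of the JSON
// error body. If parsing fails or the field is empty, it falls back to the
// generic error (e.g. the one returned by the webhook library).
func errorMessageFrom(body []byte, generic error) error {
	var parsed gkeErrorBody
	if err := json.Unmarshal(body, &parsed); err == nil && parsed.Error.Message != "" {
		// Prefer the API's own explanation of the failure.
		return errors.New(parsed.Error.Message)
	}
	return generic
}

func main() {
	body := []byte(`{"error": {"message": "CSR rejected: unknown node"}}`)
	fmt.Println(errorMessageFrom(body, errors.New("webhook call failed")))
}
```

The benefit is that the API's own message, surfaced in the CSR event, is far more actionable than a generic webhook failure.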
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest
/assign
This controller binary is being relocated from the main repository and currently runs only on GKE. The removal from the main repo is in kubernetes/kubernetes#59933.
I'm interested in whether we can remove the dependencies on k8s.io/kubernetes packages, which weren't designed to be vendored. I think it's probably best to do that in a follow-up, though. /lgtm
/assign @nicksardo
It's safe to ignore the CLA bot because all of these commits already exist in k8s.io/kubernetes; this is only a code move. bgrant0607, cblecker and thockin signed off on this.
@mikedanese I added you as an
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: mikedanese. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Does it make sense to throw everything into the cloud provider repo? Unless I'm looking at this incorrectly, we are creating another monolith. As I understand it, cloud provider should contain only the base-layer stuff (i.e. pkg/cloudprovider/providers/gce).
/hold |
@bowei are you proposing that the gcp-cloud-controller-manager main not live in this repository?
Yes -- for example, Ingress is going to vendor this repo, and I'm assuming the other controllers will as well. It would be good to avoid coupling all of these together into a mono-repo again, since we are moving away from that.
Is the unit of vendoring a package or a repository? The application code for the gke certificates controller being imported right now lives in cmd/gke-certificates-controller/... This is a pattern that we didn't follow with the main repo but have started to implement with newer components like kubeadm. I suspect library code for the cloudprovider would go in pkg/gcp/... or something. With import enforcement, I think we can have both a vendorable, reusable cloudprovider library and cloud-controller-manager mains in the same repo. However, I haven't been tuned in to the discussion. @cheftako @calebamiles can you comment on what the purpose of this repo is?
Ping @mikedanese @bowei: can we merge this?
This is blocking other work. @bowei, can you sync with @cheftako @calebamiles? The current intention is to locate mains in this repo; if someone gives us another place, we can move this code there. This code is intended to run in the gcp-cloud-controller-manager.
/unhold |
/hold cancel |
Update k8s.io/legacy-cloud-providers to v0.17.4
In a fresh GCP project, kube-up.sh will fail on two runs before succeeding, because it copies two files to the staging bucket during a run. When file #1 is copied to the bucket, we hit an ACL-setting line that fails with odd bucket-permission errors. On the next run, kube-up discovers that file #1 is already in the bucket, skips copy-to-staging for it, and proceeds to copy file #2 to staging; the same error occurs. On the third run, both files are found to be in the bucket, so the script proceeds without incident and completes a successful run. With the addition of || true, multiple runs are no longer necessary.
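For illustration, a minimal shell sketch of the pattern that commit describes. The copy_to_staging function, the variable names, and the exact gsutil invocation are assumptions for this sketch, not the actual kube-up.sh code:

```bash
#!/usr/bin/env bash
# Copy a file to the staging bucket only if it isn't already there.
copy_to_staging() {
  local file="$1" bucket="$2"
  if ! gsutil ls "gs://${bucket}/${file}" >/dev/null 2>&1; then
    gsutil cp "${file}" "gs://${bucket}/"
    # The ACL call can fail with spurious permission errors on a fresh
    # project; "|| true" keeps that failure from aborting the whole run,
    # since the file itself has already been uploaded successfully.
    gsutil acl ch -g all:R "gs://${bucket}/${file}" || true
  fi
}
```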
UPSTREAM: <carry>: Change upstream OWNERS
Ported over the commit history and added vendored deps.