CNF-13731: Cert Manager HTTP01 Proxy #1773
Conversation
@sebrandon1: This pull request references CNF-13731 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.19.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Hi @sebrandon1. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle stale. If this proposal is safe to close now please do so with /close. /lifecycle stale
Force-pushed from faf4430 to b77ce7e
It looks like the
I'm not sure what the policy is on removing certain sections altogether, but at the very least I would expect to still see the following sections:
If you need a more explicit template to follow for this EP, there is one with some good general advice for what should be in each section here: https://github.com/openshift/enhancements/blob/master/guidelines/enhancement_template.md
Stale enhancement proposals rot after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle rotten. If this proposal is safe to close now please do so with /close. /lifecycle rotten
Force-pushed from ae29d68 to 21b454b
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
### Implementation Details/Notes/Constraints

- The proxy will be deployed as a DaemonSet to ensure it runs on all nodes which may host the API VIP in the cluster.
What will deploy the DaemonSet? Will this be a default component on OpenShift clusters? Does it need to be a part of the core payload or can it be a "layered product"?
@mvazquezc can correct me if I'm wrong but this will be an optional feature that Telco customers can enable.
What approaches have you considered for allowing this to be an opt-in feature?
I think the idea was to make it part of the core payload. The DaemonSet could be deployed by the OpenShift API Server Operator maybe?
By default we don't expect the proxy to be deployed. This will be an opt-in feature that the customer needs to activate through the use of the proposed HTTP01Proxy API.
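For illustration only, here is a minimal sketch of what that opt-in resource could look like, assuming a cluster-scoped API; the group, version, kind spelling, and field names below are placeholders drawn from this thread, not a finalized schema:

```yaml
# Hypothetical sketch: API group/version and fields are assumptions, not the
# enhancement's finalized API.
apiVersion: config.openshift.io/v1alpha1
kind: HTTP01Proxy
metadata:
  name: cluster          # cluster-scoped singleton, conventionally named "cluster"
spec:
  enabled: true          # opt-in: nothing is deployed until a customer enables it
  listenPort: 8888       # host port the proxy binds to; exposing it avoids hard-coding 8888
```

Whether this would live under an existing config group or a dedicated one presumably depends on which component ends up owning the controller.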
I'm assuming we need to agree on how/who/what deploys the DaemonSet here before the enhancement is finalized, right?
- The proxy will be deployed as a DaemonSet to ensure it runs on all nodes which may host the API VIP in the cluster.
- The nftables rules will be added to the nodes. The proxy will listen on port 8888 and redirect traffic to the OpenShift Ingress Routers.
- The implementation relies on `nftables` for traffic redirection, which must be supported and enabled on the cluster nodes.
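To make the quoted bullets above concrete, here is a hedged sketch of how the DaemonSet and the nftables redirection could fit together; the namespace, labels, image, and the nft table/chain names are all placeholders and not part of the enhancement:

```yaml
# Illustrative only. A privileged init container programs the host's netfilter
# so inbound port 80 (HTTP-01 challenge traffic) is redirected to the proxy
# listening on host port 8888.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: http01-challenge-proxy          # placeholder name
  namespace: openshift-http01-proxy     # placeholder namespace
spec:
  selector:
    matchLabels:
      app: http01-challenge-proxy
  template:
    metadata:
      labels:
        app: http01-challenge-proxy
    spec:
      hostNetwork: true                  # the proxy binds :8888 in the host network namespace
      nodeSelector:
        node-role.kubernetes.io/master: ""   # placeholder: nodes that may host the API VIP
      initContainers:
      - name: add-nftables-redirect
        image: example.com/http01-proxy:latest   # placeholder image
        securityContext:
          privileged: true               # needed to program nftables on the host
        command: ["/bin/sh", "-c"]
        args:
        - |
          nft add table ip http01-proxy
          nft 'add chain ip http01-proxy prerouting { type nat hook prerouting priority dstnat ; policy accept ; }'
          nft add rule ip http01-proxy prerouting tcp dport 80 redirect to :8888
      containers:
      - name: proxy
        image: example.com/http01-proxy:latest   # placeholder image; forwards challenges to the ingress routers
        ports:
        - containerPort: 8888
```

In practice the redirect would presumably be scoped to the API VIP address rather than to all port-80 traffic, and the rules would need to survive a ruleset flush; those details belong in the enhancement text itself.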
What should happen if nftables is not enabled on the cluster nodes?
AFAIK nftables is required for OCP nodes to run. OVN-K requires it (maybe other CNIs don't), but I still don't think it can be disabled.
Any references that state it cannot be disabled? Isn't nftables a systemd service that runs on the nodes? Can someone modify the configurations that MCO applies to mark it as disabled?
Maybe we should consult someone from the MCO team to verify whether or not nftables can be disabled?
Maybe @yuqi-zhang ?
The MCO doesn't directly manage nftables in any way on the nodes. I believe it's being shipped (and mostly configured) in RHCOS directly.
As a user, you can technically do whatever you want, including disabling nftables via a MachineConfig, but that would be an explicit user-driven decision and not something we manage.
Thanks for the additional information @yuqi-zhang!
With that in mind - and no other information provided on it being required - I'd like this EP's considerations to include what should happen if nftables isn't present or is disabled, and how we plan to detect that.
Nftables is always present; it is part of the RHCOS payload.
Baremetal and AWS (at least) OCP clusters install nftables tables, chains and rules. Execute `nft list ruleset` on any node to list rules. OVN-Kubernetes relies on netfilter (iptables/nftables) for features like UDN, Egress and Services.
In this thread, nftables being enabled/disabled is mentioned a few times. Note that the nftables systemd unit is disabled out of the box, but netfilter is on and can be configured via the nftables CLI/API without the nftables systemd unit being enabled.
The nftables systemd service is just a helper to facilitate the load/unload of custom firewall rules:
[Unit]
Description=Netfilter Tables
Documentation=man:nft(8)
Wants=network-pre.target
Before=network-pre.target
[Service]
Type=oneshot
ProtectSystem=full
ProtectHome=true
ExecStart=/sbin/nft -f /etc/sysconfig/nftables.conf
ExecReload=/sbin/nft 'flush ruleset; include "/etc/sysconfig/nftables.conf";'
ExecStop=/sbin/nft flush ruleset
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
Let's include some of this context in the enhancement.
I think it is important for us to document and understand whether it is possible for the feature we are relying on to not be present based on a user configuration. If it is not possible, let's explicitly detail that. If it is possible, let's document that and how we take it into consideration in this design.
From this additional information, it sounds like it is not possible to remove the thing we would be relying on here.
message: "HTTP01ChallengeProxy is ready" | ||
``` | ||
|
||
### Implementation Details/Notes/Constraints |
I left some comments with some further questions on how this is being done, but I'd like to see a more descriptive section here that explains how certain aspects are going to be implemented.
For example, does this need to be a core component of the OpenShift payload that is shipped with every cluster? If so, you may want to consider the cluster operator approach: https://github.com/openshift/enhancements/blob/master/dev-guide/operators.md
If not, could this be implemented as an operator installable through OLM?
Should multiple instances of a proxy be allowed to run? If so, what are some issues that may be encountered when running multiple instances? How do you plan to mitigate these issues?
Answering only the last part, will let others chime in for the first part.
No, multiple instances of a proxy are not allowed to run. There would be port collisions since the port used (8888) will be listening on the host network namespace. This can be mitigated by checking if the port is in use when initializing the proxy. During upgrades, etc., the idea is to use the recreate strategy so the existing proxy gets deleted before the new one gets created.
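As a possible illustration of that recreate behavior: a DaemonSet has no Recreate strategy the way a Deployment does, but its default rolling update with maxSurge set to 0 deletes the old pod on a node before creating its replacement, so two proxies never contend for host port 8888 on the same node. The fragment below is a sketch, not the enhancement's final spec:

```yaml
# Hypothetical DaemonSet fragment: delete-before-create per node during upgrades.
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # one node's proxy is replaced at a time
      maxSurge: 0         # never run old and new proxy pods on the same node simultaneously
```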
How do you plan to ensure that only a single instance of the proxy runs? Is this one proxy per node, or one proxy per cluster?
If there is risk of port collision, would it make sense to allow configuration of the port from the user perspective?
Echoing the above, if the API for this feature is cluster scoped and allows configuration of the internal port used, I think we avoid both risks.
To me, delivering this as part of the OpenShift payload makes sense, but a full ClusterOperator or standalone OLM operator feels heavyweight for the function. Adding this functionality to an existing component would likely fall to either the API server or cert manager.
It certainly doesn't have to be a full ClusterOperator or a standalone OLM operator. You just need to further specify in this enhancement what component(s) will be responsible for ensuring the proxy gets stamped out onto the cluster and under which conditions.
## Upgrade / Downgrade Strategy

Updated versions of the proxy can be applied to the cluster similar to the initial deployment.
Beyond the proxy itself, how do you plan to handle typical upgrade situations where there may be older and newer instances of the proxy running at the same time due to a rolling upgrade? Are there any special considerations needed for this scenario?
What happens if an upgrade fails midway through? What steps would a user need to take to get their cluster back to the previous state?
We can't have multiple instances of the proxy running. They listen on port :8888 in the host network. We can control that with a recreate policy for the deployment/daemonset. In terms of new/old versions, the only thing that may be changing (and I doubt it) is the nftables rule that redirects traffic from port 80 to proxy port 8888. But we should still ensure backwards compatibility in the proxy code.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Adding additional context here.
## Version Skew Strategy

Any changes to the proxy's behavior will be documented to ensure compatibility with older cluster versions.
Do there need to be any special considerations for the interactions between the component that deploys the proxy and updates node configurations, and the component that rolls out node configuration changes?
During upgrades there will always be version skew between components, so capturing how your component handles this is helpful.
@mvazquezc mentioned in another comment that there will be some checking of the OCP version as part of the proxy deployment.
## Support Procedures

Support for the proxy will be provided through standard OpenShift support channels. Administrators can refer to the deployment documentation and logs for troubleshooting.
I'd like to see a bit more information here; "standard support channels" isn't all that helpful in representing how someone providing support can evaluate whether this component is behaving correctly and what steps they can take to mitigate issues it may be causing on the system.
The template section has some good examples for things to take into consideration: https://github.com/openshift/enhancements/blob/master/guidelines/enhancement_template.md#support-procedures
Updated based on template.
Rotten enhancement proposals close after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Reopen the proposal by commenting /reopen. /close
@openshift-bot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen
@sebrandon1: Reopened this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@sebrandon1: This pull request references CNF-13731 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.20.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Force-pushed from bc39b89 to ab590b5
Force-pushed from 0975367 to 67d3e63
@sebrandon1: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Draft for adding an enhancement for the Cert Manager HTTP01 proxy.
Based on #1682 for inspiration. Thanks @swghosh for pointing me to that.
cc @mvazquezc