This repository has been archived by the owner on Oct 24, 2023. It is now read-only.

Use registry.k8s.io for components (#5071)
mboersma authored Feb 21, 2023
1 parent 4abc935 commit fb9d128
Showing 34 changed files with 106 additions and 106 deletions.
18 changes: 9 additions & 9 deletions docs/design/custom-container-images.md
@@ -5,7 +5,7 @@
The existing AKS Engine Kubernetes component container image configuration surface area presents obstacles in the way of:

1. quickly testing/validating specific container images across the set of Kubernetes components in a working cluster; and
-2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated k8s.gcr.io container images.
+2. using Azure Container Compute Upstream-curated MCR container images instead of Kubernetes SIG-Release-curated registry.k8s.io container images.

## Proximate Problem Statements

@@ -14,15 +14,15 @@ The existing AKS Engine Kubernetes component container image configuration surfa
- https://github.com/Azure/aks-engine/issues/2378
2. At present, the "blessed" component configuration image URIs are maintained via a concatenation of two properties:
- A "base URI" property (`KubernetesImageBase` is the property that has the widest impact across the set of component images)
-  - e.g., `"k8s.gcr.io/"`
+  - e.g., `"registry.k8s.io/"`
- A hardcoded string that represents the right-most concatenation substring of the fully qualified image reference URI
- e.g., `"kube-proxy:v1.16.1"`

-In summary, in order to render `"k8s.gcr.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"k8s.gcr.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
+In summary, in order to render `"registry.k8s.io/kube-proxy:v1.16.1"` as the desired container image reference to derive the kube-proxy runtime, we set the KubernetesImageBase property to `"registry.k8s.io/"`, and rely upon AKS Engine to append `"kube-proxy:v1.16.1"` by way of its hardcoded authority in the codebase for the particular version of Kubernetes in the cluster configuration (1.16.1 in this example).
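
The concatenation described in the paragraph above can be sketched as follows; this is an illustrative Go snippet (the function name and signature are hypothetical, not the actual AKS Engine code):

```go
package main

import "fmt"

// buildImageRef illustrates the rendering scheme described above: a
// configurable base URI (the KubernetesImageBase property) is joined with a
// hardcoded, Kubernetes-version-specific image substring.
func buildImageRef(kubernetesImageBase, componentImage string) string {
	return kubernetesImageBase + componentImage
}

func main() {
	// The base supplies the registry prefix; AKS Engine appends the
	// hardcoded component string for the configured Kubernetes version.
	fmt.Println(buildImageRef("registry.k8s.io/", "kube-proxy:v1.16.1"))
}
```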

In practice, this means that the `KubernetesImageBase` property is effectively a "Kubernetes component image registry mirror base URI" property, and in fact this is exactly how that property is leveraged, to redirect container image references to proximate origin URIs when building clusters in non-public cloud environments (e.g., China Cloud, Azure Stack).

-To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a k8s.gcr.io container registry mirror. This presents a problem w/ respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.
+To conclude with a concrete problem statement, it is this: the current accommodations that AKS Engine provides for redirecting Kubernetes component container images to another origin assume a registry.k8s.io container registry mirror. This presents a problem w/ respect to migrating container image configuration to an entirely different container registry URI reference specification, which is what the MCR container image migration effort effectively does.

# A Proposed Solution

@@ -98,9 +98,9 @@ In summary, we will introduce a new "components" configuration interface (a sibl

~

-Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the k8s.gcr.io container registry origin, or a mirror that follows its specification".
+Now we have addressed the problem of "how to quickly test and validate specific container images across the set of Kubernetes components in a working cluster", which is a critical requirement for the Azure Container Compute Upstream effort to maintain and curate Kubernetes component container images for AKS and AKS Engine. Next we have to address the problem of "how to re-use existing AKS Engine code to introduce a novel mirror specification (MCR) while maintaining backwards compatibility with existing clusters running images from gcr; and without breaking any existing users who are not able to convert to MCR (or don’t want to), and must rely upon the registry.k8s.io container registry origin, or a mirror that follows its specification".

-As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "k8s.gcr.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of k8s.gcr.io).
+As stated above, the main point of friction is that the configuration vector currently available to "redirect" the base URI of the origin for sourcing Kubernetes component images assumes, in practice, a "registry.k8s.io mirror". The MCR container registry origin that is being bootstrapped by the Azure Container Compute Upstream team right now does not match that assumption, and thus we can’t simply re-use the existing configurable space to "migrate to MCR images" (e.g., we cannot simply change the value of `KubernetesImageBase` to `"mcr.microsoft.com/oss/kubernetes/"`, because "mcr.microsoft.com/oss/kubernetes/" is not a mirror of registry.k8s.io).

What we can do is add a "mirror type" (or "mirror flavor", if you prefer) configuration context to the existing `KubernetesImageBase` property, allowing us to maintain easy backwards-compatibility (by keeping that property valid), and then adapt the underlying hardcoded "image URI substring" values to be sensitive to that context.

@@ -111,12 +111,12 @@ Concretely, we could add a new sibling (of KubernetesImageBase) configuration pr

The value of that property tells the template generation code flows to generate container image reference URI strings according to one of the known specifications supported by AKS Engine:

-- k8s.gcr.io
-  - e.g., `"k8s.gcr.io/kube-addon-manager-amd64:v9.0.2"`
+- registry.k8s.io
+  - e.g., `"registry.k8s.io/kube-addon-manager-amd64:v9.0.2"`
- mcr.microsoft.com/oss/kubernetes
- e.g., `"mcr.microsoft.com/oss/kubernetes/kube-addon-manager:v9.0.2"`
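
The base-type dispatch described above can be sketched as follows; the function and constant values here are illustrative assumptions, not the actual AKS Engine identifiers:

```go
package main

import "fmt"

// componentImageRef sketches the proposed "mirror type" context: the value of
// a KubernetesImageBaseType-style property selects which hardcoded image-name
// specification is concatenated with the configured base URI.
func componentImageRef(imageBase, imageBaseType string) string {
	switch imageBaseType {
	case "mcr":
		// The MCR specification names the image without the -amd64 suffix.
		return imageBase + "kube-addon-manager:v9.0.2"
	default:
		// Fall back to the upstream (registry.k8s.io-style) specification.
		return imageBase + "kube-addon-manager-amd64:v9.0.2"
	}
}

func main() {
	fmt.Println(componentImageRef("registry.k8s.io/", "upstream"))
	fmt.Println(componentImageRef("mcr.microsoft.com/oss/kubernetes/", "mcr"))
}
```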

-The above solution would support a per-environment migration from the current, known-working k8s.gcr.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.
+The above solution would support a per-environment migration from the current, known-working registry.k8s.io mirrors (including the origin) to the newly created MCR mirror specification (including unlocking the creation of new MCR mirrors, e.g., in China Cloud, usgov cloud, etc). This refactor phase we’ll call **Enable MCR as an Additive Kubernetes Container Image Registry Mirror**.

# A Proposed Implementation

8 changes: 4 additions & 4 deletions docs/topics/azure-api-throttling.md
@@ -110,7 +110,7 @@ So, assuming we've waited 30 minutes or so, let's update the controller-manager

```
azureuser@k8s-master-31453872-0:~$ grep 1.15.7 /opt/azure/kube-controller-manager.yaml
-image: k8s.gcr.io/hyperkube-amd64:v1.15.7
+image: registry.k8s.io/hyperkube-amd64:v1.15.7
```

Let's update the spec on all control plane VMs:
@@ -124,7 +124,7 @@ Authorized uses only. All activity may be monitored and reported.
Authorized uses only. All activity may be monitored and reported.
azureuser@k8s-master-31453872-0:~$ grep 1.15.12 /opt/azure/kube-controller-manager.yaml
-image: k8s.gcr.io/hyperkube-amd64:v1.15.12
+image: registry.k8s.io/hyperkube-amd64:v1.15.12
```

(Again, if you're using `cloud-controller-manager`, substitute the correct `cloud-controller-manager.yaml` file name.)
@@ -135,7 +135,7 @@ Now, if we're running the `cluster-autoscaler` addon on this cluster let's make

```
azureuser@k8s-master-31453872-0:~$ grep 'cluster-autoscaler:v' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml
-- image: k8s.gcr.io/cluster-autoscaler:v1.15.3
+- image: registry.k8s.io/cluster-autoscaler:v1.15.3
azureuser@k8s-master-31453872-0:~$ for control_plane_vm in $(kubectl get nodes | grep k8s-master | awk '{print $1}'); do ssh $control_plane_vm "sudo sed -i 's|v1.15.3|v1.15.6|g' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml"; done
Authorized uses only. All activity may be monitored and reported.
@@ -144,7 +144,7 @@ Authorized uses only. All activity may be monitored and reported.
Authorized uses only. All activity may be monitored and reported.
azureuser@k8s-master-31453872-0:~$ grep 'cluster-autoscaler:v' /etc/kubernetes/addons/cluster-autoscaler-deployment.yaml
-- image: k8s.gcr.io/cluster-autoscaler:v1.15.6
+- image: registry.k8s.io/cluster-autoscaler:v1.15.6
```

The above validated that we *weren't* using the latest `cluster-autoscaler`, and so we changed the addon spec on each control plane VM in the `/etc/kubernetes/addons/` directory so that we would load 1.15.6 instead.
2 changes: 1 addition & 1 deletion docs/topics/calico-3.3.1-cleanup-after-upgrade.yaml
@@ -337,7 +337,7 @@ spec:
supplementalGroups: [ 65534 ]
fsGroup: 65534
containers:
-- image: k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
+- image: registry.k8s.io/cluster-proportional-autoscaler-amd64:1.1.2-r2
name: autoscaler
command:
- /cluster-proportional-autoscaler
2 changes: 1 addition & 1 deletion docs/topics/clusterdefinitions.md
@@ -69,7 +69,7 @@ $ aks-engine get-versions
| gcLowThreshold | no | Sets the --image-gc-low-threshold value on the kubelet configuration. Default is 80. [See kubelet Garbage Collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/) |
| kubeletConfig | no | Configure various runtime configuration for kubelet. See `kubeletConfig` [below](#feat-kubelet-config) |
| kubeReservedCgroup | no | The name of a systemd slice to create for containment of both kubelet and the container runtime. When this value is a non-empty string, a file will be dropped at `/etc/systemd/system/$KUBE_RESERVED_CGROUP.slice` creating a systemd slice. Both kubelet and docker will run in this slice. This should not point to an existing systemd slice. If this value is unspecified or specified as the empty string, kubelet and the container runtime will run in the system slice by default. |
-| kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `k8s.gcr.io/` |
+| kubernetesImageBase | no | Specifies the default image base URL (everything preceding the actual image filename) to be used for all kubernetes-related containers such as hyperkube, cloud-controller-manager, kube-addon-manager, etc. e.g., `registry.k8s.io/` |
| loadBalancerSku | no | Sku of Load Balancer and Public IP. Candidate values are: `basic` and `standard`. If not set, it will be default to "standard". NOTE: Because VMs behind standard SKU load balancer will not be able to access the internet without an outbound rule configured with at least one frontend IP, AKS Engine creates a Load Balancer with an outbound rule and with agent nodes added to the backend pool during cluster creation, as described in the [Outbound NAT for internal Standard Load Balancer scenarios doc](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-rules-overview#outbound-nat-for-internal-standard-load-balancer-scenarios) |
| loadBalancerOutboundIPs | no | Number of outbound IP addresses (e.g., 3) to use in Standard LoadBalancer configuration. If not set, AKS Engine will configure a single outbound IP address. You may want more than one outbound IP address if you are running a large cluster that is processing lots of connections. See [here](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-outbound-connections#multifesnat) for more documentation about how adding more outbound IP addresses can increase the number of SNAT ports available for use by the Standard Load Balancer in your cluster. Note: this value is only configurable at cluster creation time, it can not be changed using `aks-engine upgrade`.|
| networkPlugin | no | Specifies the network plugin implementation for the cluster. Valid values are:<br>`"azure"` (default), which provides an Azure native networking experience <br>`"kubenet"` for k8s software networking implementation. <br> `"cilium"` for using the default Cilium CNI IPAM (requires the `"cilium"` networkPolicy as well)<br> `"antrea"` for using the Antrea network plugin (requires the `"antrea"` networkPolicy as well) |
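
For context, `kubernetesImageBase` lives under `kubernetesConfig` in the apimodel; a minimal fragment might look like the following (illustrative only, with unrelated required fields omitted):

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "kubernetesImageBase": "registry.k8s.io/"
      }
    }
  }
}
```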
2 changes: 1 addition & 1 deletion examples/addons/node-problem-detector/README.md
@@ -69,7 +69,7 @@ To test node-problem-detector in a running cluster, you can inject messages into
| Name | Required | Description | Default Value |
| -------------- | -------- | --------------------------------- | ----------------------------------------- |
| name | no | container name | "node-problem-detector" |
-| image          | no       | image                             | "k8s.gcr.io/node-problem-detector:v0.8.1" |
+| image          | no       | image                             | "registry.k8s.io/node-problem-detector:v0.8.1" |
| cpuRequests | no | cpu requests for the container | "20m" |
| memoryRequests | no | memory requests for the container | "20Mi" |
| cpuLimits | no | cpu limits for the container | "200m" |
@@ -194,7 +194,7 @@ kubeStateMetrics:
## kube-state-metrics container image
##
image:
-repository: k8s.gcr.io/kube-state-metrics
+repository: registry.k8s.io/kube-state-metrics
tag: v1.2.0
pullPolicy: IfNotPresent

4 changes: 2 additions & 2 deletions pkg/api/azenvtypes.go
@@ -14,7 +14,7 @@ type AzureEnvironmentSpecConfig struct {
// KubernetesSpecConfig is the kubernetes container images used.
type KubernetesSpecConfig struct {
AzureTelemetryPID string `json:"azureTelemetryPID,omitempty"`
-// KubernetesImageBase defines a base image URL substring to source images that originate from upstream k8s.gcr.io
+// KubernetesImageBase defines a base image URL substring to source images that originate from upstream registry.k8s.io
KubernetesImageBase string `json:"kubernetesImageBase,omitempty"`
TillerImageBase string `json:"tillerImageBase,omitempty"`
ACIConnectorImageBase string `json:"aciConnectorImageBase,omitempty"` // Deprecated
@@ -66,7 +66,7 @@ const (
var (
// DefaultKubernetesSpecConfig is the default Docker image source of Kubernetes
DefaultKubernetesSpecConfig = KubernetesSpecConfig{
-KubernetesImageBase: "k8s.gcr.io/",
+KubernetesImageBase: "registry.k8s.io/",
TillerImageBase: "mcr.microsoft.com/",
NVIDIAImageBase: "mcr.microsoft.com/",
CalicoImageBase: "mcr.microsoft.com/oss/calico/",
2 changes: 1 addition & 1 deletion pkg/api/defaults_test.go
@@ -1198,7 +1198,7 @@ func TestKubernetesImageBase(t *testing.T) {
mockCS.Location = "westus2"
cloudSpecConfig = mockCS.GetCloudSpecConfig()
properties = mockCS.Properties
-properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "k8s.gcr.io/"
+properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase = "registry.k8s.io/"
properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBaseType = ""
mockCS.setOrchestratorDefaults(true, false)
if properties.OrchestratorProfile.KubernetesConfig.KubernetesImageBase != cloudSpecConfig.KubernetesSpecConfig.MCRKubernetesImageBase {
