Commit: Control Plane Terminology - Add transition text

gjtempleton committed Jan 19, 2021
1 parent d872ec3 commit 4fbe142

Showing 7 changed files with 20 additions and 13 deletions.
7 changes: 4 additions & 3 deletions cluster-autoscaler/FAQ.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Frequently Asked Questions

# Older versions
@@ -775,7 +776,7 @@ If both the cluster and CA appear healthy:

* If you expect some nodes to be added to make space for pending pods, but they are not added for a long time, check [I have a couple of pending pods, but there was no scale-up?](#i-have-a-couple-of-pending-pods-but-there-was-no-scale-up) section.

-* If you have access to the control plane machine, check Cluster Autoscaler logs in `/var/log/cluster-autoscaler.log`. Cluster Autoscaler logs a lot of useful information, including why it considers a pod unremovable or what its scale-up plan was.
+* If you have access to the control plane (previously referred to as master) machine, check Cluster Autoscaler logs in `/var/log/cluster-autoscaler.log`. Cluster Autoscaler logs a lot of useful information, including why it considers a pod unremovable or what its scale-up plan was.

* Check events added by CA to the pod object.
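A quick sketch of both checks (the pod name and namespace are placeholders):

```bash
# On the control plane node; sudo is typically required to read the log:
sudo tail -n 200 /var/log/cluster-autoscaler.log

# From any machine with kubectl access, inspect the events CA added to the pod;
# they appear in the Events section at the end of the output:
kubectl describe pod <pending-pod-name> -n <namespace>
```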

@@ -787,7 +788,7 @@ If both the cluster and CA appear healthy:

There are three options:

-* Logs on the control plane nodes, in `/var/log/cluster-autoscaler.log`.
+* Logs on the control plane (previously referred to as master) nodes, in `/var/log/cluster-autoscaler.log`.
* Cluster Autoscaler 0.5 and later publishes the kube-system/cluster-autoscaler-status config map.
  To see it, run `kubectl get configmap cluster-autoscaler-status -n kube-system -o yaml`.
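As a copy-paste block (`kubectl describe` is an equivalent, more readable view of the same data):

```bash
kubectl get configmap cluster-autoscaler-status -n kube-system -o yaml
# or:
kubectl describe configmap cluster-autoscaler-status -n kube-system
```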
@@ -862,7 +863,7 @@ Depending on how long scale-ups have been failing, it may wait up to 30 minutes
```
This is the minimum number of nodes required for all e2e tests to pass. The tests should also pass if you set a higher maximum nodes limit.
3. Run `go run hack/e2e.go -- --verbose-commands --up` to bring up your cluster.
-4. SSH to the control plane node and edit `/etc/kubernetes/manifests/cluster-autoscaler.manifest` (you will need sudo for this).
+4. SSH to the control plane (previously referred to as master) node and edit `/etc/kubernetes/manifests/cluster-autoscaler.manifest` (you will need sudo for this).
* If you want to test your custom changes, set `image` to point at your own CA image.
* Make sure the `--scale-down-enabled` parameter in `command` is set to `true`.
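A minimal sketch of the fragment step 4 edits (field values are illustrative and the rest of the static pod spec is elided; check the manifest on your node for the exact layout):

```yaml
# /etc/kubernetes/manifests/cluster-autoscaler.manifest (fragment)
spec:
  containers:
  - name: cluster-autoscaler
    image: gcr.io/<your-project>/cluster-autoscaler:dev  # your custom CA image
    command:
    - ./cluster-autoscaler
    - --scale-down-enabled=true  # must be true for the scale-down e2e tests
```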
5. Run CA tests with:
5 changes: 3 additions & 2 deletions cluster-autoscaler/README.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Cluster Autoscaler

# Introduction
@@ -24,7 +25,7 @@ You should also take a look at the notes and "gotchas" for your specific cloud p

# Releases

-We recommend using Cluster Autoscaler with the Kubernetes control plane version for which it was meant. The combinations below have been tested on GCP. We don't do cross-version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters; however, there is always a chance that it won't work as expected.
+We recommend using Cluster Autoscaler with the Kubernetes control plane (previously referred to as master) version for which it was meant. The combinations below have been tested on GCP. We don't do cross-version testing or compatibility testing in other environments. Some user reports indicate successful use of a newer version of Cluster Autoscaler with older clusters; however, there is always a chance that it won't work as expected.

Starting from Kubernetes 1.12, the versioning scheme was changed to match Kubernetes minor releases exactly.
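For example, a 1.20.x control plane pairs with a 1.20.x Cluster Autoscaler; a hedged sketch of the container image line (registry path and tag are illustrative):

```yaml
# In the cluster-autoscaler container spec:
image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.20.0  # match your control plane's minor version
```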

@@ -131,7 +132,7 @@ CA Version 0.3:

# Deployment

-Cluster Autoscaler is designed to run on the Kubernetes control plane node. This is the
+Cluster Autoscaler is designed to run on the Kubernetes control plane (previously referred to as master) node. This is the
default deployment strategy on GCP.
It is possible to run a customized deployment of Cluster Autoscaler on worker nodes, but extra care needs
to be taken to ensure that Cluster Autoscaler remains up and running. Users can put it into kube-system
3 changes: 2 additions & 1 deletion cluster-autoscaler/cloudprovider/aws/README.md
@@ -208,7 +208,8 @@ kubectl apply -f examples/cluster-autoscaler-one-asg.yaml
kubectl apply -f examples/cluster-autoscaler-multi-asg.yaml
```
-## Control Plane Node Setup
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
+## Control Plane (previously referred to as master) Node Setup
**NOTE**: This setup is not compatible with Amazon EKS.
3 changes: 2 additions & 1 deletion cluster-autoscaler/cloudprovider/azure/README.md
@@ -133,7 +133,8 @@ Save the updated deployment manifest, then deploy cluster-autoscaler by running:
kubectl create -f cluster-autoscaler-vmss.yaml
```

-To run a cluster autoscaler pod on a control plane node, the deployment should tolerate the `master` taint, and `nodeSelector` should be used to schedule pods. Use [cluster-autoscaler-vmss-control-plane.yaml](examples/cluster-autoscaler-vmss-control-plane.yaml) in this case.
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
+To run a cluster autoscaler pod on a control plane (previously referred to as master) node, the deployment should tolerate the `master` taint, and `nodeSelector` should be used to schedule pods. Use [cluster-autoscaler-vmss-control-plane.yaml](examples/cluster-autoscaler-vmss-control-plane.yaml) in this case.
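A hedged sketch of the scheduling stanza such a deployment needs (the label and taint keys assume the legacy `master` naming these examples still use; compare with the linked example file):

```yaml
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/role: master           # pin the pod to a control plane node
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule                   # tolerate the `master` taint
```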

To run a cluster autoscaler pod with Azure managed service identity (MSI), use [cluster-autoscaler-vmss-msi.yaml](examples/cluster-autoscaler-vmss-msi.yaml) instead.

3 changes: 2 additions & 1 deletion cluster-autoscaler/cloudprovider/huaweicloud/README.md
@@ -165,7 +165,8 @@ openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin
chkconfig --add /etc/rc.d/init.d/init-k8s.sh
chkconfig /etc/rc.d/init.d/init-k8s.sh on
```
-- Copy `~/.kube/config` from a control plane node to this ECS `~/.kube/config` to set up kubectl on this instance.
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
+- Copy `~/.kube/config` from a control plane (previously referred to as master) node to this ECS `~/.kube/config` to set up kubectl on this instance.
- Go to Huawei Cloud `Image Management` Service and click on `Create Image`. Select type `System disk image`, select your ECS instance as `Source`, give it a name, and create it.
7 changes: 4 additions & 3 deletions cluster-autoscaler/cloudprovider/magnum/README.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Cluster Autoscaler for OpenStack Magnum
The cluster autoscaler for Magnum scales worker nodes within any
specified nodegroup. It will run as a `Deployment` in your cluster.
@@ -31,7 +32,7 @@ An example `ServiceAccount` is given in [examples/cluster-autoscaler-svcaccount.

The credentials for authenticating with OpenStack are stored in a secret and
mounted as a file inside the container. [examples/cluster-autoscaler-secret](examples/cluster-autoscaler-secret.yaml)
-can be modified with the contents of your cloud-config. This file can be obtained from your control plane node,
+can be modified with the contents of your cloud-config. This file can be obtained from your control plane (previously referred to as master) node,
in `/etc/kubernetes` (may be named `kube_openstack_config` instead of `cloud-config`).
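An alternative imperative sketch, assuming you copy the file off a control plane node first (the host and secret name are placeholders; the secret name must match what the deployment mounts):

```bash
scp root@<control-plane-ip>:/etc/kubernetes/cloud-config ./cloud-config
kubectl -n kube-system create secret generic cluster-autoscaler-cloud-config \
  --from-file=cloud-config=./cloud-config
```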

## Autoscaler deployment
@@ -65,7 +66,7 @@ autoscalingGroups:
cloudConfigPath: "/etc/kubernetes/cloud-config"
```
-For running on the control plane node and other suggested settings, see
+For running on the control plane (previously referred to as master) node and other suggested settings, see
[examples/values-example.yaml](examples/values-example.yaml).
To deploy with node group autodiscovery (for cluster autoscaler v1.19+), see
[examples/values-autodiscovery.yaml](examples/values-autodiscovery.yaml).
@@ -119,7 +120,7 @@ If you are deploying the autoscaler into a cluster which already has more than o
it is best to deploy it onto any node which already has non-default kube-system pods,
to minimise the number of nodes which cannot be removed when scaling.
-Or, if you are using a Magnum version which supports scheduling on the control plane node, then
+Or, if you are using a Magnum version which supports scheduling on the control plane (previously referred to as master) node, then
the example deployment file
[examples/cluster-autoscaler-deployment-master.yaml](examples/cluster-autoscaler-deployment-control-plane.yaml)
can be used.
5 changes: 3 additions & 2 deletions cluster-autoscaler/cloudprovider/packet/README.md
@@ -1,3 +1,4 @@
+<!--TODO: Remove "previously referred to as master" references from this doc once this terminology is fully removed from k8s-->
# Cluster Autoscaler for Packet

The cluster autoscaler for [Packet](https://packet.com) worker nodes performs
@@ -86,7 +87,7 @@ If you are deploying the autoscaler into a cluster which already has more than o
it is best to deploy it onto any node which already has non-default kube-system pods,
to minimise the number of nodes which cannot be removed when scaling. For this reason, in
the provided example, the autoscaler pod has a node affinity which forces it to deploy on
-the control plane node.
+the control plane (previously referred to as master) node.
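A hedged sketch of such a node affinity (the label key follows the legacy `master` role label; adjust it to whatever your control plane nodes actually carry):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
```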

### Changes

@@ -98,4 +99,4 @@ the control plane node.

4. The cloud-init scripts in the examples pin Kubernetes versions in order to minimize potential incompatibilities between nodes provisioned with different Kubernetes versions.

-5. In the provided cluster-autoscaler deployment example, the autoscaler pod has a node affinity which forces it to deploy on the control plane node, so that the cluster-autoscaler can scale down all of the worker nodes. Without this change, there was a possibility for the cluster-autoscaler to be deployed on a worker node that could not be downscaled.
+5. In the provided cluster-autoscaler deployment example, the autoscaler pod has a node affinity which forces it to deploy on the control plane (previously referred to as master) node, so that the cluster-autoscaler can scale down all of the worker nodes. Without this change, there was a possibility for the cluster-autoscaler to be deployed on a worker node that could not be downscaled.
