10 changes: 5 additions & 5 deletions README.md
@@ -67,7 +67,7 @@ To generate your cluster yaml:
* `SSH_KEY` - The path to an ssh public key to place on all of the machines. If not set, it will use whichever ssh keys are defined for your project.
* `POD_CIDR` - The CIDR to use for your pods; if not set, see defaults below
* `SERVICE_CIDR` - The CIDR to use for your services; if not set, see defaults below
-* `MASTER_NODE_TYPE` - The Packet node type to use for control plane nodes; if not set, see defaults below
+* `CONTROLPLANE_NODE_TYPE` - The Packet node type to use for control plane nodes; if not set, see defaults below
* `WORKER_NODE_TYPE` - The Packet node type to use for worker nodes; if not set, see defaults below
* `WORKER_MACHINE_COUNT` - The number of worker machines to deploy; if not set, cluster-api itself (not the Packet implementation) defaults to 0 workers.
1. Run the cluster generation command:
@@ -100,9 +100,9 @@ If you do not change the generated `yaml` files, it will use defaults. You can l
* pod CIDR: `172.26.0.0/16`
* service domain: `cluster.local`
* cluster name: `test1-<random>`, where random is a random 5-character string containing the characters `a-z0-9`
-* master node type: `t1.small`
+* control plane node type: `t1.small`
* worker node type: `t1.small`
-* worker and master OS type: `ubuntu_18_04`
+* worker and control plane OS type: `ubuntu_18_04`

#### Apply Your Cluster

@@ -128,7 +128,7 @@ The actual machines are deployed using `kubeadm`. The deployment process uses th

1. When a new `Cluster` is created:
* if the appropriate `Secret` does not include a CA key/certificate pair, create one and save it in that `Secret`
-2. When a new master `Machine` is created:
+2. When a new control plane `Machine` is created:
* retrieve the CA certificate and key from the appropriate Kubernetes `Secret`
* launch a new server instance on Packet
* set the `cloud-init` on the instance to run `kubeadm init`, passing it the CA certificate and key
@@ -156,7 +156,7 @@ When trying to install a new machine, the logic is as follows:
Important notes:

* There can be multiple `machineParams` entries for each `userdata`, enabling one userdata script to be used for more than one combination of OS and Kubernetes versions.
-* There are versions both for `controlPlane` and `kubelet`. `master` servers will match both `controlPlane` and `kubelet`; worker nodes will have no `controlPlane` entry.
+* There are versions both for `controlPlane` and `kubelet`. `control plane` servers will match both `controlPlane` and `kubelet`; worker nodes will have no `controlPlane` entry.
* The `containerRuntime` is installed as is. The value of `containerRuntime` will be passed to the userdata script as `${CR_PACKAGE}`, to be installed as desired.
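
To make the matching rules above concrete, here is a minimal Go sketch. The `MachineParams` and `Entry` types and the `Select` helper are hypothetical illustrations, not the provider's actual configuration format or selection code:

```go
package userdata

import "fmt"

// MachineParams is a hypothetical shape for one machineParams entry: the OS
// plus the component versions it applies to. Entries meant for workers carry
// no controlPlane version.
type MachineParams struct {
	OS           string
	ControlPlane string // empty for worker-only entries
	Kubelet      string
}

// Entry pairs one userdata script with every machineParams combination it
// supports, matching the "multiple machineParams entries per userdata" note.
type Entry struct {
	Script        string
	MachineParams []MachineParams
}

// Select returns the first userdata script whose machineParams match the
// requested machine. Control plane machines must match both the controlPlane
// and kubelet versions; workers only need the OS and kubelet version to match.
func Select(entries []Entry, os, controlPlane, kubelet string, isControlPlane bool) (string, error) {
	for _, e := range entries {
		for _, p := range e.MachineParams {
			if p.OS != os || p.Kubelet != kubelet {
				continue
			}
			if isControlPlane && p.ControlPlane != controlPlane {
				continue
			}
			return e.Script, nil
		}
	}
	return "", fmt.Errorf("no userdata for os=%q controlPlane=%q kubelet=%q", os, controlPlane, kubelet)
}
```

The `containerRuntime` handling is omitted here; as noted above, its value is simply handed to the userdata script as `${CR_PACKAGE}`.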

## References
4 changes: 2 additions & 2 deletions api/v1alpha3/tags.go
@@ -17,6 +17,6 @@ limitations under the License.
package v1alpha3

const (
-	MasterTag = "kubernetes.io/role:master"
-	WorkerTag = "kubernetes.io/role:node"
+	ControlPlaneTag = "kubernetes.io/role:master"
+	WorkerTag       = "kubernetes.io/role:node"
)
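
These constants are the tag strings the provider attaches to each Packet device (see the `client.go` hunk further down); only the Go identifier changes here, the tag values themselves are untouched. A small illustrative sketch, with a hypothetical `RoleFromTags` helper that reads the role back off a device's tags:

```go
package tags

// Local copies of the renamed constants. Only the Go identifier changes in
// this PR; the tag string written to Packet devices is still
// "kubernetes.io/role:master", so devices tagged by earlier releases keep
// matching.
const (
	ControlPlaneTag = "kubernetes.io/role:master"
	WorkerTag       = "kubernetes.io/role:node"
)

// RoleFromTags is an illustrative helper (not part of the provider): given the
// tags on a Packet device, it reports the role assigned when the device was
// created, or "" if no role tag is present.
func RoleFromTags(deviceTags []string) string {
	for _, t := range deviceTags {
		switch t {
		case ControlPlaneTag:
			return "control-plane"
		case WorkerTag:
			return "worker"
		}
	}
	return ""
}
```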
4 changes: 2 additions & 2 deletions docs/concepts/machine.md
@@ -7,7 +7,7 @@ This is an example of it:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: PacketMachine
metadata:
-  name: "qa-master-0"
+  name: "qa-controlplane-0"
spec:
  OS: "ubuntu_18_04"
  billingCycle: hourly
@@ -42,7 +42,7 @@ You can specify the reservation ID using the field `hardwareReservationID`:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: PacketMachine
metadata:
-  name: "qa-master-0"
+  name: "qa-controlplane-0"
spec:
  OS: "ubuntu_18_04"
  facility:
2 changes: 1 addition & 1 deletion pkg/cloud/packet/client.go
@@ -88,7 +88,7 @@ func (p *PacketClient) NewDevice(machineScope *scope.MachineScope, extraTags []s
			return nil, fmt.Errorf("error executing control-plane userdata template: %v", err)
		}
		userData = stringWriter.String()
-		tags = append(tags, infrastructurev1alpha3.MasterTag)
+		tags = append(tags, infrastructurev1alpha3.ControlPlaneTag)
	} else {
		tags = append(tags, infrastructurev1alpha3.WorkerTag)
	}
2 changes: 1 addition & 1 deletion pkg/cloud/packet/scope/machine.go
@@ -122,7 +122,7 @@ func (m *MachineScope) IsControlPlane() bool {
// Role returns the machine role from the labels.
func (m *MachineScope) Role() string {
	if util.IsControlPlaneMachine(m.Machine) {
-		return infrav1.MasterTag
+		return infrav1.ControlPlaneTag
	}
	return infrav1.WorkerTag
}
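
For callers the rename is transparent: `Role()` still returns the same strings, and a consumer such as `NewDevice` (see the `client.go` hunk above) simply appends that value to the device's tags. An illustrative sketch, using a hypothetical `roleTagger` interface in place of `*MachineScope`:

```go
package example

// roleTagger is a hypothetical stand-in for *scope.MachineScope; only the
// Role() method shown above matters here.
type roleTagger interface {
	Role() string
}

// deviceTags illustrates how a caller such as NewDevice assembles the final
// tag list for a new device: the caller's extra tags plus the role tag
// returned by the machine scope.
func deviceTags(m roleTagger, extraTags []string) []string {
	tags := append([]string{}, extraTags...) // copy so the caller's slice is not mutated
	return append(tags, m.Role())
}
```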
6 changes: 3 additions & 3 deletions scripts/generate-cluster.sh
@@ -30,7 +30,7 @@ TEMPLATE_OUT=./out/cluster.yaml
DEFAULT_KUBERNETES_VERSION="v1.18.2"
DEFAULT_POD_CIDR="172.25.0.0/16"
DEFAULT_SERVICE_CIDR="172.26.0.0/16"
-DEFAULT_MASTER_NODE_TYPE="t1.small"
+DEFAULT_CONTROLPLANE_NODE_TYPE="t1.small"
DEFAULT_WORKER_NODE_TYPE="t1.small"
DEFAULT_NODE_OS="ubuntu_18_04"
DEFAULT_WORKER_MACHINE_COUNT=3
@@ -62,7 +62,7 @@ CLUSTER_NAME=${CLUSTER_NAME:-${DEFAULT_CLUSTER_NAME}}
POD_CIDR=${POD_CIDR:-${DEFAULT_POD_CIDR}}
SERVICE_CIDR=${SERVICE_CIDR:-${DEFAULT_SERVICE_CIDR}}
WORKER_NODE_TYPE=${WORKER_NODE_TYPE:-${DEFAULT_WORKER_NODE_TYPE}}
-MASTER_NODE_TYPE=${MASTER_NODE_TYPE:-${DEFAULT_MASTER_NODE_TYPE}}
+CONTROLPLANE_NODE_TYPE=${CONTROLPLANE_NODE_TYPE:-${DEFAULT_CONTROLPLANE_NODE_TYPE}}
WORKER_MACHINE_COUNT=${WORKER_MACHINE_COUNT:-${DEFAULT_WORKER_MACHINE_COUNT}}
CONTROL_PLANE_MACHINE_COUNT=${CONTROL_PLANE_MACHINE_COUNT:-${DEFAULT_CONTROL_PLANE_MACHINE_COUNT}}
NODE_OS=${NODE_OS:-${DEFAULT_NODE_OS}}
@@ -73,7 +73,7 @@ PROJECT_ID=${PACKET_PROJECT_ID}
FACILITY=${PACKET_FACILITY}

# and now export them all so envsubst can use them
-export PROJECT_ID FACILITY NODE_OS WORKER_NODE_TYPE MASTER_NODE_TYPE POD_CIDR SERVICE_CIDR SSH_KEY KUBERNETES_VERSION WORKER_MACHINE_COUNT CONTROL_PLANE_MACHINE_COUNT
+export PROJECT_ID FACILITY NODE_OS WORKER_NODE_TYPE CONTROLPLANE_NODE_TYPE POD_CIDR SERVICE_CIDR SSH_KEY KUBERNETES_VERSION WORKER_MACHINE_COUNT CONTROL_PLANE_MACHINE_COUNT
${CLUSTERCTL} ${CONFIG_OPT} config cluster ${CLUSTER_NAME} --from file://$PWD/templates/cluster-template.yaml > $TEMPLATE_OUT

echo "Done! See output file at ${TEMPLATE_OUT}. Run:"