Commit 4fc145c

Place distribution of volumes across nodes + bump replicas to 4
1 parent 8eefd47 commit 4fc145c

2 files changed: +11, -11 lines


SETUP.md

Lines changed: 4 additions & 4 deletions

@@ -176,7 +176,7 @@ scp <your ssh username>@<your control node pi name>.local:~/.ssh/anbsible_id_ed2
 scp <your ssh username>@<your control node pi name>.local:~/.ssh/anbsible_id_ed25519 ~/ansible_id_ed25519

 # Copy from your computer to Pis
-ssh-copy-id -i ~/ansible_id_ed25519.pub -f nathanthomas@node2
+ssh-copy-id -i ~/ansible_id_ed25519.pub -f <your username>@node2
 ```

 Next, we're going to use a tool called Ansible to set up remote control over all our nodes. It will effectively allow us to issue install commands or customize all our nodes at once via single commands.
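
Once the key is on every Pi, that "single commands" idea looks roughly like the ad-hoc calls below. This is a sketch, not part of the original guide; it assumes the `cube` and `workers` inventory groups that later sections of this SETUP.md use, and `htop` is just an arbitrary example package.

```bash
# Confirm Ansible can reach every node over the freshly copied key
ansible cube -m ping

# Fan a single privileged command out to all worker nodes at once
ansible workers -b -m apt -a "name=htop state=present"
```
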
@@ -263,7 +263,7 @@ ansible cube -m apt -a "name=iptables state=present" --become
 # Reboot
 ansible workers -b -m shell -a "reboot"

-# Manually install on each node
+# Alternately, manually install on each node
 apt -y install iptables
 ```

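Once the nodes come back from the reboot (or after the manual install), a quick sanity check that iptables actually landed everywhere could look like this; it is a sketch reusing the same ad-hoc pattern as above:

```bash
# Any host still missing the package will fail this command
ansible workers -b -m shell -a "iptables --version"
```
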
@@ -504,10 +504,10 @@ Then, run these commands (but triple check you set the right drives above before

 ```bash
 # Wipe
-ansible workers -b -m shell -a "wipefs -a /dev/{{ var_disk }}"
+ansible new -b -m shell -a "wipefs -a /dev/{{ var_disk }}"

 # Format to ext4
-ansible workers -b -m filesystem -a "fstype=ext4 dev=/dev/{{ var_disk }}"
+ansible new -b -m filesystem -a "fstype=ext4 dev=/dev/{{ var_disk }}"
 ```

 Afterwards, get all drives and their available sizes with this command:
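
The changed lines above retarget the destructive wipe and format from the `workers` group to a `new` group, so only freshly added nodes get touched. A minimal sanity check, not part of the guide itself, is to ask Ansible which hosts that pattern actually matches before wiping anything:

```bash
# Dry check: list the hosts in the "new" group without running a module against them
ansible new --list-hosts
```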

helm/longhorn/charts/longhorn/values.yaml

Lines changed: 7 additions & 7 deletions

@@ -158,9 +158,9 @@ persistence:
   # -- mkfs parameters of the default Longhorn StorageClass.
   defaultMkfsParams: ""
   # -- Replica count of the default Longhorn StorageClass.
-  defaultClassReplicaCount: 3
+  defaultClassReplicaCount: 4
   # -- Data locality of the default Longhorn StorageClass. (Options: "disabled", "best-effort")
-  defaultDataLocality: disabled
+  defaultDataLocality: best-effort
   # -- Reclaim policy that provides instructions for handling of a volume after its claim is released. (Options: "Retain", "Delete")
   reclaimPolicy: Delete
   # -- VolumeBindingMode controls when volume binding and dynamic provisioning should occur. (Options: "Immediate", "WaitForFirstConsumer") (Defaults to "Immediate")
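
Bumping `defaultClassReplicaCount` to 4 and switching `defaultDataLocality` to `best-effort` only shows up once the chart is re-deployed. Assuming the default StorageClass keeps its usual name of `longhorn` (an assumption, not shown in this commit), the result can be read back like this:

```bash
# Inspect the parameters Longhorn stamped onto its default StorageClass
kubectl get storageclass longhorn -o yaml | grep -E "numberOfReplicas|dataLocality"
```
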
@@ -232,9 +232,9 @@ defaultSettings:
   # -- Default data locality. A Longhorn volume has data locality if a local replica of the volume exists on the same node as the pod that is using the volume.
   defaultDataLocality: ~
   # -- Setting that allows scheduling on nodes with healthy replicas of the same volume. This setting is disabled by default.
-  replicaSoftAntiAffinity: ~
+  replicaSoftAntiAffinity: enabled
   # -- Setting that automatically rebalances replicas when an available node is discovered.
-  replicaAutoBalance: ~
+  replicaAutoBalance: best-effort
   # -- Percentage of storage that can be allocated relative to hard drive capacity. The default value is "100".
   storageOverProvisioningPercentage: ~
   # -- Percentage of minimum available disk capacity. When the minimum available capacity exceeds the total available capacity, the disk becomes unschedulable until more space is made available for use. The default value is "25".
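
The `replicaSoftAntiAffinity` and `replicaAutoBalance` values feed Longhorn's runtime settings, which are easiest to confirm from the cluster itself. A hedged sketch, assuming Longhorn runs in the conventional `longhorn-system` namespace:

```bash
# Read back the live Longhorn settings these two values map to
kubectl -n longhorn-system get settings.longhorn.io \
  replica-soft-anti-affinity replica-auto-balance \
  -o custom-columns=NAME:.metadata.name,VALUE:.value
```
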
@@ -246,7 +246,7 @@ defaultSettings:
   # -- The Upgrade Responder sends a notification whenever a new Longhorn version that you can upgrade to becomes available. The default value is https://longhorn-upgrade-responder.rancher.io/v1/checkupgrade.
   upgradeResponderURL: ~
   # -- Default number of replicas for volumes created using the Longhorn UI. For Kubernetes configuration, modify the `numberOfReplicas` field in the StorageClass. The default value is "3".
-  defaultReplicaCount: ~
+  defaultReplicaCount: 4
   # -- Default name of Longhorn static StorageClass. "storageClassName" is assigned to PVs and PVCs that are created for an existing Longhorn volume. "storageClassName" can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. "storageClassName" needs to be an existing StorageClass. The default value is "longhorn-static".
   defaultLonghornStaticStorageClass: ~
   # -- Number of minutes that Longhorn keeps a failed backup resource. When the value is "0", automatic deletion is disabled.
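
As the comment above notes, `defaultReplicaCount` only covers volumes created through the Longhorn UI; Kubernetes-provisioned volumes take their replica count from the StorageClass. A workload that should not pay for four copies can still pin its own count with a separate class. The example below is hypothetical (the name and parameter values are illustrative, not from this commit):

```bash
# Hypothetical extra StorageClass that keeps 2 replicas instead of the new default of 4
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2-replica
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"
  dataLocality: "best-effort"
EOF
```
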
@@ -279,9 +279,9 @@ defaultSettings:
   # -- Setting that prevents Longhorn Manager from scheduling replicas on a cordoned Kubernetes node. This setting is enabled by default.
   disableSchedulingOnCordonedNode: ~
   # -- Setting that allows Longhorn to schedule new replicas of a volume to nodes in the same zone as existing healthy replicas. Nodes that do not belong to any zone are treated as existing in the zone that contains healthy replicas. When identifying zones, Longhorn relies on the label "topology.kubernetes.io/zone=<Zone name of the node>" in the Kubernetes node object.
-  replicaZoneSoftAntiAffinity: ~
+  replicaZoneSoftAntiAffinity: enabled
   # -- Setting that allows scheduling on disks with existing healthy replicas of the same volume. This setting is enabled by default.
-  replicaDiskSoftAntiAffinity: ~
+  replicaDiskSoftAntiAffinity: enabled
   # -- Policy that defines the action Longhorn takes when a volume is stuck with a StatefulSet or Deployment pod on a node that failed.
   nodeDownPodDeletionPolicy: ~
   # -- Policy that defines the action Longhorn takes when a node with the last healthy replica of a volume is drained.
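
The zone soft anti-affinity change only interacts with whatever zone labels the nodes actually carry: per the comment above, Longhorn keys zone placement off `topology.kubernetes.io/zone`, and nodes without it are the case that comment describes. A quick sketch for seeing what the cluster advertises:

```bash
# Show the zone label (if any) on each node; an empty column means no node carries a zone label
kubectl get nodes -L topology.kubernetes.io/zone
```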
