What happens:
A volume created with disklessOnRemaining set to true gets 2 data (diskful) nodes as expected, but only 1 diskless node, which is not sufficient for quorum in a 10-node cluster.
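For reference, the actual placement can be inspected with the LINSTOR client using something like the command below (the resource name is a placeholder for the PV's backing resource):
linstor resource list --resources <resource-name>
which lists the 2 diskful replicas and a single diskless replica in DfltDisklessStorPool.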
Version:
linstor-csi-plugin quay.io/piraeusdatastore/piraeus-csi:v0.20.0
Storage pools:
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ k8s-master ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-1 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-11 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-12 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-13 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-3 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-4 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-5 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-6 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ k8s-worker-7 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ piraeus-data ┊ k8s-worker-1 ┊ LVM ┊ piraeus-data ┊ 778.84 GiB ┊ 1.10 TiB ┊ False ┊ Ok ┊ ┊
┊ piraeus-data ┊ k8s-worker-11 ┊ LVM ┊ piraeus-data ┊ 2.30 TiB ┊ 3.49 TiB ┊ False ┊ Ok ┊ ┊
┊ piraeus-data ┊ k8s-worker-12 ┊ LVM ┊ piraeus-data ┊ 2.30 TiB ┊ 3.49 TiB ┊ False ┊ Ok ┊ ┊
┊ piraeus-data ┊ k8s-worker-13 ┊ LVM ┊ piraeus-data ┊ 2.49 TiB ┊ 3.49 TiB ┊ False ┊ Ok ┊ ┊
┊ piraeus-data ┊ k8s-worker-3 ┊ LVM ┊ piraeus-data ┊ 778.84 GiB ┊ 1.10 TiB ┊ False ┊ Ok ┊ ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
StorageClass:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: piraeus-piraeus-data
parameters:
  autoPlace: "2"
  csi.storage.k8s.io/fstype: ext4
  disklessOnRemaining: "true"
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
  linstor.csi.linbit.com/placementCount: "2"
  property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
  property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
  storagePool: piraeus-data
provisioner: linstor.csi.linbit.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
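If I understand the CSI parameters correctly (this is an assumption on my side), disklessOnRemaining maps to LINSTOR's --diskless-on-remaining auto-place option, roughly equivalent to:
linstor resource create --auto-place 2 --storage-pool piraeus-data --diskless-on-remaining <resource-name>
so I would expect diskless resources to be created on all remaining satellites, not on a single one.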
This issue is related to piraeusdatastore/piraeus-ha-controller#23: the missing quorum makes the HA Controller think that nodes are in trouble when they are not, and mark them unschedulable in Kubernetes.
The workaround was to manually force the volume's quorum to 2 (the number of data nodes), which is not what one would expect to have to do.
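For completeness, the forcing was done roughly like this (a sketch; the resource name is a placeholder and the property path is my best guess, following the DrbdOptions namespace used in the StorageClass above):
linstor resource-definition set-property <resource-name> DrbdOptions/Resource/quorum 2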