
Cannot place PV with two requisite nodes and replicas on same nodes #195

Open
@kvaps

Description

Hi @WanzenBug, I'm a bit confused about requisite nodes, and about why there can be more than one of them.

Here is my storageclass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-data-r1
parameters:
  linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
  linstor.csi.linbit.com/placementCount: "1"
  linstor.csi.linbit.com/replicasOnSame: Aux/zone
  linstor.csi.linbit.com/storagePool: data
  property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
  property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
  property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
provisioner: linstor.csi.linbit.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
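
For reference, replicasOnSame: Aux/zone means every replica has to land on nodes that share the same value of the zone auxiliary property. A minimal sketch of how such a property would be set and inspected with the LINSTOR client (the zone value "b" is only illustrative; check the exact --aux syntax against your client version):

# linstor node set-property --aux b-hv-3 zone b
# linstor node list-properties b-hv-3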

Then I try to provision a PVC for KubeVirt using CDI with the HonorWaitForFirstConsumer feature enabled. The PVC gets created, but then I see the following error:

# kubectl describe pvc disk-tfprod-frontend-b-0-1-boot
Name:          disk-tfprod-frontend-b-0-1-boot
Namespace:     tfprod
StorageClass:  linstor-data-r2
Status:        Pending
Volume:
Labels:        app=containerized-data-importer
               app.kubernetes.io/component=storage
               app.kubernetes.io/managed-by=cdi-controller
Annotations:   cdi.kubevirt.io/storage.contentType: kubevirt
               cdi.kubevirt.io/storage.deleteAfterCompletion: true
               cdi.kubevirt.io/storage.import.endpoint:
                 docker://registry.deckhouse.io/deckhouse/fe@sha256:d5aba3593a2f441ea24f4dce56706efc43e3ec5a6abbadd753a074accc043779
               cdi.kubevirt.io/storage.import.registryImportMethod: node
               cdi.kubevirt.io/storage.import.source: registry
               cdi.kubevirt.io/storage.pod.restarts: 0
               cdi.kubevirt.io/storage.preallocation.requested: false
               volume.beta.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
               volume.kubernetes.io/selected-node: b-hv-3
               volume.kubernetes.io/storage-provisioner: linstor.csi.linbit.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Block
Used By:       virt-launcher-tfprod-frontend-b-0-1-tk8cc
Events:
  Type     Reason               Age                   From                                                                                                 Message
  ----     ------               ----                  ----                                                                                                 -------
  Normal   WaitForPodScheduled  17m (x2 over 18m)     persistentvolume-controller                                                                          waiting for pod virt-launcher-tfprod-frontend-b-0-1-tk8cc to be scheduled
  Normal   Provisioning         3m19s (x12 over 17m)  linstor.csi.linbit.com_linstor-csi-controller-7455cd496b-njplx_0c477d26-4f4e-4083-a07d-bad13a40acd0  External provisioner is provisioning volume for claim "tfprod/disk-tfprod-frontend-b-0-1-boot"
  Warning  ProvisioningFailed   3m13s (x12 over 17m)  linstor.csi.linbit.com_linstor-csi-controller-7455cd496b-njplx_0c477d26-4f4e-4083-a07d-bad13a40acd0  failed to provision volume with StorageClass "linstor-data-r2": rpc error: code = Internal desc = CreateVolume failed for pvc-3f11662c-4c0a-4195-a79b-64a04e482bce: rpc error: code = ResourceExhausted desc = failed to enough replicas on requisite nodes: Message: 'Not enough available nodes'; Details: 'Not enough nodes fulfilling the following auto-place criteria:
 * has a deployed storage pool named TransactionList [data]
 * the storage pools have to have at least '52428800' free space
 * the current access context has enough privileges to use the node and the storage pool
 * the node is online

Auto-place configuration details:
  Place Count: 2
  Replicas on same nodes: TransactionList [Aux/zone]
  Don't place with resource (List): [pvc-3f11662c-4c0a-4195-a79b-64a04e482bce]
  Node name: [b-hv-3, c-hv-5]
  Storage pool name: TransactionList [data]
  Layer stack: TransactionList [DRBD, STORAGE]

Auto-placing resource: pvc-3f11662c-4c0a-4195-a79b-64a04e482bce'
  Normal  ExternalProvisioning  2m56s (x62 over 17m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "linstor.csi.linbit.com" or manually created by system administrator

Despite the fact that the PVC already has volume.kubernetes.io/selected-node: b-hv-3 assigned, why does it try to add c-hv-5 to the node list?
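
To compare what LINSTOR sees on the two requisite nodes, the storage pools and the Aux/zone property can be checked directly with the LINSTOR client, roughly like this (a sketch using standard client commands; node names taken from the error above):

# linstor node list-properties b-hv-3
# linstor node list-properties c-hv-5
# linstor storage-pool list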
