- Overview
- Available dynamically provisioned plug-ins
- Defining a StorageClass
- Basic StorageClass object definition
- StorageClass annotations
- OpenStack Cinder object definition
- AWS ElasticBlockStore (EBS) object definition
- GCE PersistentDisk (gcePD) object definition
- GlusterFS object definition
- Ceph RBD object definition
- Trident object definition
- VMware vSphere object definition
- Azure File object definition
- Azure Disk object definition
- HPE Nimble Storage Object Definition
- Changing the default StorageClass
- Additional information and examples
The StorageClass resource object describes and classifies storage that can be
requested, as well as provides a means for passing parameters for
dynamically provisioned storage on demand. StorageClass objects can also serve as
a management mechanism for controlling different levels of storage and access
to the storage. Cluster administrators (`cluster-admin`) or storage
administrators (`storage-admin`) define and create the StorageClass objects
that users can request without needing any intimate knowledge about the
underlying storage volume sources.
The {product-title} persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in {product-title}. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs.
Note: To enable dynamic provisioning, add the …
{product-title} provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:
Storage Type | Provisioner Plug-in Name | Notes
---|---|---
OpenStack Cinder | `kubernetes.io/cinder` |
AWS Elastic Block Store (EBS) | `kubernetes.io/aws-ebs` | For dynamic provisioning when using multiple clusters in different zones, tag each node with `kubernetes.io/cluster/<cluster_name>`, where `<cluster_name>` is unique per cluster.
GCE Persistent Disk (gcePD) | `kubernetes.io/gce-pd` | In multi-zone configurations, it is advisable to run one {product-title} cluster per GCE project to prevent PVs from being created in zones where no node of the current cluster exists.
GlusterFS | `kubernetes.io/glusterfs` |
Ceph RBD | `kubernetes.io/rbd` |
Trident from NetApp | `netapp.io/trident` | Storage orchestrator for NetApp ONTAP, SolidFire, and E-Series storage.
VMware vSphere | `kubernetes.io/vsphere-volume` |
Azure File | `kubernetes.io/azure-file` |
Azure Disk | `kubernetes.io/azure-disk` |
HPE Nimble Storage | `hpe.com/nimble` | Dynamic provisioning of HPE Nimble Storage resources using the HPE Nimble Kube Storage Controller.
Important: Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.
StorageClass objects are currently globally scoped and must be created by
`cluster-admin` or `storage-admin` users.
Note: For GCE and AWS, a default StorageClass is created during {product-title} installation. You can change the default StorageClass or delete it.
The following sections describe the basic object definition for a StorageClass and specific examples for each of the supported plug-in types.
kind: StorageClass (1)
apiVersion: storage.k8s.io/v1 (2)
metadata:
name: foo (3)
annotations: (4)
...
provisioner: kubernetes.io/plug-in-type (5)
parameters: (6)
param1: value
...
paramN: value
1. (required) The API object type.
2. (required) The current apiVersion.
3. (required) The name of the StorageClass.
4. (optional) Annotations for the StorageClass.
5. (required) The type of provisioner associated with this storage class.
6. (optional) The parameters required for the specific provisioner; these vary from plug-in to plug-in.
To set a StorageClass as the cluster-wide default:
storageclass.kubernetes.io/is-default-class: "true"
This enables any persistent volume claim (PVC) that does not specify a StorageClass to automatically be provisioned through the default StorageClass.
Note: The beta annotation `storageclass.beta.kubernetes.io/is-default-class` is still working; however, it will be removed in a future release.
To set a StorageClass description:
kubernetes.io/description: My StorageClass Description
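As a sketch, both annotations can appear together in a single StorageClass definition; the name, provisioner, parameters, and description below are illustrative, not prescribed values:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard-default   # illustrative name
  annotations:
    # makes this the cluster-wide default StorageClass
    storageclass.kubernetes.io/is-default-class: "true"
    # free-form, human-readable description
    kubernetes.io/description: Default dynamically provisioned storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```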
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gold
provisioner: kubernetes.io/cinder
parameters:
type: fast (1)
availability: nova (2)
fsType: ext4 (3)
1. Volume type created in Cinder. Default is empty.
2. Availability zone. If not specified, volumes are generally round-robined across all active zones where the {product-title} cluster has a node.
3. File system that is created on dynamically provisioned volumes. This value is copied to the `fsType` field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is `ext4`.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1 (1)
zone: us-east-1d (2)
iopsPerGB: "10" (3)
encrypted: "true" (4)
kmsKeyId: keyvalue (5)
fsType: ext4 (6)
1. Select from `io1`, `gp2`, `sc1`, and `st1`. The default is `gp2`.
2. AWS zone. If no zone is specified, volumes are generally round-robined across all active zones where the {product-title} cluster has a node. The `zone` and `zones` parameters must not be used at the same time.
3. Only for `io1` volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute the IOPS of the volume. For example, a 100 GiB volume requested with `iopsPerGB: "10"` is provisioned with 1,000 IOPS. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See AWS documentation for further details.
4. Denotes whether to encrypt the EBS volume. Valid values are `true` or `false`.
5. Optional. The full ARN of the key to use when encrypting the volume. If none is supplied, but `encrypted` is set to `true`, then AWS generates a key. See AWS documentation for a valid ARN value.
6. File system that is created on dynamically provisioned volumes. This value is copied to the `fsType` field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is `ext4`.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-standard (1)
zone: us-central1-a (2)
zones: us-central1-a, us-central1-b, us-east1-b (3)
fsType: ext4 (4)
1. Select either `pd-standard` or `pd-ssd`. The default is `pd-standard`.
2. GCE zone. If no zone is specified, volumes are generally round-robined across all active zones where the {product-title} cluster has a node. The `zone` and `zones` parameters must not be used at the same time.
3. A comma-separated list of GCE zones. If no zone is specified, volumes are generally round-robined across all active zones where the {product-title} cluster has a node. The `zone` and `zones` parameters must not be used at the same time.
4. File system that is created on dynamically provisioned volumes. This value is copied to the `fsType` field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is `ext4`.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast
provisioner: kubernetes.io/rbd
parameters:
monitors: 10.16.153.105:6789 (1)
adminId: admin (2)
adminSecretName: ceph-secret (3)
adminSecretNamespace: kube-system (4)
pool: kube (5)
userId: kube (6)
userSecretName: ceph-secret-user (7)
fsType: ext4 (8)
1. Ceph monitors, comma-delimited. Required.
2. Ceph client ID that is capable of creating images in the pool. Default is `admin`.
3. Secret name for `adminId`. Required. The provided secret must have type `kubernetes.io/rbd`.
4. The namespace for `adminSecretName`. Default is `default`.
5. Ceph RBD pool. Default is `rbd`.
6. Ceph client ID that is used to map the Ceph RBD image. Default is the same as `adminId`.
7. The name of the Ceph secret for `userId` to map the Ceph RBD image. It must exist in the same namespace as PVCs. Required.
8. File system that is created on dynamically provisioned volumes. This value is copied to the `fsType` field of dynamically provisioned persistent volumes, and the file system is created when the volume is mounted for the first time. The default value is `ext4`.
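The `adminSecretName` above must reference a secret of type `kubernetes.io/rbd`. A minimal sketch of such a secret follows; the key value is a placeholder, not a real Ceph key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: kubernetes.io/rbd            # required type for the RBD provisioner
data:
  key: <base64-encoded Ceph admin key>   # placeholder; supply your own
```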
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: gold
provisioner: netapp.io/trident (1)
parameters: (2)
media: "ssd"
provisioningType: "thin"
snapshots: "true"
Trident uses the parameters as selection criteria for the different pools of storage that are registered with it. Trident itself is configured separately.
- For more information about installing Trident with {product-title}, see the Trident documentation.
- For more information about supported parameters, see the storage attributes section of the Trident documentation.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: slow
provisioner: kubernetes.io/vsphere-volume (1)
parameters:
diskformat: thin (2)
1. For more information about using VMware vSphere with {product-title}, see the VMware vSphere documentation.
2. `diskformat`: `thin`, `zeroedthick`, and `eagerzeroedthick` are valid values. See vSphere docs for details. Default: `thin`.
To configure Azure file dynamic provisioning:
1. Create the role in the user's project:

   $ cat azf-role.yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: Role
   metadata:
     name: system:controller:persistent-volume-binder
     namespace: <user's project name>
   rules:
   - apiGroups: [""]
     resources: ["secrets"]
     verbs: ["create", "get", "delete"]

2. Create the role binding to the `persistent-volume-binder` service account in the `kube-system` project:

   $ cat azf-rolebind.yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: system:controller:persistent-volume-binder
     namespace: <user's project>
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: Role
     name: system:controller:persistent-volume-binder
   subjects:
   - kind: ServiceAccount
     name: persistent-volume-binder
     namespace: kube-system

3. Add the service account as `admin` to the user's project:

   $ oc policy add-role-to-user admin system:serviceaccount:kube-system:persistent-volume-binder -n <user's project>

4. Create a storage class for the Azure file:

   $ cat azfsc.yaml | oc create -f -
   kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: azfsc
   provisioner: kubernetes.io/azure-file
   mountOptions:
   - dir_mode=0777
   - file_mode=0777
The user can now create a PVC that uses this storage class.
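For example, a PVC that requests storage from this class might look like the following; the claim name and size are illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azure-file-claim     # illustrative name
spec:
  accessModes:
  - ReadWriteMany            # Azure File supports shared access
  resources:
    requests:
      storage: 5Gi           # illustrative size
  storageClassName: azfsc    # the StorageClass created above
```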
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: slow
provisioner: kubernetes.io/azure-disk
parameters:
storageAccount: azure_storage_account_name (1)
storageaccounttype: Standard_LRS (2)
kind: Dedicated (3)
1. Azure storage account name. This must reside in the same resource group as the cluster. If a storage account is specified, the `location` is ignored. If a storage account is not specified, a new storage account is created in the same resource group as the cluster. If you specify a `storageAccount`, the value for `kind` must be `Dedicated`.
2. Azure storage account SKU tier. Default is empty. Note: Premium VMs can attach both `Standard_LRS` and `Premium_LRS` disks, standard VMs can only attach `Standard_LRS` disks, managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks.
3. Possible values are `Shared` (default), `Dedicated`, and `Managed`.
   - If `kind` is set to `Shared`, Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster.
   - If `kind` is set to `Managed`, Azure creates new managed disks.
   - If `kind` is set to `Dedicated` and a `storageAccount` is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work:
     - The specified storage account must be in the same region.
     - Azure Cloud Provider must have write access to the storage account.
   - If `kind` is set to `Dedicated` and a `storageAccount` is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster.
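As a sketch, a class that provisions managed disks instead would omit `storageAccount` and set `kind: Managed`; the name and SKU below are illustrative:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium            # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS  # attachable to premium VMs only
  kind: Managed                    # Azure creates new managed disks
```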
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: transactionaldb
provisioner: hpe.com/nimble (1)
parameters: (2)
description: "Persistent Volume provisioned from the transactionaldb StorageClass"
pool: "allflash"
folder: "Production"
protectionTemplate: "Local48h-Cloud90d"
perfPolicy: "SQL Server"
limitIOPS: "10000"
dedupe: "true"
fsMode: "0770"
1. To specify this provisioner, you must first install and configure the HPE Nimble Kube Storage Controller for {product-title}. See the HPE Nimble Storage Integration Guide for Red Hat OpenShift and OKD.
2. For a complete list of parameters that the dynamic provisioner and the FlexVolume driver support, see Supported StorageClass parameters.
Note: The HPE Nimble Kube Storage Controller relies on a FlexVolume driver that is included in the HPE Nimble Storage Linux Toolkit, which HPE Nimble Storage customers and partners can obtain at HPE InfoSight. For a brief overview, see HPE Nimble Storage on the Primed Partners page.
If you are using GCE or AWS, use the following process to change the default StorageClass:
1. List the StorageClass:

   $ oc get storageclass
   NAME            TYPE
   gp2 (default)   kubernetes.io/aws-ebs (1)
   standard        kubernetes.io/gce-pd

   (1) `(default)` denotes the default StorageClass.

2. Change the value of the annotation `storageclass.kubernetes.io/is-default-class` to `false` for the default StorageClass:

   $ oc patch storageclass gp2 -p '{"metadata": {"annotations": \
       {"storageclass.kubernetes.io/is-default-class": "false"}}}'

3. Make another StorageClass the default by adding or modifying the annotation as `storageclass.kubernetes.io/is-default-class=true`:

   $ oc patch storageclass standard -p '{"metadata": {"annotations": \
       {"storageclass.kubernetes.io/is-default-class": "true"}}}'

4. Verify the changes:

   $ oc get storageclass
   NAME                 TYPE
   gp2                  kubernetes.io/aws-ebs
   standard (default)   kubernetes.io/gce-pd
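To confirm the new default is in effect, you can create a PVC that omits `storageClassName`; it should be provisioned through the default StorageClass. The claim name and size below are illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: default-class-claim   # illustrative name
spec:
  # no storageClassName: the cluster-wide default StorageClass is used
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # illustrative size
```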