Splits LVMS content into modules
Steven Smith authored and openshift-cherrypick-robot committed Dec 20, 2022
1 parent 90a2eb5 commit fa6d9c5
Showing 16 changed files with 174 additions and 143 deletions.
@@ -1,147 +1,22 @@
:_content-type: ASSEMBLY
[id="microshift-storage-plugin-overview"]
= MicroShift storage plug-in overview
= MicroShift storage plugin overview
include::_attributes/common-attributes.adoc[]
:context: microshift-storage-plugin-overview

toc::[]

{product-title} enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plug-in for managing LVM volumes for Kubernetes.
{product-title} enables dynamic storage provisioning that is ready for immediate use with the logical volume manager storage (LVMS) Container Storage Interface (CSI) provider. The LVMS plugin is the Red Hat downstream version of TopoLVM, a CSI plugin for managing LVM volumes for Kubernetes.

LVMS provisions new logical volume management (LVM) logical volumes (LVs) for container workloads with appropriately configured persistent volume claims (PVCs). Each PVC references a storage class that represents an LVM volume group (VG) on the host node. LVs are provisioned only for scheduled pods.
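
For illustration only, a device class might be exposed to workloads through a storage class similar to the following sketch. The provisioner name and parameter key follow upstream TopoLVM conventions and are assumptions here, not values taken from this documentation; the storage class name matches the one visible in the error output later in this assembly:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topolvm-provisioner
provisioner: topolvm.cybozu.com # assumed upstream TopoLVM provisioner name
parameters:
  "topolvm.cybozu.com/device-class": "hdd" # assumed key; selects a device class from lvmd.yaml
volumeBindingMode: WaitForFirstConsumer # LVs are provisioned only for scheduled pods
allowVolumeExpansion: true
----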

[id="lvms-deployment"]
== LVMS deployment

LVMS is automatically deployed onto the cluster in the `openshift-storage` namespace after {product-title} boots.
include::modules/microshift-lvms-deployment.adoc[leveloffset=+1]

LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the volume group's remaining free storage. For more information about `StorageCapacity` tracking, see link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].
include::modules/microshift-lvms-configuring.adoc[leveloffset=+1]

[id="lvms-configuring"]
== Configuring the LVMS
include::modules/microshift-setting-lvms-path.adoc[leveloffset=+2]

{product-title} supports passing through a user's LVMS configuration and allows users to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. The LVMS configuration file can be edited at any time. You must restart {product-title} to deploy configuration changes.

The following `config.yaml` file shows a basic LVMS configuration:

.LVMS YAML configuration
[source,yaml]
----
socket-name: <1>
device-classes: <2>
  - name: <3>
    volume-group: <4>
    spare-gb: <5>
    default: <6>
  - name: hdd
    volume-group: hdd-vg
    spare-gb: 10
  - name: striped
    volume-group: multi-pv-vg
    spare-gb: 10
    stripe: <7>
    stripe-size: <8>
  - name: raid
    volume-group: raid-vg
    lvcreate-options: <9>
      - --type=raid1
----
<1> String. The UNIX domain socket endpoint of gRPC. Defaults to `/run/topolvm/lvmd.sock`.
<2> `map[string]DeviceClass`. The `device-class` settings.
<3> String. The name of the `device-class`.
<4> String. The volume group in which the `device-class` creates the logical volumes.
<5> uint64. Storage capacity in GiB to be spared. Defaults to `10`.
<6> Boolean. Indicates that the `device-class` is used by default. Defaults to `false`.
<7> uint. The number of stripes in the logical volume.
<8> String. The amount of data that is written to one device before moving to the next device.
<9> String. Extra arguments to pass to `lvcreate`, for example, `["--type=raid1"]`.
+
[NOTE]
====
Striping can be configured either by using the dedicated options (`stripe` and `stripe-size`) or by using `lvcreate-options`, but not both. Setting striping flags in `lvcreate-options` while also using `stripe` and `stripe-size` leads to duplicate arguments to `lvcreate`. Never set `lvcreate-options: ["--stripes=n"]` and `stripe: n` at the same time. You can, however, combine `stripe` with `lvcreate-options` when `lvcreate-options` is not used for striping. For example:

[source,yaml]
----
stripe: 2
lvcreate-options: ["--mirrors=1"]
----
====

[id="setting-lvms-path"]
=== Setting the LVMS path

The `config.yaml` file for the LVMS must be written to the same directory as the MicroShift `config.yaml` file. If an LVMS `config.yaml` file does not exist, MicroShift creates one and automatically populates the configuration fields with the default settings. The following paths are checked for the `config.yaml` file, depending on which user runs MicroShift:

.LVMS paths
[options="header",cols="1,3"]
|===
| MicroShift user | Configuration directory
|Global administrator | `/etc/microshift/lvmd.yaml`
|===

[id="lvms-system-requirements"]
== LVMS system requirements

{product-title}'s LVMS requires the following system specifications.

[id="lvms-volume-group-name"]
=== Volume group name

The default integration of LVMS assumes a volume group named `rhel`. Prior to launch, the `lvmd.yaml` configuration file must specify an existing volume group on the node with sufficient capacity for workload storage. If the volume group does not exist, the node controller fails to start and enters a `CrashLoopBackOff` state.

[id="lvms-volume-size-increments"]
=== Volume size increments

The LVMS provisions storage in increments of 1 GB. Storage requests are rounded up to the nearest gigabyte (GB). When a volume group's capacity is less than 1 GB, the `PersistentVolumeClaim` registers a `ProvisioningFailed` event, for example:

[source,terminal]
----
Warning ProvisioningFailed 3s (x2 over 5s) topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with
StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
----

[id="using-lvms"]
== Using the LVMS

LVMS is deployed with a default `StorageClass`. Any `PersistentVolumeClaim` object without `.spec.storageClassName` defined automatically has a `PersistentVolume` provisioned from the default `StorageClass`.

Use the following procedure to provision and mount a logical volume to a pod.

.Procedure

* Enter the following command to provision and mount a logical volume to a pod:
+
[source,terminal]
----
$ cat <<'EOF' | oc apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-lv-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["/usr/bin/sh", "-c"]
    args: ["sleep 1h"]
    volumeMounts:
    - mountPath: /mnt
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-lv-pvc
EOF
----
include::modules/microshift-lvms-system-requirements.adoc[leveloffset=+1]

include::modules/microshift-using-lvms.adoc[leveloffset=+1]
2 changes: 1 addition & 1 deletion migrating_from_ocp_3_to_4/planning-migration-3-4.adoc
@@ -89,7 +89,7 @@ For more information, see xref:../storage/persistent_storage/persistent-storage-
[discrete]
==== FlexVolume persistent storage

The FlexVolume plug-in location changed from {product-title} 3.11. The new location in {product-title} {product-version} is `/etc/kubernetes/kubelet-plugins/volume/exec`. Attachable FlexVolume plug-ins are no longer supported.
The FlexVolume plugin location changed from {product-title} 3.11. The new location in {product-title} {product-version} is `/etc/kubernetes/kubelet-plugins/volume/exec`. Attachable FlexVolume plugins are no longer supported.

For more information, see xref:../storage/persistent_storage/persistent-storage-flexvolume.adoc#persistent-storage-using-flexvolume[Persistent storage using FlexVolume].

2 changes: 1 addition & 1 deletion modules/dynamic-provisioning-about.adoc
@@ -25,4 +25,4 @@ having any knowledge of the underlying infrastructure.
Many storage types are available for use as persistent volumes in
{product-title}. While all of them can be statically provisioned by an
administrator, some types of storage are created dynamically using the
built-in provider and plug-in APIs.
built-in provider and plugin APIs.
2 changes: 1 addition & 1 deletion modules/dynamic-provisioning-defining-storage-class.adoc
@@ -23,4 +23,4 @@ storage class.
endif::microshift[]

The following sections describe the basic definition for a
`StorageClass` object and specific examples for each of the supported plug-in types.
`StorageClass` object and specific examples for each of the supported plugin types.
2 changes: 1 addition & 1 deletion modules/dynamic-provisioning-storage-class-definition.adoc
@@ -34,4 +34,4 @@ parameters: <6>
<4> (optional) Annotations for the storage class.
<5> (required) The type of provisioner associated with this storage class.
<6> (optional) The parameters required for the specific provisioner; this
will change from plug-in to plug-in.
will change from plugin to plugin.
54 changes: 54 additions & 0 deletions modules/microshift-lvms-configuring.adoc
@@ -0,0 +1,54 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="lvms-configuring"]
= Configuring the LVMS

{product-title} supports passing through a user's LVMS configuration and allows users to specify custom volume groups, thin volume provisioning parameters, and reserved unallocated volume group space. The LVMS configuration file can be edited at any time. You must restart {product-title} to deploy configuration changes.

The following `config.yaml` file shows a basic LVMS configuration:

.LVMS YAML configuration
[source,yaml]
----
socket-name: <1>
device-classes: <2>
  - name: <3>
    volume-group: <4>
    spare-gb: <5>
    default: <6>
  - name: hdd
    volume-group: hdd-vg
    spare-gb: 10
  - name: striped
    volume-group: multi-pv-vg
    spare-gb: 10
    stripe: <7>
    stripe-size: <8>
  - name: raid
    volume-group: raid-vg
    lvcreate-options: <9>
      - --type=raid1
----
<1> String. The UNIX domain socket endpoint of gRPC. Defaults to `/run/topolvm/lvmd.sock`.
<2> `map[string]DeviceClass`. The `device-class` settings.
<3> String. The name of the `device-class`.
<4> String. The volume group in which the `device-class` creates the logical volumes.
<5> uint64. Storage capacity in GiB to be spared. Defaults to `10`.
<6> Boolean. Indicates that the `device-class` is used by default. Defaults to `false`.
<7> uint. The number of stripes in the logical volume.
<8> String. The amount of data that is written to one device before moving to the next device.
<9> String. Extra arguments to pass to `lvcreate`, for example, `["--type=raid1"]`.
+
[NOTE]
====
Striping can be configured either by using the dedicated options (`stripe` and `stripe-size`) or by using `lvcreate-options`, but not both. Setting striping flags in `lvcreate-options` while also using `stripe` and `stripe-size` leads to duplicate arguments to `lvcreate`. Never set `lvcreate-options: ["--stripes=n"]` and `stripe: n` at the same time. You can, however, combine `stripe` with `lvcreate-options` when `lvcreate-options` is not used for striping. For example:

[source,yaml]
----
stripe: 2
lvcreate-options: ["--mirrors=1"]
----
====
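
Because configuration changes take effect only on restart, a typical follow-up after editing the file is to restart the MicroShift service. The following is a sketch that assumes an RPM-based installation managed by systemd:

[source,terminal]
----
$ sudo systemctl restart microshift
----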
11 changes: 11 additions & 0 deletions modules/microshift-lvms-deployment.adoc
@@ -0,0 +1,11 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="lvms-deployment"]
= LVMS deployment

LVMS is automatically deployed onto the cluster in the `openshift-storage` namespace after {product-title} boots.

LVMS uses `StorageCapacity` tracking to ensure that pods with an LVMS PVC are not scheduled if the requested storage is greater than the volume group's remaining free storage. For more information about `StorageCapacity` tracking, see link:https://kubernetes.io/docs/concepts/storage/storage-capacity/[Storage Capacity].
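
As a quick sketch of how capacity tracking surfaces in the cluster, you can list the `CSIStorageCapacity` objects that the scheduler consults; the namespace shown assumes the default LVMS deployment described above:

[source,terminal]
----
$ oc get csistoragecapacities -n openshift-storage
----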
25 changes: 25 additions & 0 deletions modules/microshift-lvms-system-requirements.adoc
@@ -0,0 +1,25 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="lvms-system-requirements"]
= LVMS system requirements

{product-title}'s LVMS requires the following system specifications.

[id="lvms-volume-group-name"]
== Volume group name

The default integration of LVMS assumes a volume group named `rhel`. Prior to launch, the `lvmd.yaml` configuration file must specify an existing volume group on the node with sufficient capacity for workload storage. If the volume group does not exist, the node controller fails to start and enters a `CrashLoopBackOff` state.
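
For example, you can verify that a suitable volume group exists, or create one, with standard LVM commands; the device path `/dev/sdb` below is a placeholder for a spare disk:

[source,terminal]
----
$ sudo vgs                    # list existing volume groups and their free space
$ sudo vgcreate rhel /dev/sdb # create the default-named volume group (placeholder device)
----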
[id="lvms-volume-size-increments"]
== Volume size increments
The LVMS provisions storage in increments of 1 GB. Storage requests are rounded up to the nearest gigabyte (GB). When a volume group's capacity is less than 1 GB, the `PersistentVolumeClaim` registers a `ProvisioningFailed` event, for example:
[source,terminal]
----
Warning ProvisioningFailed 3s (x2 over 5s) topolvm.cybozu.com_topolvm-controller-858c78d96c-xttzp_0fa83aef-2070-4ae2-bcb9-163f818dcd9f failed to provision volume with
StorageClass "topolvm-provisioner": rpc error: code = ResourceExhausted desc = no enough space left on VG: free=(BYTES_INT), requested=(BYTES_INT)
----
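
As a hypothetical illustration of the rounding behavior, a claim that requests less than a full gigabyte still consumes a 1 GB logical volume:

[source,yaml]
----
# A request of 500Mi is rounded up to a 1 GB logical volume,
# because LVMS provisions storage in 1 GB increments.
resources:
  requests:
    storage: 500Mi
----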
16 changes: 16 additions & 0 deletions modules/microshift-setting-lvms-path.adoc
@@ -0,0 +1,16 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc

:_content-type: CONCEPT
[id="setting-lvms-path"]
= Setting the LVMS path

The `config.yaml` file for the LVMS must be written to the same directory as the MicroShift `config.yaml` file. If an LVMS `config.yaml` file does not exist, MicroShift creates one and automatically populates the configuration fields with the default settings. The following paths are checked for the `config.yaml` file, depending on which user runs MicroShift:

.LVMS paths
[options="header",cols="1,3"]
|===
| MicroShift user | Configuration directory
|Global administrator | `/etc/microshift/lvmd.yaml`
|===
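
Putting the path and the configuration schema together, a minimal `/etc/microshift/lvmd.yaml` might look like the following sketch; the values are illustrative, with `rhel` matching the default volume group name described in the system requirements:

[source,yaml]
----
device-classes:
  - name: default
    volume-group: rhel # must name an existing volume group on the node
    spare-gb: 10
    default: true
----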
50 changes: 50 additions & 0 deletions modules/microshift-using-lvms.adoc
@@ -0,0 +1,50 @@
// Module included in the following assemblies:
//
// * microshift_networking/microshift-storage-plugin-overview.adoc

:_content-type: PROCEDURE
[id="using-lvms"]
= Using the LVMS

LVMS is deployed with a default `StorageClass`. Any `PersistentVolumeClaim` object without `.spec.storageClassName` defined automatically has a `PersistentVolume` provisioned from the default `StorageClass`.

Use the following procedure to provision and mount a logical volume to a pod.

.Procedure

* Enter the following command to provision and mount a logical volume to a pod:
+
[source,terminal]
----
$ cat <<'EOF' | oc apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-lv-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["/usr/bin/sh", "-c"]
    args: ["sleep 1h"]
    volumeMounts:
    - mountPath: /mnt
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-lv-pvc
EOF
----
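
To confirm that the claim bound and that the pod mounted the volume, you can check both objects. These verification commands are a suggested follow-up, not part of the source procedure:

[source,terminal]
----
$ oc get pvc my-lv-pvc # STATUS should be Bound once the pod is scheduled
$ oc get pod my-pod    # the pod should reach Running with /mnt backed by the logical volume
----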
2 changes: 1 addition & 1 deletion modules/persistent-storage-csi-driver-daemonset.adoc
@@ -19,6 +19,6 @@ UNIX Domain Socket available on the node.
* A CSI driver.

The CSI driver deployed on the node should have as few credentials to the
storage back end as possible. {product-title} will only use the node plug-in
storage back end as possible. {product-title} will only use the node plugin
set of CSI calls such as `NodePublish`/`NodeUnpublish` and
`NodeStage`/`NodeUnstage`, if these calls are implemented.
4 changes: 2 additions & 2 deletions modules/storage-expanding-flexvolume.adoc
@@ -23,7 +23,7 @@ Similar to other volume types, FlexVolume volumes can also be expanded when in use.
.Procedure

* To use resizing in the FlexVolume plug-in, you must implement the `ExpandableVolumePlugin` interface using these methods:
* To use resizing in the FlexVolume plugin, you must implement the `ExpandableVolumePlugin` interface using these methods:
`RequiresFSResize`::
If `true`, updates the capacity directly. If `false`, calls the `ExpandFS` method to finish the filesystem resize.
@@ -33,5 +33,5 @@ If `true`, calls `ExpandFS` to resize filesystem after physical volume expansion

[IMPORTANT]
====
Because {product-title} does not support installation of FlexVolume plug-ins on control plane nodes, it does not support control-plane expansion of FlexVolume.
Because {product-title} does not support installation of FlexVolume plugins on control plane nodes, it does not support control-plane expansion of FlexVolume.
====
2 changes: 1 addition & 1 deletion modules/storage-persistent-storage-lifecycle.adoc
@@ -96,7 +96,7 @@ The reclaim policy of a persistent volume tells the cluster what to do with the
`Retain`, `Recycle`, or `Delete`.

* `Retain` reclaim policy allows manual reclamation of the resource for
those volume plug-ins that support it.
those volume plugins that support it.

* `Recycle` reclaim policy recycles the volume back into the pool of
unbound persistent volumes once it is released from its claim.
2 changes: 1 addition & 1 deletion modules/storage-persistent-storage-overview.adoc
@@ -26,7 +26,7 @@ piece of existing storage in the cluster that was either statically provisioned
by the cluster administrator or dynamically provisioned using a `StorageClass` object. It is a resource in the cluster just like a
node is a cluster resource.

PVs are volume plug-ins like `Volumes` but
PVs are volume plugins like `Volumes` but
have a lifecycle that is independent of any individual pod that uses the
PV. PV objects capture the details of the implementation of the storage,
be that NFS, iSCSI, or a cloud-provider-specific storage system.
2 changes: 1 addition & 1 deletion modules/storage-persistent-storage-pv.adoc
@@ -35,7 +35,7 @@ once it is released.
[id="types-of-persistent-volumes_{context}"]
== Types of PVs

{product-title} supports the following persistent volume plug-ins:
{product-title} supports the following persistent volume plugins:

// - GlusterFS
// - Ceph RBD