Commit 9774cc7

Author: Traci Morrison
Merge pull request #10980 from tmorriso-rh/storage-terminology-change
Update name change for CNS and CRS
2 parents: 0739718 + 0615bf5

File tree

5 files changed: +14, -15 lines


_snippets/glusterfs.adoc

Lines changed: 2 additions & 2 deletions
@@ -19,8 +19,8 @@ How to use this file:
 :gluster-role-link: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
 ifdef::openshift-enterprise[]
 :gluster: Red Hat Gluster Storage
-:gluster-native: Container-Native Storage
-:gluster-external: Container-Ready Storage
+:gluster-native: converged mode
+:gluster-external: independent mode
 :gluster-install-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/installation_guide/
 :gluster-admin-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/
 :cns-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/
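The renamed values above are AsciiDoc document attributes: every `{gluster-native}` or `{gluster-external}` reference elsewhere in the docs picks up the new wording when the pages are rebuilt, which is why the other files in this commit only need to swap literal "CNS"/"CRS" text for attribute references. A minimal sketch of that substitution step in Python (the attribute names and values come from the snippet above; the `substitute` helper itself is hypothetical, not part of any AsciiDoc toolchain):

```python
import re

# Attribute definitions, as set in _snippets/glusterfs.adoc after this commit.
attributes = {
    "gluster": "Red Hat Gluster Storage",
    "gluster-native": "converged mode",
    "gluster-external": "independent mode",
}

def substitute(text: str, attrs: dict) -> str:
    """Replace {name} references with their defined values.

    Undefined references are left untouched, mirroring how an
    unresolved AsciiDoc attribute simply renders as-is.
    """
    return re.sub(
        r"\{([\w-]+)\}",
        lambda m: attrs.get(m.group(1), m.group(0)),
        text,
    )

line = "For Red Hat support, a {gluster-native} subscription is required."
print(substitute(line, attributes))
# -> For Red Hat support, a converged mode subscription is required.
```

This is why the rename only touches the attribute definitions in one snippet plus the handful of places that had hard-coded the old names.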

install/disconnected_install.adoc

Lines changed: 2 additions & 2 deletions
@@ -262,7 +262,7 @@ $ docker pull registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
 +
 [IMPORTANT]
 ====
-For Red Hat support, a Container-Native Storage (CNS) subscription is required for `rhgs3/` images.
+For Red Hat support, a {gluster-native} subscription is required for `rhgs3/` images.
 ====
 +
 [IMPORTANT]
@@ -413,7 +413,7 @@ $ docker save -o ose3-images.tar \
 +
 [IMPORTANT]
 ====
-For Red Hat support, a CNS subscription is required for `rhgs3/` images.
+For Red Hat support, a {gluster-native} subscription is required for `rhgs3/` images.
 ====
 
 . If you synchronized the metrics and log aggregation images, export:

install_config/persistent_storage/topics/glusterfs_overview_containerized.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,6 +7,6 @@ image::OpenShift_Containerization_Gluster_412816_0716_JCS_converged.png["Archite
 
 ifdef::openshift-enterprise[]
 {gluster-native} is available starting with {gluster} 3.1 update 3. See
-link:{cns-link}[Container-Native Storage for OpenShift Container Platform] for
+link:{cns-link}[{gluster-native} for OpenShift Container Platform] for
 additional documentation.
 endif::[]

release_notes/ocp_3_10_release_notes.adoc

Lines changed: 1 addition & 2 deletions
@@ -79,8 +79,7 @@ Persistent volume (PV) resize is currently in
 xref:ocp-310-technology-preview[Technology Preview] and not for production
 workloads.
 
-You can expand persistent volume claims online from {product-title} for CNS
-glusterFS.
+You can expand persistent volume claims online from {product-title} for {gluster-native} glusterFS.
 
 . Create a storage class with `allowVolumeExpansion=true`.
 . The PVC uses the storage class and submits a claim.
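The first step in the release note above can be sketched as a StorageClass manifest. This is a hedged illustration, not content from the commit: the class name and the heketi `resturl` value are assumptions you would replace with your own.

```yaml
# Hypothetical sketch of a StorageClass permitting online PVC expansion.
# The name and resturl are placeholders, not values from this commit.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-expandable
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"   # assumed heketi endpoint
allowVolumeExpansion: true
```

A PVC that references this class can then have its requested size edited upward, which is the online expansion the release note describes.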

scaling_performance/optimizing_storage.adoc

Lines changed: 8 additions & 8 deletions
@@ -41,24 +41,24 @@ a|* Presented to the operating system (OS) as a block device
 bypassing the file system
 * Also referred to as a Storage Area Network (SAN)
 * Non-shareable, which means that only one client at a time can mount an endpoint of this type
-| CNS/CRS GlusterFS footnoteref:[dynamicPV,CNS/CRS GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.] iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:[dynamicPV], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:[dynamicPV], Azure Disk
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV,{gluster-native}/{gluster-external} GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.] iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:[dynamicPV], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:[dynamicPV], Azure Disk
 
 |File
 a| * Presented to the OS as a file system export to be mounted
 * Also referred to as Network Attached Storage (NAS)
 * Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
-| CNS/CRS GlusterFS footnoteref:[dynamicPV], RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plugin.], Azure File, Vendor NFS, Vendor GlusterFS footnoteref:[glusterfs, Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], Azure File, AWS EFS
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plugin.], Azure File, Vendor NFS, Vendor GlusterFS footnoteref:[glusterfs, Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], Azure File, AWS EFS
 
 | Object
 a| * Accessible through a REST API endpoint
 * Configurable for use in the {product-title} Registry
 * Applications must build their drivers into the application and/or container.
-| CNS/CRS GlusterFS footnoteref:[dynamicPV], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:[glusterfs], Vendor Swift footnoteref:[glusterfs]
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:[glusterfs], Vendor Swift footnoteref:[glusterfs]
 |===
 
 [NOTE]
 ====
-As of {product-title} 3.6.1, Container-Native Storage (CNS) GlusterFS (a hyperconverged or cluster-hosted storage solution) and Container-Ready Storage (CRS)
+As of {product-title} 3.6.1, {gluster-native} GlusterFS (a hyperconverged or cluster-hosted storage solution) and {gluster-external}
 GlusterFS (an externally hosted storage solution) provides interfaces for block, file, and object storage for the purpose of the {product-title} registry, logging, and metrics.
 ====
 
@@ -115,7 +115,7 @@ In a non-scaled/high-availability (HA) {product-title} registry cluster deployme
 
 * The preferred storage technology is object storage followed by block storage. The
 storage technology does not need to support RWX access mode.
-* The storage technology must ensure read-after-write consistency. All NAS storage (excluding CNS/CRS GlusterFS as it uses an object storage interface) are not
+* The storage technology must ensure read-after-write consistency. All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses an object storage interface) are not
 recommended for {product-title} Registry cluster deployment with production workloads.
 * While `hostPath` volumes are configurable for a non-scaled/HA {product-title} Registry, they are not recommended for cluster deployment.
 
@@ -131,7 +131,7 @@ In a scaled/HA {product-title} registry cluster deployment:
 
 * The preferred storage technology is object storage. The storage technology must support RWX access mode and must ensure read-after-write consistency.
 * File storage and block storage are not recommended for a scaled/HA {product-title} registry cluster deployment with production workloads.
-* All NAS storage (excluding CNS/CRS GlusterFS as it uses an object storage interface) are
+* All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses an object storage interface) are
 not recommended for {product-title} Registry cluster deployment with production workloads.
 
 [WARNING]
@@ -145,7 +145,7 @@ Corruption may occur when using NFS to back {product-title} scaled/HA registry w
 In an {product-title} hosted metrics cluster deployment:
 
 * The preferred storage technology is block storage.
-* It is not recommended to use NAS storage (excluding CNS/CRS GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.
+* It is not recommended to use NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.
 
 [WARNING]
 ====
@@ -158,7 +158,7 @@ Corruption may occur when using NFS to back a hosted metrics cluster deployment
 In an {product-title} hosted logging cluster deployment:
 
 * The preferred storage technology is block storage.
-* It is not recommended to use NAS storage (excluding CNS/CRS GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.
+* It is not recommended to use NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.
 
 [WARNING]
 ====
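The access-mode distinction the recommendations above lean on (RWX required for a scaled/HA registry, not required for a non-scaled one) shows up concretely in the claim a registry would make. A hedged sketch, where the claim name, size, and storage class name are hypothetical placeholders rather than anything from this commit:

```yaml
# Hypothetical PVC for a scaled/HA registry backend.
# Name, size, and storageClassName are illustrative placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
spec:
  accessModes:
    - ReadWriteMany        # RWX: multiple registry pods mount the same volume
  resources:
    requests:
      storage: 100Gi
  storageClassName: glusterfs-storage   # assumed {gluster-native} class
```

A non-scaled registry could request `ReadWriteOnce` instead, which widens the set of usable backends per the table above.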
