Merge pull request #10980 from tmorriso-rh/storage-terminology-change
Update name change for CNS and CRS
Traci Morrison authored Jul 23, 2018
2 parents 0739718 + 0615bf5 commit 9774cc7
Showing 5 changed files with 14 additions and 15 deletions.
4 changes: 2 additions & 2 deletions _snippets/glusterfs.adoc
@@ -19,8 +19,8 @@ How to use this file:
:gluster-role-link: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_storage_glusterfs
ifdef::openshift-enterprise[]
:gluster: Red Hat Gluster Storage
-:gluster-native: Container-Native Storage
-:gluster-external: Container-Ready Storage
+:gluster-native: converged mode
+:gluster-external: independent mode
:gluster-install-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/installation_guide/
:gluster-admin-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/
:cns-link: https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/container-native_storage_for_openshift_container_platform/
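These attributes are resolved at build time wherever other topics reference them, which is how this one edit renames the terminology across the docs. A minimal AsciiDoc sketch; the final sentence is hypothetical, not from the changed files:

----
ifdef::openshift-enterprise[]
:gluster: Red Hat Gluster Storage
:gluster-native: converged mode
:gluster-external: independent mode
endif::[]

// Hypothetical usage: after this change, the line below renders as
// "converged mode and independent mode are the two ways to deploy
// Red Hat Gluster Storage ..."
{gluster-native} and {gluster-external} are the two ways to deploy {gluster}.
----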
4 changes: 2 additions & 2 deletions install/disconnected_install.adoc
@@ -262,7 +262,7 @@ $ docker pull registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
+
[IMPORTANT]
====
-For Red Hat support, a Container-Native Storage (CNS) subscription is required for `rhgs3/` images.
+For Red Hat support, a {gluster-native} subscription is required for `rhgs3/` images.
====
+
[IMPORTANT]
@@ -413,7 +413,7 @@ $ docker save -o ose3-images.tar \
+
[IMPORTANT]
====
-For Red Hat support, a CNS subscription is required for `rhgs3/` images.
+For Red Hat support, a {gluster-native} subscription is required for `rhgs3/` images.
====
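The two hunk headers above show the commands this step builds on; a minimal sketch of mirroring one `rhgs3/` image for a disconnected host, assuming the same registry path (the tar file name is illustrative):

----
# Pull the image while connected, then export it for offline transfer.
$ docker pull registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
$ docker save -o rhgs3-images.tar \
    registry.access.redhat.com/rhgs3/rhgs-s3-server-rhel7
----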

. If you synchronized the metrics and log aggregation images, export:
@@ -7,6 +7,6 @@ image::OpenShift_Containerization_Gluster_412816_0716_JCS_converged.png["Archite

ifdef::openshift-enterprise[]
{gluster-native} is available starting with {gluster} 3.1 update 3. See
-link:{cns-link}[Container-Native Storage for OpenShift Container Platform] for
+link:{cns-link}[{gluster-native} for OpenShift Container Platform] for
additional documentation.
endif::[]
3 changes: 1 addition & 2 deletions release_notes/ocp_3_10_release_notes.adoc
@@ -79,8 +79,7 @@ Persistent volume (PV) resize is currently in
xref:ocp-310-technology-preview[Technology Preview] and not for production
workloads.

-You can expand persistent volume claims online from {product-title} for CNS
-glusterFS.
+You can expand persistent volume claims online from {product-title} for {gluster-native} glusterFS.

. Create a storage class with `allowVolumeExpansion=true`.
. The PVC uses the storage class and submits a claim.
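A minimal sketch of those two steps with `oc`, patching an existing storage class rather than creating one; the class name, claim name, and size are hypothetical:

----
# Turn on expansion for an existing storage class (strategic merge patch).
$ oc patch storageclass glusterfs-storage \
    -p '{"allowVolumeExpansion": true}'

# Request a larger size on a bound claim; the volume is resized online.
$ oc patch pvc my-claim \
    -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'
----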
16 changes: 8 additions & 8 deletions scaling_performance/optimizing_storage.adoc
@@ -41,24 +41,24 @@ a|* Presented to the operating system (OS) as a block device
bypassing the file system
* Also referred to as a Storage Area Network (SAN)
* Non-shareable, which means that only one client at a time can mount an endpoint of this type
-| CNS/CRS GlusterFS footnoteref:[dynamicPV,CNS/CRS GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.] iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:[dynamicPV], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:[dynamicPV], Azure Disk
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV,{gluster-native}/{gluster-external} GlusterFS, Ceph RBD, OpenStack Cinder, AWS EBS, Azure Disk, GCE persistent disk, and VMware vSphere support dynamic persistent volume (PV) provisioning natively in {product-title}.] iSCSI, Fibre Channel, Ceph RBD, OpenStack Cinder, AWS EBS footnoteref:[dynamicPV], Dell/EMC Scale.IO, VMware vSphere Volume, GCE Persistent Disk footnoteref:[dynamicPV], Azure Disk

|File
a| * Presented to the OS as a file system export to be mounted
* Also referred to as Network Attached Storage (NAS)
* Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales.
-| CNS/CRS GlusterFS footnoteref:[dynamicPV], RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plugin.] , Azure File, Vendor NFS, Vendor GlusterFS footnoteref:[glusterfs, Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], Azure File, AWS EFS
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], RHEL NFS, NetApp NFS footnoteref:[netappnfs,NetApp NFS supports dynamic PV provisioning when using the Trident plugin.] , Azure File, Vendor NFS, Vendor GlusterFS footnoteref:[glusterfs, Vendor GlusterFS, Vendor S3, and Vendor Swift supportability and configurability may vary.], Azure File, AWS EFS

| Object
a| * Accessible through a REST API endpoint
* Configurable for use in the {product-title} Registry
* Applications must build their drivers into the application and/or container.
-| CNS/CRS GlusterFS footnoteref:[dynamicPV], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:[glusterfs], Vendor Swift footnoteref:[glusterfs]
+| {gluster-native}/{gluster-external} GlusterFS footnoteref:[dynamicPV], Ceph Object Storage (RADOS Gateway), OpenStack Swift, Aliyun OSS, AWS S3, Google Cloud Storage, Azure Blob Storage, Vendor S3 footnoteref:[glusterfs], Vendor Swift footnoteref:[glusterfs]
|===

[NOTE]
====
-As of {product-title} 3.6.1, Container-Native Storage (CNS) GlusterFS (a hyperconverged or cluster-hosted storage solution) and Container-Ready Storage (CRS)
+As of {product-title} 3.6.1, {gluster-native} GlusterFS (a hyperconverged or cluster-hosted storage solution) and {gluster-external}
GlusterFS (an externally hosted storage solution) provides interfaces for block, file, and object storage for the purpose of the {product-title} registry, logging, and metrics.
====

@@ -115,7 +115,7 @@ In a non-scaled/high-availability (HA) {product-title} registry cluster deployme

* The preferred storage technology is object storage followed by block storage. The
storage technology does not need to support RWX access mode.
-* The storage technology must ensure read-after-write consistency. All NAS storage (excluding CNS/CRS GlusterFS as it uses an object storage interface) are not
+* The storage technology must ensure read-after-write consistency. All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses an object storage interface) are not
recommended for {product-title} Registry cluster deployment with production workloads.
* While `hostPath` volumes are configurable for a non-scaled/HA {product-title} Registry, they are not recommended for cluster deployment.

@@ -131,7 +131,7 @@ In a scaled/HA {product-title} registry cluster deployment:

* The preferred storage technology is object storage. The storage technology must support RWX access mode and must ensure read-after-write consistency.
* File storage and block storage are not recommended for a scaled/HA {product-title} registry cluster deployment with production workloads.
-* All NAS storage (excluding CNS/CRS GlusterFS as it uses an object storage interface) are
+* All NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses an object storage interface) are
not recommended for {product-title} Registry cluster deployment with production workloads.

[WARNING]
@@ -145,7 +145,7 @@ Corruption may occur when using NFS to back {product-title} scaled/HA registry w
In an {product-title} hosted metrics cluster deployment:

* The preferred storage technology is block storage.
-* It is not recommended to use NAS storage (excluding CNS/CRS GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.
+* It is not recommended to use NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.

[WARNING]
====
@@ -158,7 +158,7 @@ Corruption may occur when using NFS to back a hosted metrics cluster deployment
In an {product-title} hosted logging cluster deployment:

* The preferred storage technology is block storage.
-* It is not recommended to use NAS storage (excluding CNS/CRS GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.
+* It is not recommended to use NAS storage (excluding {gluster-native}/{gluster-external} GlusterFS as it uses a block storage interface from iSCSI) for a hosted metrics cluster deployment with production workloads.

[WARNING]
====
