Removed rc2 path from hyperlinks
Updated the hyperlinks by removing the rc2 path

Signed-off-by: ranjithwingrider <ranjith.raveendran@mayadata.io>
ranjithwingrider committed Jun 21, 2019
1 parent 8d4e673 commit 891e306
Showing 33 changed files with 169 additions and 162 deletions.
16 changes: 8 additions & 8 deletions docs/architecture.md
@@ -61,7 +61,7 @@ Currently, the OpenEBS provisioner supports only one type of binding i.e. iSCSI.

m-apiserver runs as a Pod. As the name suggests, m-apiserver exposes the OpenEBS REST APIs.

m-apiserver is also responsible for creating deployment specification files required for creating the volume pods. After generating these specification files, it invokes kube-apiserver for scheduling the pods accordingly. At the end of volume provisioning by the OpenEBS PV provisioner, a Kubernetes object PV is created and is mounted on the application pod. The PV is hosted by the controller pod which is supported by a set of replica pods in different nodes. The controller pod and replica pods are part of the data plane and are described in more detail in the [Storage Engines](/1.0.0-RC2/docs/next/casengines.html) section.
m-apiserver is also responsible for creating deployment specification files required for creating the volume pods. After generating these specification files, it invokes kube-apiserver for scheduling the pods accordingly. At the end of volume provisioning by the OpenEBS PV provisioner, a Kubernetes object PV is created and is mounted on the application pod. The PV is hosted by the controller pod which is supported by a set of replica pods in different nodes. The controller pod and replica pods are part of the data plane and are described in more detail in the [Storage Engines](/docs/next/casengines.html) section.

Another important task of the m-apiserver is volume policy management. OpenEBS provides a very granular specification for expressing policies. m-apiserver interprets these YAML specifications, converts them into enforceable components, and enforces them through volume-management sidecars.
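As a rough illustration of such a policy, the sketch below shows a StorageClass carrying a `cas.openebs.io/config` annotation; the class name and the policy values are placeholders, not prescriptions.

```yaml
# Illustrative only: a StorageClass whose cas.openebs.io/config annotation
# carries a volume policy that m-apiserver interprets and enforces through
# the volume-management sidecars. Name and values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-3-replicas
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```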

@@ -98,15 +98,15 @@ The OpenEBS data plane is responsible for the actual volume IO path. A storage e

### Jiva

The Jiva storage engine is developed with Rancher's LongHorn and gotgt as the base. The Jiva engine is written in the Go language and runs in user space. The LongHorn controller synchronously replicates the incoming IO to the LongHorn replicas. The replica considers a Linux sparse file as the foundation for building storage features such as thin provisioning, snapshotting, rebuilding, etc. More details on the Jiva architecture are available [here](/1.0.0-RC2/docs/next/jiva.html).
The Jiva storage engine is developed with Rancher's LongHorn and gotgt as the base. The Jiva engine is written in the Go language and runs in user space. The LongHorn controller synchronously replicates the incoming IO to the LongHorn replicas. The replica considers a Linux sparse file as the foundation for building storage features such as thin provisioning, snapshotting, rebuilding, etc. More details on the Jiva architecture are available [here](/docs/next/jiva.html).

### cStor

The cStor data engine is written in C and has a high-performing iSCSI target and a Copy-On-Write block system to provide data integrity, data resiliency, and point-in-time snapshots and clones. cStor has a pool feature that aggregates the disks on a node in striped, mirrored or RAIDZ mode to give larger units of capacity and performance. cStor also provides synchronous replication of data to multiple nodes, even across zones, so that node loss or node reboots do not cause unavailability of data. See [here](/1.0.0-RC2/docs/next/cstor.html) for more details on cStor features and architecture.
The cStor data engine is written in C and has a high-performing iSCSI target and a Copy-On-Write block system to provide data integrity, data resiliency, and point-in-time snapshots and clones. cStor has a pool feature that aggregates the disks on a node in striped, mirrored or RAIDZ mode to give larger units of capacity and performance. cStor also provides synchronous replication of data to multiple nodes, even across zones, so that node loss or node reboots do not cause unavailability of data. See [here](/docs/next/cstor.html) for more details on cStor features and architecture.
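For illustration, a striped pool of this kind is declared through a StoragePoolClaim along the lines of the sketch below; the name and the `maxPools` count are assumptions to be adapted to the cluster.

```yaml
# Illustrative StoragePoolClaim (auto method): aggregates NDM-discovered disks
# on up to three nodes into striped cStor pools. Name and counts are placeholders.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  maxPools: 3
  poolSpec:
    poolType: striped
```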

### LocalPV

For those applications that do not need storage-level replication, LocalPV may be a good option as it gives higher performance. OpenEBS LocalPV is similar to Kubernetes LocalPV except that it is dynamically provisioned by the OpenEBS control plane, just like any other regular PV. OpenEBS LocalPV is of two types - `hostpath` LocalPV or `device` LocalPV. `hostpath` LocalPV refers to a subdirectory on the host and `device` LocalPV refers to a discovered disk (either directly attached or network attached) on the node. OpenEBS has introduced a LocalPV provisioner for selecting a matching disk or hostpath against some criteria in PVC and storage class specifications. For more details on OpenEBS LocalPV, see [here](/1.0.0-RC2/docs/next/localpv.html).
For those applications that do not need storage-level replication, LocalPV may be a good option as it gives higher performance. OpenEBS LocalPV is similar to Kubernetes LocalPV except that it is dynamically provisioned by the OpenEBS control plane, just like any other regular PV. OpenEBS LocalPV is of two types - `hostpath` LocalPV or `device` LocalPV. `hostpath` LocalPV refers to a subdirectory on the host and `device` LocalPV refers to a discovered disk (either directly attached or network attached) on the node. OpenEBS has introduced a LocalPV provisioner for selecting a matching disk or hostpath against some criteria in PVC and storage class specifications. For more details on OpenEBS LocalPV, see [here](/docs/next/localpv.html).
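A minimal `hostpath` LocalPV StorageClass, as a hedged sketch; the class name and the `BasePath` value are assumptions to adjust for your nodes.

```yaml
# Illustrative hostpath LocalPV StorageClass. The class name and BasePath are
# placeholders; adjust them to the directories available on your nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-example
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```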



@@ -143,13 +143,13 @@ Node Disk Manager components, volume pods, and other persistent storage structur
## See Also:


### [Understanding cStor](/1.0.0-RC2/docs/next/cstor.html)
### [Understanding cStor](/docs/next/cstor.html)

### [Understanding Jiva ](/1.0.0-RC2/docs/next/jiva.html)
### [Understanding Jiva ](/docs/next/jiva.html)

### [Understanding Local PV](/1.0.0-RC2/docs/next/localpv.html)
### [Understanding Local PV](/docs/next/localpv.html)

### [Understanding NDM](/1.0.0-RC2/docs/next/ndm.html)
### [Understanding NDM](/docs/next/ndm.html)


<br>
2 changes: 1 addition & 1 deletion docs/cas.md
@@ -59,7 +59,7 @@ Similar to hyperconverged systems, storage and performance of a volume in CAS is

## See Also:

### [OpenEBS architecture](/1.0.0-RC2/docs/next/architecture.html)
### [OpenEBS architecture](/docs/next/architecture.html)

### [CAS blog on CNCF](https://www.cncf.io/blog/2018/04/19/container-attached-storage-a-primer/)

12 changes: 6 additions & 6 deletions docs/cassandra.md
@@ -65,13 +65,13 @@ As shown above, OpenEBS volumes need to be configured with three replicas for hi

1. **Install OpenEBS**

If OpenEBS is not installed in your K8s cluster, this can be done from [here](/1.0.0-RC2/docs/next/installation.html). If OpenEBS is already installed, go to the next step.
If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/next/installation.html). If OpenEBS is already installed, go to the next step.

2. **Connect to MayaOnline (Optional)** : Connecting the Kubernetes cluster to <a href="https://mayaonline.io" target="_blank">MayaOnline</a> provides good visibility of storage resources. MayaOnline has various **support options for enterprise customers**.

3. **Configure cStor Pool**

After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/1.0.0-RC2/docs/next/ugcstor.html#creating-cStor-storage-pools). During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named **openebs-config.yaml** for configuring the cStor pool is provided in the Configuration details below. If the cStor pool is already configured, go to the next step.
After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/next/ugcstor.html#creating-cStor-storage-pools). During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named **openebs-config.yaml** for configuring the cStor pool is provided in the Configuration details below. If the cStor pool is already configured, go to the next step.

4. **Create Storage Class**

@@ -106,7 +106,7 @@ It is not seamless to increase the cStor volume size (refer to the roadmap item)

**Monitor cStor Pool size**

As in most cases, the cStor pool may not be dedicated to just the Cassandra database alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See [cStorPool metrics](/1.0.0-RC2/docs/next/ugcstor.html#monitor-pool).
As in most cases, the cStor pool may not be dedicated to just the Cassandra database alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See [cStorPool metrics](/docs/next/ugcstor.html#monitor-pool).
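One way to watch that threshold is a Prometheus alerting rule along the lines of the sketch below; the metric name is an assumption and should be verified against the metrics actually exposed by the OpenEBS pool exporter in your deployment.

```yaml
# Illustrative Prometheus rule for the 80% pool-capacity threshold. The metric
# name openebs_used_pool_capacity_percent is an assumption; confirm it against
# the exporter's /metrics endpoint before relying on this rule.
groups:
  - name: openebs-cstor-pool.rules
    rules:
      - alert: CStorPoolNearlyFull
        expr: openebs_used_pool_capacity_percent > 80
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "cStor pool capacity is above 80%; consider adding disks to the pool"
```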



@@ -276,11 +276,11 @@ spec:

<br>

### [OpenEBS architecture](/1.0.0-RC2/docs/next/architecture.html)
### [OpenEBS architecture](/docs/next/architecture.html)

### [OpenEBS use cases](/1.0.0-RC2/docs/next/usecases.html)
### [OpenEBS use cases](/docs/next/usecases.html)

### [cStor pools overview](/1.0.0-RC2/docs/next/cstor.html#cstor-pools)
### [cStor pools overview](/docs/next/cstor.html#cstor-pools)



12 changes: 6 additions & 6 deletions docs/eleasticsearch.md
@@ -55,13 +55,13 @@ Advantages of using OpenEBS for ElasticSearch database:

1. **Install OpenEBS**

If OpenEBS is not installed in your K8s cluster, this can be done from [here](/1.0.0-RC2/docs/next/installation.html). If OpenEBS is already installed, go to the next step.
If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/next/installation.html). If OpenEBS is already installed, go to the next step.

2. **Connect to MayaOnline (Optional)** : Connecting the Kubernetes cluster to <a href="https://mayaonline.io" target="_blank">MayaOnline</a> provides good visibility of storage resources. MayaOnline has various **support options for enterprise customers**.

3. **Configure cStor Pool**

After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/1.0.0-RC2/docs/next/ugcstor.html#creating-cStor-storage-pools). During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named **openebs-config.yaml** for configuring the cStor pool is provided in the Configuration details below. If the cStor pool is already configured, go to the next step.
After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/next/ugcstor.html#creating-cStor-storage-pools). During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named **openebs-config.yaml** for configuring the cStor pool is provided in the Configuration details below. If the cStor pool is already configured, go to the next step.

4. **Create Storage Class**

@@ -117,7 +117,7 @@ It is not seamless to increase the cStor volume size (refer to the roadmap item)

**Monitor cStor Pool size**

As in most cases, the cStor pool may not be dedicated to just the Elasticsearch database alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See [cStorPool metrics](/1.0.0-RC2/docs/next/ugcstor.html#monitor-pool).
As in most cases, the cStor pool may not be dedicated to just the Elasticsearch database alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See [cStorPool metrics](/docs/next/ugcstor.html#monitor-pool).



@@ -198,11 +198,11 @@ reclaimPolicy: Delete

<br>

### [OpenEBS architecture](/1.0.0-RC2/docs/next/architecture.html)
### [OpenEBS architecture](/docs/next/architecture.html)

### [OpenEBS use cases](/1.0.0-RC2/docs/next/usecases.html)
### [OpenEBS use cases](/docs/next/usecases.html)

### [cStor pools overview](/1.0.0-RC2/docs/next/cstor.html#cstor-pools)
### [cStor pools overview](/docs/next/cstor.html#cstor-pools)



10 changes: 5 additions & 5 deletions docs/faq.md
@@ -481,7 +481,7 @@ As of 0.8.0, the user is allowed to create PVCs that cross the available capacit

<h3><a class="anchor" aria-hidden="true" id="what-is-the-difference-between-cstor-pool-creation-using-manual-method-and-auto-method"></a>What is the difference between cStor Pool creation using manual method and auto method?</h3>

By using the manual method, you must give the selected disk name which is listed by NDM. These details have to be entered in the StoragePoolClaim YAML under `diskList`. See [storage pool](/docs/next/setupstoragepools.html#by-using-manual-method) for more info.
By using the manual method, you must give the selected disk name which is listed by NDM. These details have to be entered in the StoragePoolClaim YAML under `diskList`. See [storage pool](/docs/next/ugcstor.html#creating-cStor-storage-pools) for more info.
It is also possible to change the `maxPools` count and the `poolType` in the StoragePoolClaim YAML.
Consider you have 4 nodes with 2 disks each. If you select a `maxPools` count of 3, the cStor pool will be created on any 3 of the 4 nodes. The remaining disks belonging to the 4th node can be used for horizontal scale-up in the future.
The advantage is that there is no restriction on the number of disks for creating a cStor storage pool of the `striped` or `mirrored` type.
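A hedged sketch of the manual method described above; the disk names under `diskList` are placeholders and must be replaced with the names reported by NDM on your nodes.

```yaml
# Illustrative manual-method StoragePoolClaim. The entries under diskList are
# placeholder names; substitute the disk resources listed by NDM.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-manual-pool
spec:
  name: cstor-manual-pool
  type: disk
  poolSpec:
    poolType: striped
  disks:
    diskList:
      - disk-0123456789abcdef0123456789abcdef
      - disk-abcdef0123456789abcdef0123456789
```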
@@ -618,13 +618,13 @@ No. It is recommended to create different BDC name for claiming an unclaimed dis

## See Also:

### [Creating cStor Pool](/1.0.0-RC2/docs/next/ugcstor.html#creating-cStor-storage-pools)
### [Creating cStor Pool](/docs/next/ugcstor.html#creating-cStor-storage-pools)

### [Provisioning cStor volumes](/1.0.0-RC2/docs/next/ugcstor.html#provisioning-a-cStor-volume)
### [Provisioning cStor volumes](/docs/next/ugcstor.html#provisioning-a-cStor-volume)

### [BackUp and Restore](/1.0.0-RC2/docs/next/backup.html)
### [BackUp and Restore](/docs/next/backup.html)

### [Uninstall](/1.0.0-RC2/docs/next/uninstall.html)
### [Uninstall](/docs/next/uninstall.html)

<br>

8 changes: 4 additions & 4 deletions docs/features.md
@@ -28,7 +28,7 @@ sidebar_label: Features and Benefits



For more information on how OpenEBS is used in cloud native environments, visit the [use cases](/1.0.0-RC2/docs/next/usecases.html) section.
For more information on how OpenEBS is used in cloud native environments, visit the [use cases](/docs/next/usecases.html) section.



@@ -183,11 +183,11 @@ MayaOnline is the SaaS service for OpenEBS enabled Kubernetes clusters that prov

## See Also:

### [Object Storage with OpenEBS](/1.0.0-RC2/docs/next/minio.html)
### [Object Storage with OpenEBS](/docs/next/minio.html)

### [RWM PVs with OpenEBS](/1.0.0-RC2/docs/next/rwm.html)
### [RWM PVs with OpenEBS](/docs/next/rwm.html)

### [Local storage for Prometheus ](/1.0.0-RC2/docs/next/prometheus.html)
### [Local storage for Prometheus ](/docs/next/prometheus.html)

<br>

14 changes: 7 additions & 7 deletions docs/gitlab.md
@@ -57,13 +57,13 @@ GitLab is a good solution for building On-Premise cloud native CI/CD platforms,

1. **Install OpenEBS**

If OpenEBS is not installed in your K8s cluster, this can be done from [here](/1.0.0-RC2/docs/next/installation.html). If OpenEBS is already installed, go to the next step.
If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/next/installation.html). If OpenEBS is already installed, go to the next step.

2. **Connect to MayaOnline (Optional)** : Connecting the Kubernetes cluster to <a href="https://mayaonline.io" target="_blank">MayaOnline</a> provides good visibility of storage resources. MayaOnline has various **support options for enterprise customers**.

3. **Configure cStor Pool**

After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/1.0.0-RC2/docs/next/ugcstor.html#creating-cStor-storage-pools). During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named **openebs-config.yaml** for configuring the cStor pool is provided in the Configuration details below. If the cStor pool is already configured, go to the next step.
After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/next/ugcstor.html#creating-cStor-storage-pools). During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named **openebs-config.yaml** for configuring the cStor pool is provided in the Configuration details below. If the cStor pool is already configured, go to the next step.

4. **Create Storage Class**

@@ -108,11 +108,11 @@ It is not seamless to increase the cStor volume size (refer to the roadmap item)

**Monitor cStor Pool size**

As in most cases, the cStor pool may not be dedicated to just GitLab's databases alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See [cStorPool metrics](/1.0.0-RC2/docs/next/ugcstor.html#monitor-pool).
As in most cases, the cStor pool may not be dedicated to just GitLab's databases alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See [cStorPool metrics](/docs/next/ugcstor.html#monitor-pool).

**Maintain volume replica quorum during node upgrades**

cStor volume replicas need to be in quorum when applications are deployed as a `deployment` and the cStor volume is configured to have `3 replicas`. Node reboots may be common during a Kubernetes upgrade. Maintain volume replica quorum in such instances. See [here](/1.0.0-RC2/docs/next/k8supgrades.html) for more details.
cStor volume replicas need to be in quorum when applications are deployed as a `deployment` and the cStor volume is configured to have `3 replicas`. Node reboots may be common during a Kubernetes upgrade. Maintain volume replica quorum in such instances. See [here](/docs/next/k8supgrades.html) for more details.

<br>

@@ -185,11 +185,11 @@ reclaimPolicy: Delete

<br>

### [OpenEBS architecture](/1.0.0-RC2/docs/next/architecture.html)
### [OpenEBS architecture](/docs/next/architecture.html)

### [OpenEBS use cases](/1.0.0-RC2/docs/next/usecases.html)
### [OpenEBS use cases](/docs/next/usecases.html)

### [cStor pools overview](/1.0.0-RC2/docs/next/cstor.html#cstor-pools)
### [cStor pools overview](/docs/next/cstor.html#cstor-pools)



10 changes: 7 additions & 3 deletions docs/installation.md
@@ -22,12 +22,12 @@ sidebar_label: Installation
- Set Kubernetes [admin context](#set-cluster-admin-user-context-and-rbac) and RBAC

- Installation
- **[helm](/docs/next/installation.html#installation-through-helm) chart** `(or)`
- **[helm](#installation-through-helm) chart** `(or)`
- **[kubectl yaml](#installation-through-kubectl) spec file**

- [Verify](#verifying-openebs-installation) installation

- Installation [troubleshooting](/1.0.0-RC2/docs/next/troubleshooting.html#installation)
- Installation [troubleshooting](/docs/next/troubleshooting.html#installation)

- [Post installation](#post-installation-considerations)

@@ -41,7 +41,7 @@

<br>

The iSCSI client is a prerequisite for provisioning cStor and Jiva volumes. It is therefore recommended that the [iSCSI client is set up](/1.0.0-RC2/docs/next/prerequisites.html) and the iscsid service is running on the worker nodes before proceeding with the OpenEBS installation.
The iSCSI client is a prerequisite for provisioning cStor and Jiva volumes. It is therefore recommended that the [iSCSI client is set up](/docs/next/prerequisites.html) and the iscsid service is running on the worker nodes before proceeding with the OpenEBS installation.

<br>

@@ -160,6 +160,8 @@ As a next step [verify](#verifying-openebs-installation) your installation and d

<br>

<hr>

## Installation through kubectl

In the **default installation mode**, use the following command to install OpenEBS. OpenEBS is installed in the openebs namespace.
@@ -229,6 +231,8 @@ See an example configuration [here](#example-diskfilter-yaml)

<br>



<font size="5">Configure Environmental Variable</font>

Some of the configurations related to cStor Target, default cStor sparse pool, Local PV Basepath, etc can be configured as environmental variable in the corresponding deployment specification.
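For illustration, such a setting is an ordinary container `env` entry in the operator's deployment specification; the variable name below is an assumption and should be checked against the openebs-operator YAML shipped with your release.

```yaml
# Illustrative fragment of the maya-apiserver Deployment spec: disabling the
# default cStor sparse pool through an environment variable. The variable name
# is an assumption; verify it against the operator YAML for your release.
spec:
  template:
    spec:
      containers:
        - name: maya-apiserver
          env:
            - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
              value: "false"
```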