Update minio.md
Updated with content similar to the Prometheus page
Signed-off-by: ranjithwingrider <ranjith.raveendran@openebs.io>
ranjithwingrider committed Feb 12, 2019
1 parent a776d8f commit 0258edf
Showing 1 changed file with 56 additions and 81 deletions: docs/minio.md

sidebar_label: Minio
---

<img src="/docs/assets/o-minio.png" alt="OpenEBS and Minio" style="width:400px;">

## Introduction

Minio is an object storage server released under Apache License v2.0. It is best suited for storing unstructured data such as photos, videos, log files, backups, and container/VM images. The size of an object can range from a few KB to a maximum of 5 TB. In this solution, a Minio server pod consumes an OpenEBS cStor volume to store this data as object storage in a Kubernetes cluster.

## Deployment model

<img src="/docs/assets/minio-deployment.png" alt="OpenEBS and Minio" style="width:1000px;">

## Configuration workflow

1. **Install OpenEBS**

If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/next/installation.html). If OpenEBS is already installed, go to the next step.

2. **Connect to MayaOnline (Optional)**

Connecting the Kubernetes cluster to [MayaOnline](https://staging-docs.openebs.io/docs/next/app.mayaonline.io) provides good visibility of storage resources. MayaOnline has various **support options for enterprise customers**.

3. **Configure cStor Pool**

After OpenEBS installation, the cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/next/configurepools.html). A sample YAML named **openebs-config.yaml** for configuring a cStor pool is provided in the Configuration details below. During cStor pool creation, make sure that the maxPools parameter is set to >=4. If a cStor pool is already configured, go to the next step.

4. **Create Storage Class**

You must configure a StorageClass to provision cStor volumes on a given cStor pool. The StorageClass is the interface through which most of the OpenEBS storage policies are defined. In this solution we use a StorageClass to consume the cStor pool, which is created using external disks attached to the nodes. Since Minio is a deployment, it requires high availability of data, so the cStor volume `replicaCount` is >=4. A sample YAML named **openebs-sc-disk.yaml** to consume the cStor pool with a cStor volume replica count of 4 is provided in the Configuration details below.

5. **Configure PVC**

Minio needs only one volume to store the data, with a replication factor of 4. See **minio-pv-claim.yaml** in the Configuration details section below.

6. **Launch and test Minio**

A sample **minio.yaml** file is provided in the Configuration details section. Apply it to deploy Minio object storage with OpenEBS.

```
kubectl apply -f minio.yaml
```

Alternatively, you can deploy Minio in your cluster using the stable Minio helm chart with the following command.

```
helm install --set accessKey=minio,secretKey=minio123 --set persistence.storageClass=openebs-cstor-disk stable/minio
```

For more information on deploying Minio on Kubernetes, see the Minio [documentation](https://docs.minio.io/docs/deploy-minio-on-kubernetes).

**Verify Minio pods**

Run the following command to get the status of the Minio pods.

```
kubectl get pods
```

Following is an example output.

```
NAME                                READY     STATUS    RESTARTS   AGE
minio-deployment-64d7c79464-wldr5   1/1       Running   0          54s
```
**Verify Minio services**

Run the following command to get the service details of Minio.

```
kubectl get svc
```

Following is an example output.

```
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.15.240.1     <none>        443/TCP          14m
minio-service   NodePort    10.15.250.174   <none>        9000:32701/TCP   1m
```

## Reference at [openebs.ci](https://openebs.ci/)

A live deployment of Minio using OpenEBS volumes as highly available object storage can be seen at [www.openebs.ci](https://openebs.ci/).

Deployment YAML spec files for Minio and OpenEBS resources are found [here]()

[OpenEBS-CI dashboard of Minio]()

[Live access to Minio dashboard]()
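The `minio-service` NodePort shown earlier maps container port 9000 to node port 32701 on every node. Composing the externally reachable endpoint from a node's external IP and the PORT(S) column can be sketched in Python (the function name is illustrative, not part of OpenEBS or Minio tooling):

```python
def nodeport_endpoint(node_external_ip: str, ports: str) -> str:
    """Build the external URL for a NodePort service.

    `ports` is the PORT(S) column from `kubectl get svc`,
    e.g. "9000:32701/TCP" (containerPort:nodePort/protocol).
    """
    node_port = ports.split(":")[1].split("/")[0]
    return "http://{}:{}".format(node_external_ip, node_port)

print(nodeport_endpoint("35.188.69.194", "9000:32701/TCP"))
# http://35.188.69.194:32701
```

The same node port works against any node's external IP, since NodePort services listen on every node in the cluster.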

## Verify Successful Deployment of Minio

Minio is deployed with a NodePort service, so the Minio UI can be accessed using the external IP of any node and the corresponding service port. You can get the node details using the following command.

```
kubectl get nodes -o wide
```

Following is an example output.

```
NAME                                           STATUS    ROLES     AGE       VERSION         EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
gke-ranjith-minio-default-pool-b4985804-0qp7   Ready     <none>    14m       v1.11.6-gke.2   35.188.69.194    Ubuntu 18.04.1 LTS   4.15.0-1023-gcp   docker://17.3.2
gke-ranjith-minio-default-pool-b4985804-ff07   Ready     <none>    14m       v1.11.6-gke.2   104.154.176.87   Ubuntu 18.04.1 LTS   4.15.0-1023-gcp   docker://17.3.2
gke-ranjith-minio-default-pool-b4985804-kcvv   Ready     <none>    14m       v1.11.6-gke.2   35.192.103.51    Ubuntu 18.04.1 LTS   4.15.0-1023-gcp   docker://17.3.2
```

The external IP of one of the nodes is 35.188.69.194, so the Minio object storage can be accessed at 35.188.69.194:32701.

![Home](/docs/assets/Home.PNG)

Enter "minio" as the Access Key and "minio123" as the Secret Key.

![home_key](/docs/assets/Home1.PNG)

You can create a bucket using the "+" button at the bottom left.

![bucket](/docs/assets/bucket.PNG)

You can upload a file using the "Upload" button.

![upload-button](/docs/assets/Upload_button.PNG)

Verify that the upload is successful.

![finalfile](/docs/assets/Uploaded.PNG)

## Post deployment Operations

**Monitor OpenEBS Volume size**

It is not seamless to increase the cStor volume size (refer to the roadmap item), so it is recommended to allocate sufficient size during the initial configuration. However, an alert can be set up for a volume size threshold using MayaOnline.

**Monitor cStor Pool size**

In most cases, the cStor pool may not be dedicated to Minio object storage alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold.

## Best Practices:

**Maintain volume replica quorum always**

**Maintain cStor pool used capacity below 80%**

## Troubleshooting Guidelines

**Read-Only volume**

**Snapshots were failing**
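The 80% pool-capacity guideline from the Post deployment Operations section above can be encoded as a simple check in a monitoring script. This is only a sketch; the usage figures you feed it would come from your own pool monitoring (for example, MayaOnline or the cStor pool CRs), and the function name is illustrative:

```python
def pool_needs_attention(used_bytes, capacity_bytes, threshold=0.80):
    """Return True when cStor pool usage crosses the recommended threshold.

    Per the best practice above, disks should be added to the pool
    before used capacity reaches 80% of the pool size.
    """
    if capacity_bytes <= 0:
        raise ValueError("capacity_bytes must be positive")
    return used_bytes / capacity_bytes >= threshold

# Example: a 100 GiB pool with 85 GiB used has crossed the 80% threshold.
print(pool_needs_attention(85 * 2**30, 100 * 2**30))
# True
```

A check like this can feed an alerting hook so disks are added well before the pool fills up.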



metadata:
      - name: StoragePoolClaim
        value: "cstor-disk"
      - name: ReplicaCount
        value: "4"
provisioner: openebs.io/provisioner-iscsi
reclaimPolicy: Delete
---
```
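The fragment above shows only the tail of the StorageClass spec. For orientation, here is a complete **openebs-sc-disk.yaml** sketch, with the annotation layout as used in OpenEBS 0.8-era documentation; treat the field values as assumptions to verify against your install:

```
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-disk
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk"
      - name: ReplicaCount
        value: "4"
provisioner: openebs.io/provisioner-iscsi
reclaimPolicy: Delete
---
```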

**minio-pv-claim.yaml**

```
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-pv-claim
  labels:
    app: minio-storage-claim
spec:
  storageClassName: openebs-cstor-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50G
---
```
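Note that `50G` in the claim above is a decimal quantity (50 × 10⁹ bytes); Kubernetes also accepts binary suffixes such as `Gi` (2³⁰ bytes), which is roughly 7% larger per unit. A small Python sketch of the difference (the helper name is illustrative):

```python
def quantity_to_bytes(quantity):
    """Convert a Kubernetes resource quantity like "50G" or "50Gi" to bytes.

    Only the common suffixes used in storage requests are handled here.
    """
    units = {"Gi": 2**30, "Mi": 2**20, "G": 10**9, "M": 10**6}
    # Check two-letter binary suffixes before one-letter decimal ones.
    for suffix in sorted(units, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * units[suffix]
    return int(quantity)

print(quantity_to_bytes("50G"))   # 50000000000
print(quantity_to_bytes("50Gi"))  # 53687091200
```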

**minio.yaml**

```
spec:
mountPath: "/home/username"
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
