diff --git a/docs/graffle/architecture-r.graffle b/docs/graffle/architecture-r.graffle new file mode 100644 index 0000000000..ef4395b5a7 Binary files /dev/null and b/docs/graffle/architecture-r.graffle differ diff --git a/docs/graffle/architecture-rw.graffle b/docs/graffle/architecture-rw.graffle new file mode 100644 index 0000000000..142d9a982b Binary files /dev/null and b/docs/graffle/architecture-rw.graffle differ diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml index 97985c5253..ddc98c4c75 100644 --- a/docs/mkdocs.yml +++ b/docs/mkdocs.yml @@ -18,3 +18,10 @@ markdown_extensions: nav: - Cloud Native PostgreSQL: 'index.md' + - Before you start: 'before_you_start.md' + - Architecture: 'architecture.md' + - Quickstart: 'quickstart.md' + - Custom Resource Definitions: 'crd.md' + - Configuration samples: 'samples.md' + - End-to-end tests: 'e2e.md' + - Credits and license: 'credits.md' diff --git a/docs/src/architecture.md b/docs/src/architecture.md new file mode 100644 index 0000000000..565bc613dd --- /dev/null +++ b/docs/src/architecture.md @@ -0,0 +1,114 @@ +For High Availability goals, the PostgreSQL database management system provides administrators with built-in **physical replication** capabilities based on **Write Ahead Log (WAL) shipping**. + +PostgreSQL supports both asynchronous and synchronous streaming replication, as well as asynchronous file-based log shipping (normally used as fallback option, for example to store WAL files in an object store). Replicas are normally called *standby servers* and can also be used for read-only workloads thanks to the *Hot Standby* feature. 
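+
+For reference, on a plain PostgreSQL 12 installation a streaming replica is set up
+roughly as follows: an empty `standby.signal` file in the data directory marks the
+instance as a standby, while `postgresql.conf` points it to the primary. The
+connection values below are illustrative only, and the operator takes care of all
+of this for you:
+
+```ini
+# postgresql.conf on a standby (plus an empty standby.signal file in PGDATA)
+primary_conninfo = 'host=primary-host user=replicator'
+hot_standby = on
+```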
+
+Cloud Native PostgreSQL currently supports clusters based on asynchronous streaming replication to manage multiple hot standby replicas, with the following specifications:
+
+* One primary, with optional multiple hot standby replicas for High Availability
+* Available services for applications:
+    * `-rw`: applications connect to the only primary instance of the cluster
+    * `-r`: applications connect to any of the instances for read-only workloads
+* Shared-nothing architecture recommended for better resilience of the PostgreSQL cluster:
+    * PostgreSQL instances should reside on different Kubernetes worker nodes and share only the network
+    * PostgreSQL instances can reside in different availability zones in the same region
+    * All nodes of a PostgreSQL cluster should reside in the same region
+
+### Read-write workloads
+
+Applications can decide to connect to the PostgreSQL instance elected as *current primary*
+by the Kubernetes operator, as depicted in the following diagram:
+
+![Applications writing to the single primary](./images/architecture-rw.png)
+
+Applications can simply connect to the service with the `-rw` suffix.
+
+In case of temporary or permanent unavailability of the primary, Kubernetes
+will move the `-rw` service to another instance of the cluster for high
+availability purposes.
+
+### Read-only workloads
+
+!!! Important
+    Applications must be aware of the limitations that [Hot Standby](https://www.postgresql.org/docs/current/hot-standby.html)
+    presents and be familiar with the way PostgreSQL operates when dealing with these workloads.
+
+Applications can access any PostgreSQL instance at any time through the `-r`
+service made available by the operator.
+ +The following diagram shows the architecture: + +![Applications reading from any instance in round robin](./images/architecture-r.png) + +## Application deployments + +Applications are supposed to work with the services created by Cloud Native PostgreSQL +in the same Kubernetes cluster: + +* `[cluster name]-rw` +* `[cluster name]-r` + +Those services are entirely managed by the Kubernetes cluster and +implement a form of Virtual IP as described in the +["Service" page of the Kubernetes Documentation](https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies). + +!!! Hint + It is highly recommended to use those services in your applications, + and avoid connecting directly to a specific PostgreSQL instance, as the latter + can change during the cluster lifetime. + +You can use these services in your applications through: + +* DNS resolution +* environment variables + +As far as the credentials to connect to PostgreSQL are concerned, you can +use the secrets generated by the operator. + +!!! Warning + The operator will create another service, named `[cluster name]-any`. That + service is used internally to manage PostgreSQL instance discovery. + It's not supposed to be used directly by applications. + +### DNS resolution + +You can use the Kubernetes DNS service, which is required by this operator, +to point to a given server. +You can do that by just using the name of the service if the application is +deployed in the same namespace as the PostgreSQL cluster. +In case the PostgreSQL cluster resides in a different namespace, you can use the +full qualifier: `service-name.namespace-name`. + +DNS is the preferred and recommended discovery method. + +### Environment variables + +If you deploy your application in the same namespace that contains the +PostgreSQL cluster, you can also use environment variables to connect to the database. 
+
+For example, if your PostgreSQL cluster is called `pg-database`,
+you can use the following environment variables in your applications:
+
+* `PG_DATABASE_R_SERVICE_HOST`: the IP address of the service
+  pointing to all the PostgreSQL instances for read-only workloads
+
+* `PG_DATABASE_RW_SERVICE_HOST`: the IP address of the
+  service pointing to the *primary* instance of the cluster
+
+### Secrets
+
+The PostgreSQL operator will generate a secret for every PostgreSQL cluster it deploys.
+That secret contains the passwords for the `postgres` user as well as for
+the *owner* of the database.
+
+The generated secret has the same name as the `Cluster` resource created
+in Kubernetes, and contains two entries:
+
+* `postgresPassword` - containing the password of the `postgres` user, which
+  is the superuser defined in the instance;
+
+* `ownerPassword` - containing the password of the user owning the database.
+
+The latter credentials are the ones which should be used by the applications
+connecting to the PostgreSQL cluster.
+
+The former is supposed to be used only for administrative purposes.
diff --git a/docs/src/before_you_start.md b/docs/src/before_you_start.md
new file mode 100644
index 0000000000..1b0121edcf
--- /dev/null
+++ b/docs/src/before_you_start.md
@@ -0,0 +1,43 @@
+Before we get started, it is important to go over some terminology that is
+specific to Kubernetes and PostgreSQL.
+
+## Kubernetes terminology
+
+| Resource | Description |
|----------|-------------|
| [Node](https://kubernetes.io/docs/concepts/architecture/nodes/) | A *node* is a worker machine in Kubernetes, either virtual or physical, where all services necessary to run pods are managed by the master(s). |
| [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/) | A *pod* is the smallest computing unit that can be deployed in a Kubernetes cluster, and is composed of one or more containers that share network and storage. |
| [Service](https://kubernetes.io/docs/concepts/services-networking/service/) | A *service* is an abstraction that exposes as a network service an application running on a group of pods, and standardizes important features such as service discovery across applications, load balancing, failover, and so on. |
| [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) | A *secret* is an object that is designed to store small amounts of sensitive data such as passwords, access keys or tokens, and to use them in pods. |
| [Storage Class](https://kubernetes.io/docs/concepts/storage/storage-classes/) | A *storage class* allows an administrator to define the classes of storage in a cluster, including provisioner (such as AWS EBS), reclaim policies, mount options, volume expansion, and so on. |
| [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) | A *persistent volume* (PV) is a resource in a Kubernetes cluster that represents storage that has been either manually provisioned by an administrator or dynamically provisioned by a *storage class* controller. A PV is associated with a pod using a *persistent volume claim*, and its lifecycle is independent of any pod that uses it. Normally, a PV is a network volume, especially in the Public Cloud. A [*local persistent volume* (LPV)](https://kubernetes.io/docs/concepts/storage/volumes/#local) is a persistent volume that exists only on the particular node where the pod that uses it is running. |
| [Persistent Volume Claim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) | A *persistent volume claim* (PVC) represents a request for storage, which might include size, access mode, or a particular storage class. Similarly to how a pod consumes node resources, a PVC consumes the resources of a PV. |
| [Namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) | A *namespace* is a logical and isolated subset of a Kubernetes cluster, and can be seen as a *virtual cluster* within the wider physical cluster. Namespaces allow administrators to create separate environments, based on projects, departments, teams, and so on. |
| [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) | *Role Based Access Control* (RBAC), also known as *role-based security*, is a method used in computer systems security to restrict access to the network and resources of a system to authorized users only.
Kubernetes has a native API to control roles at namespace and cluster level and associate them with specific resources and individuals. |
| [CRD](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) | A *custom resource definition* (CRD) is an extension of the Kubernetes API that allows developers to create new data types and objects, *called custom resources*. |
| [Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) | An *operator* is a software extension that uses custom resources to automate those steps that are normally performed by a human operator when managing one or more applications or given services. An operator assists Kubernetes in making sure that the defined state of the resource always matches the observed one. |
| [kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) | `kubectl` is the command line tool used to manage a Kubernetes cluster. |
+
+Cloud Native PostgreSQL requires Kubernetes 1.15 or higher.
+
+## PostgreSQL terminology
+
+| Resource | Description |
|----------|-------------|
| Instance | A Postgres server process running and listening on a pair of an IP address and a TCP port (usually 5432). |
| Primary | A PostgreSQL instance that can accept both read and write operations. |
| Replica | A PostgreSQL instance that is replicating from the only primary instance in a cluster and is kept updated by reading a stream of Write-Ahead Log (WAL) records. A replica is also known as a *standby* or *secondary* server. PostgreSQL relies on both physical streaming replication (async/sync) and file-based log shipping (async). |
| Hot Standby | PostgreSQL feature that allows a *replica* to accept read-only workloads. |
| Cluster | To be understood as a High Availability (HA) cluster: a set of PostgreSQL instances made up of a single primary and an optional arbitrary number of replicas. |
+
+## Cloud terminology
+
+| Resource | Description |
|----------|-------------|
| Region | A *region* in the Cloud is an isolated and independent geographic area organised in *availability zones*. Zones within a region have very little round-trip network latency. |
| Zone | An *availability zone* in the Cloud (also known as *zone*) is an area in a region where resources can be deployed. Usually, an availability zone corresponds to a data centre or an isolated building of the same data centre.
|
+
+## What to do next
+
+Now that you have familiarized yourself with the terminology, you can decide to
+[test Cloud Native PostgreSQL on your laptop using a local cluster](quickstart.md) before you deploy the operator in your selected cloud environment.
diff --git a/docs/src/crd.md b/docs/src/crd.md
new file mode 100644
index 0000000000..e37bee8f3b
--- /dev/null
+++ b/docs/src/crd.md
@@ -0,0 +1,81 @@
+This section describes the structure of a *Kubernetes manifest* to be used
+to instantiate a PostgreSQL cluster using the Cloud Native PostgreSQL Operator.
+
+A PostgreSQL cluster can be defined using a Kubernetes manifest in *YAML*, according to the structure declared by the `Cluster` Custom Resource Definition.
+
+At the top level, both individual parameters and parameter groups can be defined. Parameter names are written in camelCase.
+
+## PostgreSQL Cluster metadata
+
+Like any other object in Kubernetes, a PostgreSQL cluster has a `metadata` section which allows users to specify the following properties:
+
+- `namespace`: a DNS-compatible label used to group objects
+- `name`: a string that uniquely identifies this object within the current namespace in the Kubernetes cluster
+
+## PostgreSQL Cluster parameters
+
+A PostgreSQL cluster object can be defined through the following parameters available in the `spec` key of the manifest:
+
+- `affinity`: affinity/anti-affinity rules for Pods
+- `applicationConfiguration`: configuration of the PostgreSQL cluster (*required*)
+- `description`: description of the PostgreSQL cluster
+- `imageName`: name of the container image for PostgreSQL
+- `imagePullSecretName`: secret for pulling the PostgreSQL image
+- `instances`: number of instances required in the cluster, with `instances - 1` replicas (**required**)
+- `postgresql`: configuration of the PostgreSQL server (*required*)
+- `resources`: resource requirements of every generated Pod
+- `startDelay`: allowed time in seconds for a PostgreSQL instance to successfully start up
(default 30)
+- `stopDelay`: allowed time in seconds for a PostgreSQL instance to gracefully shut down (default 30)
+- `storage`: configuration of the storage of PostgreSQL instances
+
+## Application configuration
+
+Application-oriented information, such as the database name, is delegated to the `applicationConfiguration` section of the manifest, with the following mandatory parameters, in alphabetical order:
+
+- `database`: name of the PostgreSQL database in the cluster (e.g. `app`)
+- `owner`: name of the owner of the PostgreSQL database
+
+## PostgreSQL server configuration
+
+Each PostgreSQL instance can be configured in the `postgresql` section of the manifest, through the following mandatory options:
+
+- `parameters`: PostgreSQL configuration options to be added to the `postgresql.conf` file
+- `pg_hba`: PostgreSQL Host Based Authentication rules, as an array of lines to be appended to the `pg_hba.conf` file
+
+## Resources
+
+Cloud Native PostgreSQL allows administrators to control and manage resource usage by the pods of the cluster,
+through the `resources` section of the manifest, with two knobs:
+
+- `requests`: initial requirement
+- `limits`: maximum usage, in case of dynamic increase of resource needs
+
+For example, you can request an initial amount of RAM of 32MiB (scalable to 128MiB) and 50m of CPU (scalable to 100m) as follows:
+
+```yaml
+  resources:
+    requests:
+      memory: "32Mi"
+      cpu: "50m"
+    limits:
+      memory: "128Mi"
+      cpu: "100m"
+```
+
+[//]: # ( TODO: we may want to explain what happens to a pod that exceeds the resource limits: CPU -> throttle; MEMORY -> kill )
+
+!!! Seealso "Managing Compute Resources for Containers"
+    For more details on resource management, please refer to the
+    ["Managing Compute Resources for Containers"](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
+    page from the Kubernetes documentation.
+ +## Storage configuration + +- `pvcTemplate`: template to be used to generate the Persistent Volume Claim +- `size`: size of the storage (*required* if not already specified in the PVC template) +- `storageClass`: `StorageClass` to use to contain PostgreSQL database (aka `PGDATA`); + the storage class is applied after evaluating the PVC template, if available + +!!! Seealso "See also" + Please refer to the ["Configuration samples" page](samples.md) for examples on storage configuration. + diff --git a/docs/src/credits.md b/docs/src/credits.md new file mode 100644 index 0000000000..529962f2ac --- /dev/null +++ b/docs/src/credits.md @@ -0,0 +1,13 @@ +Cloud Native PostgreSQL has been designed, developed and tested by the 2ndQuadrant team: + +- Leonardo Cecchi +- Marco Nenciarini +- Jonathan Gonzalez +- Francesco Canovai +- Florin Irion +- Jonathan Battiato +- Niccolò Fei +- Gabriele Bartolini + +Copyright (C) 2019-2020 2ndQuadrant Italia SRL. Exclusively licensed to +2ndQuadrant Limited. diff --git a/docs/src/e2e.md b/docs/src/e2e.md new file mode 100644 index 0000000000..a8a47d8b95 --- /dev/null +++ b/docs/src/e2e.md @@ -0,0 +1,23 @@ +To ensure that Cloud Native PostgreSQL is able to act correctly while deploying +and managing PostgreSQL clusters, the operator is automatically tested after each +commit via a suite of **End-to-end (E2E) tests**. + +Moreover, the following Kubernetes versions are tested for each commit, +ensuring failure and bugs detection at an early stage of the development +process: + +* 1.17 +* 1.16 +* 1.15 + +For each tested version of Kubernetes, a Kubernetes cluster is created +using [kind](https://kind.sigs.k8s.io/), and the following suite of +E2E tests are performed on that cluster: + +* Installation of the operator; +* Creation of a `Cluster`; +* Usage of a persistent volume for data storage; +* Scale-up of a `Cluster`; +* Scale-down of a `Cluster`; +* Failover; +* Switchover. 
diff --git a/docs/src/images/architecture-r.png b/docs/src/images/architecture-r.png new file mode 100644 index 0000000000..912204b796 Binary files /dev/null and b/docs/src/images/architecture-r.png differ diff --git a/docs/src/images/architecture-rw.png b/docs/src/images/architecture-rw.png new file mode 100644 index 0000000000..0a7d74f12f Binary files /dev/null and b/docs/src/images/architecture-rw.png differ diff --git a/docs/src/index.md b/docs/src/index.md index bdcf73e4bf..b7b2f289f1 100644 --- a/docs/src/index.md +++ b/docs/src/index.md @@ -1,6 +1,34 @@ -**Cloud Native PostgreSQL** is a stack designed by [2ndQuadrant](https://www.2ndquadrant.com) to manage PostgreSQL -workloads on Kubernetes, particularly optimised for Private Cloud environments with Local Persistent Volumes (PV). +**Cloud Native PostgreSQL** is a stack designed by [2ndQuadrant](https://www.2ndquadrant.com) +to manage [PostgreSQL](https://www.postgresql.org/) workloads on [Kubernetes](https://kubernetes.io), +particularly optimised for Private Cloud environments with Local Persistent Volumes (PV). Cloud Native PostgreSQL defines a new Kubernetes resource called *Cluster* that represents a PostgreSQL cluster made up of a single primary and an optional number of replicas that co-exist in a chosen Kubernetes namespace. + +Currently only PostgreSQL 12 is supported. + +## Requirements + +Kubernetes 1.15 or higher, tested on AWS, Google, Azure (with multiple availability zones). 
+ +## Main features + +* Self-Healing capability, through: + * failover of the primary instance, by promoting the most aligned replica + * automated recreation of a replica +* Planned switchover of the primary instance, by promoting a selected replica +* Scale up/down capabilities +* Definition of an arbitrary number of instances (minimum 1 - one primary server) +* Definition of the *read-write* service, to connect your applications to the only primary server of the cluster +* Definition of the *read-only* service, to connect your applications to any of the instances for read workloads +* Support for Local Persistent Volumes with PVC templates +* Standard output logging of PostgreSQL error messages + +## About this guide + +Follow the instructions in the ["Quickstart"](quickstart.md) to test Cloud Native PostgreSQL +on a local Kubernetes cluster using Minikube or Kind. + +In case you are not familiar with some basic terminology on Kubernetes and PostgreSQL, +please consult the ["Before you start" section](before_you_start.md). diff --git a/docs/src/quickstart.md b/docs/src/quickstart.md new file mode 100644 index 0000000000..2bc8bb1d71 --- /dev/null +++ b/docs/src/quickstart.md @@ -0,0 +1,154 @@ +This section describes how to test a PostgreSQL cluster on your laptop/computer, +using a local Kubernetes cluster in +[Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) or +[Kind](https://kind.sigs.k8s.io/) via Cloud Native PostgreSQL. +Like any other Kubernetes application, Cloud Native PostgreSQL is deployed using +regular manifests written in YAML. + +!!! Warning + The instructions contained in this section are for demonstration, + testing and practice purposes only and must not be used in production. 
+
+Cloud Native PostgreSQL has been tested on two widespread tools for running
+Kubernetes locally, available on major platforms such as Linux, Mac OS X
+and Windows:
+
+- [Minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/)
+- [Kind](https://kind.sigs.k8s.io/)
+
+By following the instructions on this page, you should be able to start a PostgreSQL
+cluster on your local Kubernetes installation and experiment with it.
+
+!!! Important
+    Make sure that you have `kubectl` installed on your machine in order
+    to connect to the Kubernetes cluster.
+
+## Part 1 - Set up the local Kubernetes playground
+
+The first part is about installing Minikube and/or Kind. Please spend some time
+reading about the two systems and choosing which one to proceed with. Once you
+have set up one or the other, please proceed with Part 2.
+
+### Minikube
+
+Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a
+single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop, for
+users looking to try out Kubernetes or develop with it day-to-day. Normally, it
+is used in conjunction with VirtualBox.
+
+You can find more information on how to install Minikube in your local environment
+in the official [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/install-minikube).
+Once you have installed it, run the following command to create a Minikube cluster:
+
+```sh
+minikube start
+```
+
+This will create the Kubernetes cluster, and you will be ready to use it.
+Verify that it works with the following command:
+
+```sh
+kubectl get nodes
+```
+
+You will see one node called `minikube`.
+
+### Kind
+
+If you do not want to use a virtual machine hypervisor, Kind is a tool for running
+local Kubernetes clusters using Docker container "nodes" (indeed, Kind stands for
+"Kubernetes IN Docker").
+
+Install `kind` in your environment by following the instructions in the [Quickstart](https://kind.sigs.k8s.io/docs/user/quick-start),
+then create a Kubernetes cluster with:
+
+```sh
+kind create cluster --name pg
+```
+
+## Part 2 - Install Cloud Native PostgreSQL
+
+Now that you have a Kubernetes installation up and running on your laptop,
+you can proceed with the installation of Cloud Native PostgreSQL.
+
+Locate the latest release of Cloud Native PostgreSQL on the
+["Cloud Native PostgreSQL" page available in the 2ndQuadrant Portal](https://access.2ndquadrant.com/customer_portal/sw/cloud-native-postgresql/).
+Follow the installation instructions and run the `kubectl` command that you are presented with.
+
+!!! Important
+    Please contact your 2ndQuadrant account manager if you do not have access to the Kubernetes manifests of Cloud Native PostgreSQL.
+
+Once you have run the `kubectl` command, Cloud Native PostgreSQL will be installed in your Kubernetes cluster.
+You can verify that with:
+
+```sh
+kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
+```
+
+## Part 3 - Deploy a PostgreSQL cluster
+
+As with any other deployment in Kubernetes, in order to deploy a PostgreSQL cluster
+you need to apply a configuration file that defines your desired `Cluster`.
+
+The [`cluster-emptydir.yaml`](samples/cluster-emptydir.yaml) sample file
+defines a simple `Cluster` with an `emptyDir` local volume:
+
+```yaml
+# Example of PostgreSQL cluster using emptyDir volumes
+apiVersion: postgresql.k8s.2ndq.io/v1alpha1
+kind: Cluster
+metadata:
+  name: postgresql-emptydir
+spec:
+  instances: 3
+
+  # Configuration of the application that will be used by
+  # this PostgreSQL cluster
+  applicationConfiguration:
+    database: app
+    owner: app
+
+  # PostgreSQL server configuration
+  postgresql:
+    # Example of configuration parameters for PostgreSQL
+    parameters:
+      - max_worker_processes = 20
+      - max_parallel_workers = 20
+      - max_replication_slots = 20
+      - hot_standby = true
+      - wal_keep_segments = 8
+
+    # Example of host based authentication directives
+    pg_hba:
+      # Grant local access
+      - local all all trust
+      # Grant local network access (within k8s cluster)
+      - host all all 10.0.0.0/8 trust
+      - host all all 172.0.0.0/8 trust
+      # Grant local network replication access (within k8s cluster)
+      - host replication all 10.0.0.0/8 trust
+      - host replication all 172.0.0.0/8 trust
+      # Require md5 authentication elsewhere
+      - host all all all md5
+      - host replication all all md5
+```
+
+This will create a `Cluster` called `postgresql-emptydir` with a PostgreSQL
+primary, two replicas, and a database called `app` owned by the `app` PostgreSQL user.
+
+!!! Note "There's more"
+    For more detailed information about the available options, please refer
+    to the ["Custom Resource Definitions" section](crd.md).
+
+In order to create the 3-node PostgreSQL cluster, you need to run the following command:
+
+```sh
+kubectl apply -f cluster-emptydir.yaml
+```
+
+You can check that the pods are being created with the `get pods` command:
+
+```sh
+kubectl get pods
+```
diff --git a/docs/src/samples.md b/docs/src/samples.md
new file mode 100644
index 0000000000..9b1d7c37f5
--- /dev/null
+++ b/docs/src/samples.md
@@ -0,0 +1,11 @@
+In this section you can find some examples of configuration files to set up your PostgreSQL `Cluster`.
+
+* [`cluster-emptydir.yaml`](samples/cluster-emptydir.yaml):
+  basic example of a `Cluster` that uses `emptyDir` local storage, for demonstration and experimentation purposes
+  on a personal Kubernetes cluster with Minikube or Kind, as described in the ["Quickstart"](quickstart.md).
+* [`cluster-storage-class.yaml`](samples/cluster-storage-class.yaml):
+  basic example of a `Cluster` that uses a specified storage class.
+* [`cluster-pvc-template.yaml`](samples/cluster-pvc-template.yaml):
+  basic example of a `Cluster` that uses a persistent volume claim template.
+
+For a list of available options, please refer to the ["Custom Resource Definitions" page](crd.md).
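+
+As an illustration, the storage section of a `Cluster` manifest using a storage
+class might look like the following sketch (the class name and size are
+assumptions for your environment; see the ["Custom Resource Definitions" page](crd.md)
+for the exact fields):
+
+```yaml
+  # Hypothetical storage section inside the Cluster spec
+  storage:
+    storageClass: standard
+    size: 1Gi
+```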