Fixes in style and more on CDK #5292

Merged 3 commits on Oct 3, 2017
97 changes: 49 additions & 48 deletions docs/getting-started-guides/ubuntu/backups.md
@@ -3,27 +3,59 @@ title: Backups
---

{% capture overview %}
This page shows you how to backup and restore data from the different deployed services in a given cluster.
The state of a Kubernetes cluster is kept in the etcd datastore.
This page shows how to back up and restore the etcd instance shipped with
the Canonical Distribution of Kubernetes. Backing up application-specific data,
normally stored in a persistent volume, is outside the scope of this
document.
{% endcapture %}

{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}

{% capture steps %}
## Exporting cluster data
## Snapshot etcd data

Exporting of cluster data is not supported at this time.
The `snapshot` action of the etcd charm allows the operator to snapshot
a running cluster's data for use in cloning,
backing up, or migrating to a new cluster.

## Restoring cluster data
```
juju run-action etcd/0 snapshot target=/mnt/etcd-backups
```

Importing of cluster data is not supported at this time.
- **param** target: destination directory to save the resulting snapshot archive.
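Assuming a deployed etcd application, the snapshot round trip might look like the following sketch; the action id shown is illustrative, and `juju show-action-output` is the Juju 2.x command for inspecting action results.

```shell
# Request a snapshot archive under /mnt/etcd-backups on the unit;
# juju queues the action and prints its id.
juju run-action etcd/0 snapshot target=/mnt/etcd-backups

# Inspect the result once the action completes (the id is illustrative).
juju show-action-output b46d5d6f-5625-4320-8cda-b611c6ae580c
```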

## Exporting etcd data

Migrating etcd is a fairly easy task.
## Restore etcd data

Step 1: Snapshot your existing cluster. This is encapsulated in the `snapshot`
The etcd charm is capable of restoring its data from a cluster-data snapshot
via the `restore` action.
This comes with caveats and a very specific path to restore a cluster:
the cluster must have only a single member. It is therefore best to
deploy a new cluster using the etcd charm, without adding any additional units.

```
juju deploy etcd new-etcd
```

The above command deploys a single unit of etcd as the application 'new-etcd'.

```
juju run-action etcd/0 restore target=/mnt/etcd-backups
```

Once the restore action has completed, evaluate the cluster health. If the unit
is healthy, you may resume scaling the application to meet your needs.

- **param** target: destination directory to save the existing data.

- **param** skip-backup: Don't backup any existing data.
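Evaluating cluster health before scaling out might look like this sketch; the unit count is illustrative.

```shell
# Confirm the restored single-member cluster reports a healthy
# workload status before adding units.
juju status etcd

# Scale out only once the restored member is healthy
# (the unit count here is illustrative).
juju add-unit etcd -n 2
```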


## Migrating an etcd cluster
Using the above snapshot and restore operations, migrating etcd is a fairly easy task.

**Step 1:** Snapshot your existing cluster. This is encapsulated in the `snapshot`
action.

```
Results:
Action queued with id: b46d5d6f-5625-4320-8cda-b611c6ae580c
```

Step 2: Check the status of the action so you can grab the snapshot and verify
**Step 2:** Check the status of the action so you can grab the snapshot and verify
the sum. The copy.cmd result output is a copy/paste command for you to download
the exact snapshot that you just created.

juju scp etcd/0:/home/ubuntu/etcd-snapshots/etcd-snapshot-2016-11-09-02.41.47.tar.gz .
sha256sum etcd-snapshot-2016-11-09-02.41.47.tar.gz
```
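The downloaded archive can be verified against a recorded sum with `sha256sum -c`. The sketch below uses a stand-in archive, since the real snapshot name and contents will differ.

```shell
# Stand-in archive; in practice this is the snapshot fetched via juju scp.
echo "etcd member data" > member.db
tar -czf etcd-snapshot.tar.gz member.db

# Record the checksum at snapshot time...
sha256sum etcd-snapshot.tar.gz > etcd-snapshot.tar.gz.sha256

# ...and verify it after the download; prints "etcd-snapshot.tar.gz: OK".
sha256sum -c etcd-snapshot.tar.gz.sha256
```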

Step 3: Deploy the new cluster leader, and attach the snapshot:
**Step 3:** Deploy the new cluster leader, and attach the snapshot:

```
juju deploy etcd new-etcd --resource snapshot=./etcd-snapshot-2016-11-09-02.41.47.tar.gz
```

Step 4: Re-Initialize the master with the data from the resource we just attached
**Step 4:** Reinitialize the master with the data from the resource we just attached
in step 3.

```
juju run-action new-etcd/0 restore
```
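Steps 3 and 4 can be sketched as one short script; the snapshot filename is the one produced earlier and is otherwise illustrative.

```shell
#!/bin/sh
# Deploy a fresh single-unit etcd with the snapshot attached as a
# resource, then reinitialize it from that data.
SNAPSHOT=./etcd-snapshot-2016-11-09-02.41.47.tar.gz
juju deploy etcd new-etcd --resource snapshot="$SNAPSHOT"
juju run-action new-etcd/0 restore
```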

## Restoring etcd data

Allows the operator to restore the data from a cluster-data snapshot. This
comes with caveats and a very specific path to restore a cluster:

The cluster must be in a state of only having a single member. So it's best to
deploy a new cluster using the etcd charm, without adding any additional units.

```
juju deploy etcd new-etcd
```

> The above code snippet will deploy a single unit of etcd, as 'new-etcd'

```
juju run-action etcd/0 restore target=/mnt/etcd-backups
```

Once the restore action has completed, evaluate the cluster health. If the unit
is healthy, you may resume scaling the application to meet your needs.

- **param** target: destination directory to save the existing data.

- **param** skip-backup: Don't backup any existing data.

## Snapshot etcd data

Allows the operator to snapshot a running clusters data for use in cloning,
backing up, or migrating Etcd clusters.

juju run-action etcd/0 snapshot target=/mnt/etcd-backups

- **param** target: destination directory to save the resulting snapshot archive.
{% endcapture %}

{% capture discussion %}
# Known Limitations
## Known Limitations

#### Loss of PKI warning

If you destroy the leader (identified by the `*` next to the unit number in
`juju status`), all TLS PKI will be lost. No PKI migration occurs outside
of the units requesting and registering the certificates.

> Important: Mismanaging this configuration will result in locking yourself
> out of the cluster, and can potentially break existing deployments in very
> strange ways relating to x509 validation of certificates, which affects both
> servers and clients.
**Caution:** Mismanaging this configuration will result in locking yourself
out of the cluster, and can potentially break existing deployments in very
strange ways relating to x509 validation of certificates, which affects both
servers and clients.
{: .caution}

#### Restoring from snapshot on a scaled cluster

8 changes: 6 additions & 2 deletions docs/getting-started-guides/ubuntu/decommissioning.md
@@ -6,14 +6,18 @@ title: Decommissioning
This page shows you how to properly decommission a cluster.
{% endcapture %}

Warning: By the time you've reached this step you should have backed up your workloads and pertinent data, this section is for the complete destruction of a cluster.

{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.

**Warning:** By the time you've reached this step you should have backed up your workloads and pertinent data; this section is for the complete destruction of a cluster.
{: .warning}

{% endcapture %}

{% capture steps %}
## Destroy the Juju model
It is recommended to deploy individual Kubernetes clusters in their own models, so that there is a clean separation between environments. To remove a cluster first find out which model it's in with `juju list-models`. The controller reserves an `admin` model for itself. If you have chosen to not name your model it might show up as `default`.

```
$ juju list-models
```
19 changes: 7 additions & 12 deletions docs/getting-started-guides/ubuntu/glossary.md
@@ -6,23 +6,18 @@ title: Glossary and Terminology
This page explains some of the terminology used in deploying Kubernetes with Juju.
{% endcapture %}

{% capture prerequisites %}
This page assumes you have a working Juju deployed cluster.
{% endcapture %}


{% capture body %}

{% capture steps %}

controller - The management node of a cloud environment. Typically you have one controller per cloud region, or more in HA environments. The controller is responsible for managing all subsequent models in a given environment. It contains the Juju API server and its underlying database.
**controller** - The management node of a cloud environment. Typically you have one controller per cloud region, or more in HA environments. The controller is responsible for managing all subsequent models in a given environment. It contains the Juju API server and its underlying database.

model - A collection of charms and their relationships that define a deployment. This includes machines and units. A controller can host multiple models. It is recommended to separate Kubernetes clusters into individual models for management and isolation reasons.
**model** - A collection of charms and their relationships that define a deployment. This includes machines and units. A controller can host multiple models. It is recommended to separate Kubernetes clusters into individual models for management and isolation reasons.

charm - The definition of a service, including its metadata, dependencies with other services, required packages, and application management logic. It contains all the operational knowledge of deploying a Kubernetes cluster. Included charm examples are `kubernetes-core`, `easy-rsa`, `kibana`, and `etcd`.
**charm** - The definition of a service, including its metadata, dependencies with other services, required packages, and application management logic. It contains all the operational knowledge of deploying a Kubernetes cluster. Included charm examples are `kubernetes-core`, `easy-rsa`, `kibana`, and `etcd`.

unit - A given instance of a service. These may or may not use up a whole machine, and may be colocated on the same machine. So for example you might have a `kubernetes-worker`, and `filebeat`, and `topbeat` units running on a single machine, but they are three distinct units of different services.
**unit** - A given instance of a service. These may or may not use up a whole machine, and may be colocated on the same machine. So for example you might have a `kubernetes-worker`, and `filebeat`, and `topbeat` units running on a single machine, but they are three distinct units of different services.

machine - A physical node, these can either be bare metal nodes, or virtual machines provided by a cloud.
**machine** - A physical node, these can either be bare metal nodes, or virtual machines provided by a cloud.
{% endcapture %}

{% include templates/task.md %}
{% include templates/concept.md %}