diff --git a/_data/toc.yaml b/_data/toc.yaml index d6804e68085..c97832b2db7 100644 --- a/_data/toc.yaml +++ b/_data/toc.yaml @@ -2373,8 +2373,18 @@ manuals: title: Troubleshoot with logs - path: /datacenter/dtr/2.5/guides/admin/monitor-and-troubleshoot/troubleshoot-batch-jobs/ title: Troubleshoot batch jobs - - path: /datacenter/dtr/2.5/guides/admin/backups-and-disaster-recovery/ - title: Backups and disaster recovery + - sectiontitle: Disaster recovery + section: + - title: Overview + path: /datacenter/dtr/2.5/guides/admin/disaster-recovery/ + - title: Repair a single replica + path: /datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-single-replica/ + - title: Repair a cluster + path: /datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-cluster/ + - title: Create a backup + path: /datacenter/dtr/2.5/guides/admin/disaster-recovery/create-a-backup/ + - title: Restore from a backup + path: /datacenter/dtr/2.5/guides/admin/disaster-recovery/restore-from-backup/ - sectiontitle: User guides section: - sectiontitle: Access DTR diff --git a/datacenter/dtr/2.5/guides/admin/configure/use-a-load-balancer.md b/datacenter/dtr/2.5/guides/admin/configure/use-a-load-balancer.md index 00378fd246c..cb4a30dbcb6 100644 --- a/datacenter/dtr/2.5/guides/admin/configure/use-a-load-balancer.md +++ b/datacenter/dtr/2.5/guides/admin/configure/use-a-load-balancer.md @@ -267,5 +267,4 @@ docker run --detach \ ## Where to go next -* [Backups and disaster recovery](../backups-and-disaster-recovery.md) -* [Monitor and troubleshoot](../monitor-and-troubleshoot/index.md) +* [DTR architecture](../../architecture.md) diff --git a/datacenter/dtr/2.5/guides/admin/backups-and-disaster-recovery.md b/datacenter/dtr/2.5/guides/admin/disaster-recovery/create-a-backup.md similarity index 58% rename from datacenter/dtr/2.5/guides/admin/backups-and-disaster-recovery.md rename to datacenter/dtr/2.5/guides/admin/disaster-recovery/create-a-backup.md index 3514ce5794e..3e318d0a97e 100644 --- 
a/datacenter/dtr/2.5/guides/admin/backups-and-disaster-recovery.md +++ b/datacenter/dtr/2.5/guides/admin/disaster-recovery/create-a-backup.md @@ -1,17 +1,12 @@ --- -title: DTR backups and recovery -description: Learn how to back up your Docker Trusted Registry cluster, and to recover your cluster from an existing backup. -keywords: registry, high-availability, backup, recovery +title: Create a backup +description: Learn how to create a backup of Docker Trusted Registry, for disaster recovery. +keywords: dtr, disaster recovery --- -{% assign image_backup_file = "backup-images.tar" %} -{% assign metadata_backup_file = "backup-metadata.tar" %} +{% assign metadata_backup_file = "dtr-metadata-backup.tar" %} +{% assign image_backup_file = "dtr-image-backup.tar" %} -DTR requires that a majority (n/2 + 1) of its replicas are healthy at all times -for it to work. So if a majority of replicas is unhealthy or lost, the only -way to restore DTR to a working state, is by recovering from a backup. This -is why it's important to ensure replicas are healthy and perform frequent -backups. ## Data managed by DTR @@ -66,8 +61,8 @@ you can backup the images by using ssh to log into a node where DTR is running, and creating a tar archive of the [dtr-registry volume](../architecture.md): ```none -{% raw %} sudo tar -cf {{ image_backup_file }} \ +{% raw %} $(dirname $(docker volume inspect --format '{{.Mountpoint}}' dtr-registry-)) {% endraw %} ``` @@ -89,26 +84,32 @@ docker run --log-driver none -i --rm \ --ucp-url \ --ucp-insecure-tls \ --ucp-username \ - --existing-replica-id > backup-metadata.tar + --existing-replica-id > {{ metadata_backup_file }} ``` Where: -* `` is the url you use to access UCP -* `` is the username of a UCP administrator -* `` is the id of the DTR replica to backup - +* `` is the url you use to access UCP. +* `` is the username of a UCP administrator. +* `` is the id of the DTR replica to backup. 
This prompts you for the UCP password, backs up the DTR metadata and saves the result into a tar archive. You can learn more about the supported flags in -the [reference documentation](/reference/dtr/2.5/cli/backup.md). +the [reference documentation](../../reference/cli/backup.md). -The backup command doesn't stop DTR, so that you can take frequent backups -without affecting your users. Also, the backup contains sensitive information +By default the backup command doesn't stop the DTR replica being backed up. +This allows you to take backups without affecting your users. Since the replica +is not stopped, it's possible that writes that happen while the backup is taking +place won't be persisted. + +You can use the `--offline-backup` option to stop the DTR replica while taking +the backup. If you do this, remove the replica from the load balancing pool. + +Also, the backup contains sensitive information like private keys, so you can encrypt the backup by running: ```none -gpg --symmetric {{ backup-metadata.tar }} +gpg --symmetric {{ metadata_backup_file }} ``` This prompts you for a password to encrypt the backup, copies the backup file @@ -120,7 +121,7 @@ To validate that the backup was correctly performed, you can print the contents of the tar file created.
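As a local illustration of this validation step, the following sketch builds a dummy archive with the documented layout and lists it the same way. The `2.5.0` version string and all paths are hypothetical stand-ins for a real backup:

```shell
# Sketch only: create a fake metadata backup with the documented layout,
# then list its contents with tar -t. All paths are hypothetical.
mkdir -p dtr-backup-v2.5.0/rethink/properties
echo '{}' > dtr-backup-v2.5.0/rethink/properties/0
tar -cf dtr-metadata-backup.tar dtr-backup-v2.5.0
tar -tf dtr-metadata-backup.tar
```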
The backup of the images should look like: ```none tar -tf {{ image_backup_file }} dtr-backup-v{{ page.dtr_version }}/ dtr-backup-v{{ page.dtr_version }}/rethink/ @@ -130,7 +131,7 @@ dtr-backup-v{{ page.dtr_version }}/rethink/layers/ And the backup of the DTR metadata should look like: ```none -tar -tf {{ backup-metadata.tar }} +tar -tf {{ metadata_backup_file }} # The archive should look like this dtr-backup-v{{ page.dtr_version }}/ @@ -142,96 +143,9 @@ dtr-backup-v{{ page.dtr_version }}/rethink/properties/0 If you've encrypted the metadata backup, you can use: ```none -gpg -d /tmp/backup.tar.gpg | tar -t +gpg -d {{ metadata_backup_file }}.gpg | tar -t ``` You can also create a backup of a UCP cluster and restore it into a new cluster. Then restore DTR on that new cluster to confirm that everything is working as expected. - -## Restore DTR data - -If your DTR has a majority of unhealthy replicas, the one way to restore it to -a working state is by restoring from an existing backup. - -To restore DTR, you need to: - -1. Stop any DTR containers that might be running -2. Restore the images from a backup -3. Restore DTR metadata from a backup -4. Re-fetch the vulnerability database - -You need to restore DTR on the same UCP cluster where you've created the -backup. If you restore on a different UCP cluster, all DTR resources will be -owned by users that don't exist, so you'll not be able to manage the resources, -even though they're stored in the DTR data store. - -When restoring, you need to use the same version of the `docker/dtr` image -that you've used when creating the update. Other versions are not guaranteed -to work.
- -### Stop DTR containers - -Start by removing any DTR container that is still running: - -```none -docker run -it --rm \ - {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} destroy \ - --ucp-insecure-tls -``` - -### Restore images - -If you had DTR configured to store images on the local filesystem, you can -extract your backup: - -```none -sudo tar -xzf {{ image_backup_file }} -C /var/lib/docker/volumes -``` - -If you're using a different storage backend, follow the best practices -recommended for that system. When restoring the DTR metadata, DTR will be -deployed with the same configurations it had when creating the backup. - - -### Restore DTR metadata - -You can restore the DTR metadata with the `docker/dtr restore` command. This -performs a fresh installation of DTR, and reconfigures it with -the configuration created during a backup. - -Load your UCP client bundle, and run the following command, replacing the -placeholders for the real values: - -```none -read -sp 'ucp password: ' UCP_PASSWORD; \ -docker run -i --rm \ - --env UCP_PASSWORD=$UCP_PASSWORD \ - {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \ - --ucp-url \ - --ucp-insecure-tls \ - --ucp-username \ - --ucp-node \ - --replica-id \ - --dtr-external-url < {{ metadata_backup_file }} -``` - -Where: - -* `` is the url you use to access UCP -* `` is the username of a UCP administrator -* `` is the hostname of the node where you've restored the images -* `` the id of the replica you backed up -* ``the url that clients use to access DTR - -### Re-fetch the vulnerability database - -If you're scanning images, you now need to download the vulnerability database. - -After you successfully restore DTR, you can join new replicas the same way you -would after a fresh installation. [Learn more](configure/set-up-vulnerability-scans.md). 
- -## Where to go next - -* [Set up high availability](configure/set-up-high-availability.md) -* [DTR architecture](../architecture.md) diff --git a/datacenter/dtr/2.5/guides/admin/disaster-recovery/index.md b/datacenter/dtr/2.5/guides/admin/disaster-recovery/index.md new file mode 100644 index 00000000000..616618604fe --- /dev/null +++ b/datacenter/dtr/2.5/guides/admin/disaster-recovery/index.md @@ -0,0 +1,58 @@ +--- +title: DTR disaster recovery overview +description: Learn the multiple disaster recovery strategies you can use with + Docker Trusted Registry. +keywords: dtr, disaster recovery +--- + +Docker Trusted Registry is a clustered application. You can join multiple +replicas for high availability. +For a DTR cluster to be healthy, a majority of its replicas (n/2 + 1) need to +be healthy and be able to communicate with the other replicas. This is also +known as maintaining quorum. + +This means there are three possible failure scenarios. + +## Replica is unhealthy but cluster maintains quorum + +One or more replicas are unhealthy, but the overall majority (n/2 + 1) is still +healthy and able to communicate with one another. + +![Failure scenario 1](../../images/dr-overview-1.svg) + +In this example the DTR cluster has five replicas, but one of the nodes stopped +working and another has problems with the DTR overlay network. +Even though these two replicas are unhealthy, the DTR cluster has a majority +of replicas still working, which means that the cluster is healthy. + +In this case you should repair the unhealthy replicas, or remove them from +the cluster and join new ones. + +[Learn how to repair a replica](repair-a-single-replica.md). + +## The majority of replicas are unhealthy + +A majority of replicas are unhealthy, making the cluster lose quorum, but at +least one replica is still healthy, or at least the data volumes for DTR are +accessible from that replica.
+ +![Failure scenario 2](../../images/dr-overview-2.svg) + +In this example the DTR cluster is unhealthy, but since one replica is still +running it's possible to repair the cluster without having to restore from +a backup. This minimizes the amount of data loss. + +[Learn how to do an emergency repair](repair-a-cluster.md). + +## All replicas are unhealthy + +This is a total disaster scenario where all DTR replicas are lost and +the data volumes for all DTR replicas are corrupted or lost. + +![Failure scenario 3](../../images/dr-overview-3.svg) + +In a disaster scenario like this, you'll have to restore DTR from an existing +backup. Restoring from a backup should only be used as a last resort, since +doing an emergency repair might prevent some data loss. + +[Learn how to restore from a backup](restore-from-backup.md). diff --git a/datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-cluster.md b/datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-cluster.md new file mode 100644 index 00000000000..f715285a8dc --- /dev/null +++ b/datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-cluster.md @@ -0,0 +1,81 @@ +--- +title: Repair a cluster +description: Learn how to repair DTR when the majority of replicas are unhealthy. +keywords: dtr, disaster recovery +--- + +For a DTR cluster to be healthy, a majority of its replicas (n/2 + 1) need to +be healthy and be able to communicate with the other replicas. This is known +as maintaining quorum. + +In a scenario where quorum is lost, but at least one replica is still +accessible, you can use that replica to repair the cluster. That replica doesn't +need to be completely healthy. The cluster can still be repaired as long as its +DTR data volumes are persisted and accessible. + +![Unhealthy cluster](../../images/repair-cluster-1.svg) + +Repairing the cluster from an existing replica minimizes the amount of data lost.
+If this procedure doesn't work, you'll have to +[restore from an existing backup](restore-from-backup.md). + +## Diagnose an unhealthy cluster + +When a majority of replicas are unhealthy, causing the overall DTR cluster to +become unhealthy, operations like `docker login`, `docker pull`, and `docker push` +fail with an `internal server error`. + +Accessing the `/_ping` endpoint of any replica also returns the same error. +It's also possible that the DTR web UI is partially or fully unresponsive. + +## Perform an emergency repair + +Use the `docker/dtr emergency-repair` command to try to repair an unhealthy +DTR cluster from an existing replica. + +This command checks that the data volumes for the DTR replica are uncorrupted, +redeploys all internal DTR components, and reconfigures them to use the existing +volumes. + +It also reconfigures DTR, removing all other nodes from the cluster, leaving DTR +as a single-replica cluster with the replica you chose. + +Start by finding the ID of the DTR replica that you want to repair from. +You can find the list of replicas by navigating to the UCP web UI, or by using +a UCP client bundle to run: + +``` +{% raw %} +docker ps --format "{{.Names}}" | grep dtr + +# The list of DTR containers with /-, e.g. +# node-1/dtr-api-a1640e1c15b6 +{% endraw %} +``` + +Then, use your UCP client bundle to run the emergency repair command: + +``` +docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} emergency-repair \ + --ucp-insecure-tls \ + --existing-replica-id +``` + +If the emergency repair procedure is successful, your DTR cluster now has a +single replica. You should now +[join more replicas for high availability](../configure/set-up-high-availability.md). + +![Healthy cluster](../../images/repair-cluster-2.svg) + +If the emergency repair command fails, try running it again using a different
As a last resort, you can restore your cluster from an existing +backup. + +## Where to go next + +* [Create a backup](create-a-backup.md) +* [Restore from an existing backup](restore-from-backup.md) diff --git a/datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-single-replica.md b/datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-single-replica.md new file mode 100644 index 00000000000..c27a7d244af --- /dev/null +++ b/datacenter/dtr/2.5/guides/admin/disaster-recovery/repair-a-single-replica.md @@ -0,0 +1,105 @@ +--- +title: Repair a single replica +description: Learn how to repair a single DTR replica when it is unhealthy. +keywords: dtr, disaster recovery +--- + +When one or more DTR replicas are unhealthy but the overall majority +(n/2 + 1) is healthy and able to communicate with one another, your DTR +cluster is still functional and healthy. + +![Cluster with two nodes unhealthy](../../images/repair-replica-1.svg) + +Given that the DTR cluster is healthy, there's no need to execute any disaster +recovery procedures like restoring from a backup. + +Instead, you should: + +1. Remove the unhealthy replicas from the DTR cluster. +2. Join new replicas to make DTR highly available. + +Since a DTR cluster requires a majority of replicas to be healthy at all times, +the order of these operations is important. If you join more replicas before +removing the ones that are unhealthy, your DTR cluster might become unhealthy. + +## Split-brain scenario + +To understand why you should remove unhealthy replicas before joining new ones, +imagine you have a five-replica DTR deployment, and something goes wrong with +the overlay network connection the replicas, causing them to be separated in +two groups. + +![Cluster with network problem](../../images/repair-replica-2.svg) + +Because the cluster originally had five replicas, it can work as long as +three replicas are still healthy and able to communicate (5 / 2 + 1 = 3). 
+Even though the network separated the replicas into two groups, DTR is still +healthy. + +If at this point you join a new replica instead of fixing the network problem +or removing the two replicas that got isolated from the rest, it's possible +that the new replica ends up on the side of the network partition that has +fewer replicas. + +![cluster with split brain](../../images/repair-replica-3.svg) + +When this happens, both groups now have the minimum number of replicas needed +to establish a cluster. This is also known as a split-brain scenario, because +both groups can now accept writes and their histories start diverging, making +the two groups effectively two different clusters. + +## Remove replicas + +To remove unhealthy replicas, you'll first have to find the replica ID +of one of the replicas you want to keep, and the replica IDs of the unhealthy +replicas you want to remove. + +You can find this in the **Stacks** page of the UCP web UI, or by using the UCP +client bundle to run: + +``` +{% raw %} +docker ps --format "{{.Names}}" | grep dtr + +# The list of DTR containers with /-, e.g. +# node-1/dtr-api-a1640e1c15b6 +{% endraw %} +``` + +Then use the UCP client bundle to remove the unhealthy replicas: + +```bash +docker run -it --rm {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} remove \ + --existing-replica-id \ + --replica-ids \ + --ucp-insecure-tls \ + --ucp-url \ + --ucp-username \ + --ucp-password +``` + +You can remove more than one replica at the same time by specifying multiple +IDs separated by commas. + +![Healthy cluster](../../images/repair-replica-4.svg) + +## Join replicas + +Once you've removed the unhealthy nodes from the cluster, you should join new +ones to make sure your cluster is highly available.
+ +Use your UCP client bundle to run the following command, which prompts you for +the necessary parameters: + +```bash +docker run -it --rm \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} join \ + --ucp-node \ + --ucp-insecure-tls +``` + +[Learn more about high availability](../configure/set-up-high-availability.md). + +## Where to go next + +* [Disaster recovery overview](index.md) diff --git a/datacenter/dtr/2.5/guides/admin/disaster-recovery/restore-from-backup.md b/datacenter/dtr/2.5/guides/admin/disaster-recovery/restore-from-backup.md new file mode 100644 index 00000000000..e0940cb5f7d --- /dev/null +++ b/datacenter/dtr/2.5/guides/admin/disaster-recovery/restore-from-backup.md @@ -0,0 +1,88 @@ +--- +title: Restore from a backup +description: Learn how to restore a DTR cluster from an existing backup +keywords: dtr, disaster recovery +--- + +{% assign metadata_backup_file = "dtr-metadata-backup.tar" %} +{% assign image_backup_file = "dtr-image-backup.tar" %} + +## Restore DTR data + +If your DTR has a majority of unhealthy replicas, the only way to restore it to +a working state is by restoring from an existing backup. + +To restore DTR, you need to: + +1. Stop any DTR containers that might be running +2. Restore the images from a backup +3. Restore DTR metadata from a backup +4. Re-fetch the vulnerability database + +You need to restore DTR on the same UCP cluster where you've created the +backup. If you restore on a different UCP cluster, all DTR resources will be +owned by users that don't exist, so you'll not be able to manage the resources, +even though they're stored in the DTR data store. + +When restoring, you need to use the same version of the `docker/dtr` image +that you've used when creating the backup. Other versions are not guaranteed +to work.
+ +### Remove DTR containers + +Start by removing any DTR container that is still running: + +```none +docker run -it --rm \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} destroy \ + --ucp-insecure-tls +``` + +### Restore images + +If you had DTR configured to store images on the local filesystem, you can +extract your backup: + +```none +sudo tar -xzf {{ image_backup_file }} -C /var/lib/docker/volumes +``` + +If you're using a different storage backend, follow the best practices +recommended for that system. + +### Restore DTR metadata + +You can restore the DTR metadata with the `docker/dtr restore` command. This +performs a fresh installation of DTR, and reconfigures it with +the configuration created during a backup. + +Load your UCP client bundle, and run the following command, replacing the +placeholders with real values: + +```none +read -sp 'ucp password: ' UCP_PASSWORD; \ +docker run -i --rm \ + --env UCP_PASSWORD=$UCP_PASSWORD \ + {{ page.dtr_org }}/{{ page.dtr_repo }}:{{ page.dtr_version }} restore \ + --ucp-url \ + --ucp-insecure-tls \ + --ucp-username \ + --ucp-node \ + --replica-id \ + --dtr-external-url < {{ metadata_backup_file }} +``` + +Where: + +* `` is the url you use to access UCP +* `` is the username of a UCP administrator +* `` is the hostname of the node where you've restored the images +* `` is the id of the replica you backed up +* `` is the url that clients use to access DTR + +### Re-fetch the vulnerability database + +If you're scanning images, you now need to download the vulnerability database. +[Learn more](../configure/set-up-vulnerability-scans.md). + +After you successfully restore DTR, you can join new replicas the same way you +would after a fresh installation.
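Before extracting a backup into `/var/lib/docker/volumes` on a production node, you can rehearse the extraction against a scratch directory. The sketch below uses a dummy archive; the `dtr-registry` layout is a hypothetical stand-in, not a real DTR backup:

```shell
# Sketch: rehearse the image-restore extraction in a scratch directory.
# The archive and its layout are dummies, not a real DTR backup.
mkdir -p dtr-registry/docker/registry/v2 scratch/volumes
echo layer-data > dtr-registry/docker/registry/v2/blob
tar -czf dtr-image-backup.tar dtr-registry
tar -xzf dtr-image-backup.tar -C scratch/volumes
ls scratch/volumes/dtr-registry/docker/registry/v2
```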
diff --git a/datacenter/dtr/2.5/guides/admin/upgrade.md b/datacenter/dtr/2.5/guides/admin/upgrade.md index 0168dd9c743..1158f906706 100644 --- a/datacenter/dtr/2.5/guides/admin/upgrade.md +++ b/datacenter/dtr/2.5/guides/admin/upgrade.md @@ -42,7 +42,7 @@ to ensure the impact on your business is close to none. Before starting your upgrade, make sure that: * The version of UCP you are using is supported by the version of DTR you are trying to upgrade to. [Check the compatibility matrix](https://success.docker.com/Policies/Compatibility_Matrix). -* You have a recent [DTR backup](backups-and-disaster-recovery.md). +* You have a recent [DTR backup](disaster-recovery/create-a-backup.md). * You [disable Docker content trust in UCP](/datacenter/ucp/2.2/guides/admin/configure/run-only-the-images-you-trust.md). ### Step 1. Upgrade DTR to {{ previous_version }} if necessary diff --git a/datacenter/dtr/2.5/guides/images/dr-overview-1.svg b/datacenter/dtr/2.5/guides/images/dr-overview-1.svg new file mode 100644 index 00000000000..1f447c8cea4 --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/dr-overview-1.svg @@ -0,0 +1,151 @@ + + + + dr-overview-1 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/dr-overview-2.svg b/datacenter/dtr/2.5/guides/images/dr-overview-2.svg new file mode 100644 index 00000000000..2785f30a158 --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/dr-overview-2.svg @@ -0,0 +1,163 @@ + + + + dr-overview-2 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! 
+ + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/dr-overview-3.svg b/datacenter/dtr/2.5/guides/images/dr-overview-3.svg new file mode 100644 index 00000000000..b8131a27d69 --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/dr-overview-3.svg @@ -0,0 +1,166 @@ + + + + dr-overview-3 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-cluster-1.svg b/datacenter/dtr/2.5/guides/images/repair-cluster-1.svg new file mode 100644 index 00000000000..ad835ae4e6e --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-cluster-1.svg @@ -0,0 +1,163 @@ + + + + repair-cluster-1 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! 
+ + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-cluster-2.svg b/datacenter/dtr/2.5/guides/images/repair-cluster-2.svg new file mode 100644 index 00000000000..e3e7a788d10 --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-cluster-2.svg @@ -0,0 +1,128 @@ + + + + repair-cluster-2 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-replica-1.svg b/datacenter/dtr/2.5/guides/images/repair-replica-1.svg new file mode 100644 index 00000000000..fa72b649825 --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-replica-1.svg @@ -0,0 +1,153 @@ + + + + repair-replica-1 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + ! + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-replica-2.svg b/datacenter/dtr/2.5/guides/images/repair-replica-2.svg new file mode 100644 index 00000000000..5ed6299bf06 --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-replica-2.svg @@ -0,0 +1,155 @@ + + + + repair-replica-2 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-replica-3.svg b/datacenter/dtr/2.5/guides/images/repair-replica-3.svg new file mode 100644 index 00000000000..a8cab06a4af --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-replica-3.svg @@ -0,0 +1,170 @@ + + + + repair-replica-3 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-replica-4.svg b/datacenter/dtr/2.5/guides/images/repair-replica-4.svg new file mode 100644 index 00000000000..2ea0cb6b5ee --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-replica-4.svg @@ -0,0 +1,136 @@ + + + + repair-replica-4 + Created with Sketch. 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + + + + worker node + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-single-replica-1.svg b/datacenter/dtr/2.5/guides/images/repair-single-replica-1.svg new file mode 100644 index 00000000000..3f1aaed794e --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-single-replica-1.svg @@ -0,0 +1,148 @@ + + + + repair-single-replica-1 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + \ No newline at end of file diff --git a/datacenter/dtr/2.5/guides/images/repair-single-replica-2.svg b/datacenter/dtr/2.5/guides/images/repair-single-replica-2.svg new file mode 100644 index 00000000000..72ffb3f4aeb --- /dev/null +++ b/datacenter/dtr/2.5/guides/images/repair-single-replica-2.svg @@ -0,0 +1,163 @@ + + + + repair-single-replica-2 + Created with Sketch. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + + + + + + + + worker node + + + + + + + DTR + + + + + + + + + + \ No newline at end of file