
Commit 5475022

chenxu1990 authored and mergify[bot] committed
Document about stale resource cleanup
1. When a user deletes a PV manually, it results in stale metadata and a stale image in Ceph.
1 parent 34fc1d8 commit 5475022

File tree

2 files changed (+100, -0)


README.md

Lines changed: 1 addition & 0 deletions
@@ -36,6 +36,7 @@ Independent CSI plugins are provided to support RBD and CephFS backed volumes,
   for CephFS plugin configuration and deployment please
   refer [cephfs doc](https://github.com/ceph/ceph-csi/blob/master/docs/deploy-cephfs.md).
 - For example usage of RBD and CephFS CSI plugins, see examples in `examples/`.
+- For stale resource cleanup, please refer to the [cleanup doc](docs/resource-cleanup.md).

 NOTE:

docs/resource-cleanup.md

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
# Stale Resource Cleanup

If a PVC is created from a storage class whose `reclaimPolicy` is `Retain`,
deleting the PVC does not delete the PV object, the backend omap metadata, or
the backend image. Manually deleting the PV afterwards leaves stale omap keys
and values and a stale CephFS subvolume or RBD image behind, so the metadata
and the image must be cleaned up separately.

## Steps
### 1. Get the PV name from the PVC

a. Get the PV name:

`[$] kubectl get pvc pvc_name -n namespace -o wide`

```bash
$ kubectl get pvc mysql-pvc -o wide -n prometheus
NAME        STATUS   VOLUME
mysql-pvc   Bound    pvc-bc537af8-67fc-4963-99c4-f40b3401686a

CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
20Gi       RWO            csi-rbd        14d   Filesystem
```
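
If only the volume name is needed, a `-o jsonpath` query returns it directly.
A minimal sketch using the same example PVC and namespace:

```bash
# Print just .spec.volumeName of the PVC, which is the PV name
$ kubectl get pvc mysql-pvc -n prometheus -o jsonpath='{.spec.volumeName}'
pvc-bc537af8-67fc-4963-99c4-f40b3401686a
```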

### 2. Get the omap key/value

a. Get the omap key (the suffix of `csi.volumes.default`, here `default`, is
the value used for the CLI option [--instanceid](deploy-rbd.md#configuration)
in the provisioner deployment):

`[$] rados listomapkeys csi.volumes.default -p pool_name | grep pv_name`

```bash
$ rados listomapkeys csi.volumes.default -p kube_csi | grep pvc-bc537af8-67fc-4963-99c4-f40b3401686a
csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a
```

b. Get the omap value:

`[$] rados getomapval csi.volumes.default omapkey -p pool_name`

```bash
$ rados getomapval csi.volumes.default csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a \
    -p kube_csi
value (36 bytes) :
00000000 64 64 32 34 37 33 64 30 2d 36 61 38 63 2d 31 31 |dd2473d0-6a8c-11|
00000010 65 61 2d 39 31 31 33 2d 30 61 64 35 39 64 39 39 |ea-9113-0ad59d99|
00000020 35 63 65 37 |5ce7|
00000024
```

The ASCII column of the dump is the volume's image UUID, here
`dd2473d0-6a8c-11ea-9113-0ad59d995ce7`; it is used in the following steps.
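
To avoid transcribing the UUID from the hex dump, `rados getomapval` also
accepts an optional output file argument that receives the raw value. A
sketch, assuming your rados version supports the file argument and using
`/tmp/omapval` as an arbitrary example path:

```bash
# Write the raw omap value (the image UUID) to a file, then print it
$ rados getomapval csi.volumes.default \
    csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a /tmp/omapval -p kube_csi
$ cat /tmp/omapval
dd2473d0-6a8c-11ea-9113-0ad59d995ce7
```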

### 3. Delete the RBD image or CephFS subvolume

a. Remove the RBD image (`csi-vol-omapval`; the prefix `csi-vol` is the value
of [volumeNamePrefix](deploy-rbd.md#configuration)):

`[$] rbd remove rbd_image_name -p pool_name`

```bash
$ rbd remove csi-vol-dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
Removing image: 100% complete...done.
```
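
Before removing, it can be worth confirming that no client still has the
image open; `rbd status` lists active watchers. A sketch with the same
example image (the exact output wording may vary by Ceph release):

```bash
# "Watchers: none" means no client has the image mapped and removal is safe
$ rbd status csi-vol-dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
Watchers: none
```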

b. Remove the CephFS subvolume (`csi-vol-omapval`):

`[$] ceph fs subvolume rm volume_name subvolume_name group_name`

```bash
$ ceph fs subvolume rm cephfs csi-vol-340daf84-5e8f-11ea-8560-6e87b41d7a6e csi
```
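
If the subvolume name is not known, the subvolumes in a group can be listed
first (assuming the same `cephfs` volume and `csi` group as above):

```bash
# List all subvolumes in the "csi" group of the "cephfs" volume
$ ceph fs subvolume ls cephfs csi
```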

### 4. Delete the omap object and omap key

a. Delete the omap object:

`[$] rados rm csi.volume.omapval -p pool_name`

```bash
$ rados rm csi.volume.dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
```
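
To verify the object is gone, `rados stat` on the same name should now fail
with a "No such file or directory" error (the exact message format may differ
between Ceph versions):

```bash
# An ENOENT error here confirms the omap object was removed
$ rados stat csi.volume.dd2473d0-6a8c-11ea-9113-0ad59d995ce7 -p kube_csi
 error stat-ing kube_csi/csi.volume.dd2473d0-6a8c-11ea-9113-0ad59d995ce7: (2) No such file or directory
```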

b. Delete the omap key (the key found in step 2a):

`[$] rados rmomapkey csi.volumes.default omapkey -p pool_name`

```bash
$ rados rmomapkey csi.volumes.default csi.volume.pvc-bc537af8-67fc-4963-99c4-f40b3401686a \
    -p kube_csi
```
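
The `listomapkeys` query from step 2a should now return nothing:

```bash
# No output means the stale key is gone
$ rados listomapkeys csi.volumes.default -p kube_csi | grep pvc-bc537af8-67fc-4963-99c4-f40b3401686a
```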

### 5. Delete the PV

a. Delete the PV (PVs are cluster-scoped, so no `-n namespace` flag is needed):

`[$] kubectl delete pv pv_name`

```bash
$ kubectl delete pv pvc-bc537af8-67fc-4963-99c4-f40b3401686a
persistentvolume "pvc-bc537af8-67fc-4963-99c4-f40b3401686a" deleted
```
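
As a final check, fetching the PV should now fail:

```bash
$ kubectl get pv pvc-bc537af8-67fc-4963-99c4-f40b3401686a
Error from server (NotFound): persistentvolumes "pvc-bc537af8-67fc-4963-99c4-f40b3401686a" not found
```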
