Right now, when a registered robot is deleted (by deleting its robot-cr), the associated pubkey is not deleted, leading to an accumulation of stale pubkey ConfigMaps in the app-token-vendor namespace.
Some ideas:

- The token vendor could watch robot-crs and also delete the corresponding pubkeys when robots are deleted. It should not auto-delete pubkeys for robots that have no robot-cr in the cloud, because we support a dev setup in which the robot-cr is never synced to the cloud.
- token-vendor could build an in-memory map of last-seen timestamps and write those back to the pubkeys at a low rate (e.g. every 15 min). One could then script a cleanup against these timestamps.
- token-vendor could use a counter metric for verify requests, labeled with the robot-id. It is unclear whether this would cause too high cardinality.

We should also consider labeling the pubkeys for easy filtering in the backup_robots.sh script.
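The first idea, garbage-collecting pubkeys whose robot-cr no longer exists, could be sketched as a manual pass before building it into token-vendor. This is only an illustration: the `robot-` ConfigMap name prefix is taken from the backfill snippet below, while the `robots.registry.cloudrobotics.com` resource name is an assumption. As noted above, a missing robot-cr is not proof of staleness in a dev setup, so this only lists candidates instead of deleting them.

```shell
#!/usr/bin/env bash
# Sketch: list pubkey ConfigMaps that have no matching robot-cr.
# Assumptions: pubkey ConfigMaps are named "robot-<name>", and robot
# CRs are exposed as robots.registry.cloudrobotics.com.
set -euo pipefail

# Derive the robot name from a pubkey ConfigMap resource name,
# e.g. "configmap/robot-foo" -> "foo".
robot_name_from_cm() {
  local cm="$1"
  cm="${cm#configmap/}"
  echo "${cm#robot-}"
}

# Print candidate stale pubkey ConfigMaps. Deletion is deliberately
# left to the operator: in a dev setup the robot-cr may never be
# synced to the cloud, so a key without a CR is not always stale.
list_stale_pubkeys() {
  local ns="app-token-vendor"
  local cm robot
  for cm in $(kubectl -n "$ns" get cm -o name | grep -E '^configmap/robot-'); do
    robot="$(robot_name_from_cm "$cm")"
    if ! kubectl -n "$ns" get robots.registry.cloudrobotics.com "$robot" >/dev/null 2>&1; then
      echo "$cm"
    fi
  done
}
```

The same name-matching logic would apply if token-vendor did this in-process on a watch event rather than via kubectl.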
This will let us efficiently filter, e.g. when backing them up. Adjust
the backup script to filter by label.
Existing registrations can be backfilled, e.g. with:
```bash
# Shorthand for kubectl scoped to the token vendor namespace.
function kc {
  kubectl --context="${KUBE_CONTEXT}" -n app-token-vendor "$@"
}

# Label every pubkey ConfigMap so that scripts can filter on it.
for cm in $(kc get cm -o name | grep -E "^configmap/robot-"); do
  kc label "$cm" --overwrite app.kubernetes.io/managed-by=token-vendor
done
```
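With the label in place, the backup script can select pubkeys by label selector instead of grepping resource names. A minimal sketch, reusing the `KUBE_CONTEXT` convention from the snippet above (the function name `backup_pubkeys` is hypothetical, not taken from backup_robots.sh):

```shell
# Label selector matching the backfill loop above.
readonly PUBKEY_SELECTOR="app.kubernetes.io/managed-by=token-vendor"

# Dump all labeled pubkey ConfigMaps, e.g. for backup_robots.sh.
backup_pubkeys() {
  kubectl --context="${KUBE_CONTEXT}" -n app-token-vendor \
    get configmaps -l "$PUBKEY_SELECTOR" -o yaml
}
```

A label selector is also cheaper than name filtering, since the API server does the filtering instead of the client.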
See also: #320
See PR #318.