
Secrets not getting created on new namespaces. #150

Closed
dsarath2205 opened this issue Feb 17, 2021 · 23 comments

@dsarath2205

dsarath2205 commented Feb 17, 2021

Consider a situation where I deployed Reflector four days ago and created a namespace two days later. Reflector is only copying secrets to the namespaces that existed when it was deployed; it is not performing any action on the namespace created two days ago.

Can you please let me know how I can configure Reflector so that it also copies secrets to namespaces created after Reflector itself was deployed.

Config:
reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: ""
reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
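For context, these annotations go on the source secret itself. A minimal sketch of how they would be applied (the secret name `my-secret`, its namespace, and its data are hypothetical placeholders, not from this issue):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret          # hypothetical source secret
  namespace: default
  annotations:
    # allow reflection; an empty allowed-namespaces list permits all namespaces
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
    reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: ""
    # auto-create mirrored copies in permitted namespaces
    reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
type: Opaque
stringData:
  key: value               # placeholder data
```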

@stale

stale bot commented Mar 19, 2021

Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Mar 19, 2021
@sdurrheimer

Seems I have a similar issue, with random new namespaces not getting the reflected secrets. Restarting Reflector solves the issue temporarily.

@stale

stale bot commented Apr 9, 2021

Removed stale label.

@stale stale bot removed the stale label Apr 9, 2021
@icehaunter

I've also encountered exactly the same issue, unfortunately. At some point Reflector just stopped doing its job for new deployments to new namespaces.

@stale

stale bot commented Jun 3, 2021

Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jun 3, 2021
@icehaunter

The issue persists; no stale, please.

@stale

stale bot commented Jun 4, 2021

Removed stale label.

@stale stale bot removed the stale label Jun 4, 2021
@stale

stale bot commented Jun 16, 2021

Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jun 16, 2021
@sdurrheimer

bump

@stale

stale bot commented Jun 17, 2021

Removed stale label.

@stale stale bot removed the stale label Jun 17, 2021
@stale

stale bot commented Jun 26, 2021

Automatically marked as stale due to no recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Jun 26, 2021
@stale

stale bot commented Jul 8, 2021

Automatically closed stale item.

@stale stale bot closed this as completed Jul 8, 2021
winromulus added a commit that referenced this issue Oct 16, 2021
- New multi-arch pipeline with proper tagging convention
- Removed cert-manager extension (deprecated due to new support from cert-manager) Fixes: #191
- Fixed healthchecks. Fixes: #208
- Removed Slack support links (GitHub issues only). Fixes: #199
- Simplified startup and improved performance. Fixes: #194
- Huge improvements in performance and stability. Fixes: #187 #182 #166 #150 #138 #121 #108
@TaylorChristie

I'm still seeing this issue on the latest version (6.1.47). Restarting the pod solves it, but I just get the following repeated log and no other details:

2022-05-11 19:00:21.856 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Canceled using token.
2022-05-11 19:00:21.856 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:01:40.0056000. Faulted: False.
2022-05-11 19:00:21.856 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-11 19:00:53.332 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Canceled using token.
2022-05-11 19:00:53.332 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:01:40.0047759. Faulted: False.
2022-05-11 19:00:53.332 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-11 19:01:24.955 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Canceled using token.
2022-05-11 19:01:24.955 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:01:40.0041689. Faulted: False.
2022-05-11 19:01:24.955 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-11 19:02:01.856 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Canceled using token.
2022-05-11 19:02:01.857 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:01:40.0001084. Faulted: False.
2022-05-11 19:02:01.857 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-11 19:02:33.330 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Canceled using token.
2022-05-11 19:02:33.330 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:01:39.9980486. Faulted: False.
2022-05-11 19:02:33.330 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-11 19:03:04.958 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Canceled using token.
2022-05-11 19:03:04.958 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:01:40.0032231. Faulted: False.
2022-05-11 19:03:04.958 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-11 19:03:41.860 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Canceled using token.
2022-05-11 19:03:41.860 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:01:40.0029816. Faulted: False.
2022-05-11 19:03:41.860 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-11 19:04:13.334 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Canceled using token.
2022-05-11 19:04:13.334 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:01:40.0033626. Faulted: False.
2022-05-11 19:04:13.334 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-11 19:04:44.959 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Canceled using token.
2022-05-11 19:04:44.960 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:01:40.0011194. Faulted: False.
2022-05-11 19:04:44.960 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-11 19:05:21.859 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Canceled using token.
2022-05-11 19:05:21.859 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:01:39.9991666. Faulted: False.
2022-05-11 19:05:21.860 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-11 19:05:53.339 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Canceled using token.
2022-05-11 19:05:53.339 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:01:40.0054144. Faulted: False.
2022-05-11 19:05:53.339 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-11 19:06:24.963 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Canceled using token.
2022-05-11 19:06:24.963 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:01:40.0035626. Faulted: False.
2022-05-11 19:06:24.963 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-11 19:07:01.860 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Canceled using token.
2022-05-11 19:07:01.860 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:01:40.0002169. Faulted: False.
2022-05-11 19:07:01.861 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2022-05-11 19:07:33.340 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Canceled using token.
2022-05-11 19:07:33.340 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Session closed. Duration: 00:01:40.0007353. Faulted: False.
2022-05-11 19:07:33.340 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2022-05-11 19:08:04.961 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Canceled using token.
2022-05-11 19:08:04.961 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Session closed. Duration: 00:01:39.9981792. Faulted: False.
2022-05-11 19:08:04.962 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources
2022-05-11 19:08:41.867 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Canceled using token.
2022-05-11 19:08:41.867 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Session closed. Duration: 00:01:40.0061617. Faulted: False.
2022-05-11 19:08:41.867 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources

@mr-davidc

I'm in a similar boat as @TaylorChristie.

Reflector is running, but when cert-manager eventually creates the TLS secret (there is a significant delay, since cert-manager takes time to verify the CertificateRequest), Reflector doesn't seem to realise that the new TLS secret has been created and fails to replicate it to a newly created namespace when a deployment occurs.

Restarting the pod does make the secret sync across, but surely this shouldn't be required?

@michalgoldys

Bump: we hit the same situation.

@liewwy19

We had the same situation today on one of our clusters.

@blackliner

same here 😞

@winromulus
Contributor

@blackliner are you running the latest version?

@blackliner

Helm chart version v7.0.151

@winromulus
Contributor

Please try the latest version and let me know if the issue with the secrets watcher is fixed.

@UntouchedWagons

I'm using v7.1.216 and nothing's happening. The Secret that Cert-Manager makes has the correct annotations.

@alteredtech

Not having secrets populate to other namespaces either.
Running reflector 7.1.262 on k3s v1.29.2+k3s1, installed in the default namespace.

> kubectl describe deployment reflector
Name:                   reflector
Namespace:              default
CreationTimestamp:      Thu, 04 Jul 2024 16:27:36 -0600
Labels:                 app.kubernetes.io/instance=reflector
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=reflector
                        app.kubernetes.io/version=7.1.262
                        helm.sh/chart=reflector-7.1.262
Annotations:            deployment.kubernetes.io/revision: 2
                        meta.helm.sh/release-name: reflector
                        meta.helm.sh/release-namespace: default
Selector:               app.kubernetes.io/instance=reflector,app.kubernetes.io/name=reflector
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=reflector
                    app.kubernetes.io/name=reflector
  Annotations:      kubectl.kubernetes.io/restartedAt: 2024-07-04T17:26:34-06:00
  Service Account:  reflector
  Containers:
   reflector:
    Image:      emberstack/kubernetes-reflector:7.1.262
    Port:       25080/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:http/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:http/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
    Startup:    http-get http://:http/healthz delay=0s timeout=1s period=5s #success=1 #failure=10
    Environment:
      ES_Serilog__MinimumLevel__Default:        Information
      ES_Reflector__Watcher__Timeout:           
      ES_Reflector__Kubernetes__SkipTlsVerify:  false
    Mounts:                                     <none>
  Volumes:                                      <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  reflector-9d6b579bf (0/0 replicas created)
NewReplicaSet:   reflector-756b9c7795 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  63m    deployment-controller  Scaled up replica set reflector-9d6b579bf to 1
  Normal  ScalingReplicaSet  5m1s   deployment-controller  Scaled up replica set reflector-756b9c7795 to 1
  Normal  ScalingReplicaSet  4m50s  deployment-controller  Scaled down replica set reflector-9d6b579bf to 0 from 1

One of my cert YAML files:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: stage-demo-example-io
  namespace: default
spec:
  secretTemplate:
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "external-hosts, traefik, default"
  secretName: stage-demo-example-io-tls
  issuerRef:
    name: letsencrypt-stage
    kind: ClusterIssuer
  commonName: "*.demo.example.io"
  dnsNames:
  - "demo.example.io"
  - "*.demo.example.io"

Secrets in default:

> kubectl get secret
NAME                              TYPE                 DATA   AGE
digitalocean-dns                  Opaque               1      18h
prod-demo-example-io-tls          kubernetes.io/tls    2      17h
sh.helm.release.v1.reflector.v1   helm.sh/release.v1   1      66m
stage-demo-example-io-tls         kubernetes.io/tls    2      18h
stage-home-example-io-tls         kubernetes.io/tls    2      9h

But no secrets in external-hosts:

> kubectl get secret -n external-hosts
No resources found in external-hosts namespace.

Logs from the reflector pod:

> kubectl logs reflector-756b9c7795-btzcr
2024-07-04 23:26:39.223 +00:00 [INF] () Starting host
2024-07-04 23:26:39.533 +00:00 [INF] (ES.Kubernetes.Reflector.Core.NamespaceWatcher) Requesting V1Namespace resources
2024-07-04 23:26:39.552 +00:00 [INF] (ES.Kubernetes.Reflector.Core.SecretWatcher) Requesting V1Secret resources
2024-07-04 23:26:39.562 +00:00 [INF] (ES.Kubernetes.Reflector.Core.ConfigMapWatcher) Requesting V1ConfigMap resources

@alteredtech

AHHH, I figured out my issue. I overlooked the 'auto' part in the annotations.
I needed to have this:

  secretTemplate:
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "external-hosts, traefik, default"
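Putting that fix together with the Certificate shown earlier, the corrected manifest would look roughly like this (names, namespaces, and issuer are taken from the example above; verify against your own setup):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: stage-demo-example-io
  namespace: default
spec:
  # cert-manager copies these annotations onto the secret it creates,
  # which is what Reflector watches for
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
      reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "external-hosts, traefik, default"
  secretName: stage-demo-example-io-tls
  issuerRef:
    name: letsencrypt-stage
    kind: ClusterIssuer
  commonName: "*.demo.example.io"
  dnsNames:
  - "demo.example.io"
  - "*.demo.example.io"
```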
