
[helm] regression: logs are not being tailed in 1.9.0 #4117

Closed
stanislav-zaprudskiy opened this issue May 6, 2020 · 9 comments · Fixed by #4122
Labels
deploy/helm kind/bug Something isn't working priority/p0 Highest priority. We are actively looking at delivering it.

Comments

@stanislav-zaprudskiy

Expected behavior

skaffold run --tail tails container logs

Actual behavior

skaffold run --tail doesn't tail container logs.

However! If I downgrade to 1.8.0, it tails just fine using the same command line and skaffold.yaml. With debug verbosity, the messages below just don't show up in 1.9.0 after port forwarding has been started:

INFO[0006] Streaming logs from pod: bpid-69d8988d46-k2p9d container: web
INFO[0006] Streaming logs from pod: bpid-69d8988d46-k2p9d container: init
INFO[0006] Streaming logs from pod: bpid-69d8988d46-k2p9d container: app
DEBU[0006] Running command: [kubectl --context docker-desktop logs --since=7s -f bpid-69d8988d46-k2p9d -c init --namespace default]
DEBU[0006] Running command: [kubectl --context docker-desktop logs --since=7s -f bpid-69d8988d46-k2p9d -c web --namespace default]
DEBU[0006] Running command: [kubectl --context docker-desktop logs --since=7s -f bpid-69d8988d46-k2p9d -c app --namespace default]

I also tried skaffold dev --tail without any luck.

Information

  • Skaffold version: 1.9.0
  • Operating system: macOS Catalina
  • Contents of skaffold.yaml:
apiVersion: skaffold/v2alpha4
kind: Config
build:
  tagPolicy:
    sha256: {}
  local:
    concurrency: 0
  artifacts:
    - image: bpid-app
      context: .
      custom:
        buildCommand: |-
          devops/skaffold/docker-build.bash devops/docker/app/Dockerfile devops/skaffold/.env.app
        dependencies:
          dockerfile:
            path: devops/docker/app/Dockerfile
            buildArgs:
              COMPOSER_AUTH: '{"gitlab-token": {"gitlab.com": "{{.GITLAB_PERSONAL_ACCESS_TOKEN}}"}}'
    - image: bpid-web
      context: .
      custom:
        buildCommand: |-
          devops/skaffold/docker-build.bash devops/docker/web/Dockerfile devops/skaffold/.env.web
        dependencies:
          dockerfile:
            path: devops/docker/app/Dockerfile

# https://skaffold.dev/docs/pipeline-stages/port-forwarding/
portForward:
  - resourceType: Service
    resourceName: bpid-mysql
    port: 3306
    localPort: 30336

profiles:
  - name: local-k8s
    activation:
      - kubeContext: microk8s
      - kubeContext: docker-desktop
      - kubeContext: docker-for-desktop
    deploy:
      helm:
        releases:
          - name: bpid
            chartPath: devops/helm-charts/laravel
            wait: true
            skipBuildDependencies: true
            imageStrategy:
              helm: {}
            values:
              image: bpid-app
              nginx.image: bpid-web
            valuesFiles:
              - devops/helm-values/local.yaml
              - devops/helm-values/local-secrets.yaml
              - devops/helm-values/local-personal.yaml
        flags:
          install:
            - --dep-up
            - --atomic
            - --timeout=120
          upgrade:
            - --install
            - --atomic
            - --cleanup-on-fail
            - --force
            - --timeout=60

Steps to reproduce the behavior

Compare output of skaffold run --tail or skaffold run --tail=true in 1.9.0 and 1.8.0.

@dgageot dgageot added deploy/helm kind/bug Something isn't working priority/p1 High impact feature/bug. labels May 6, 2020
@dgageot dgageot self-assigned this May 6, 2020

dgageot commented May 6, 2020

I'm the one who broke that... I fixed an issue where logging and port forwarding wouldn't take the runId into account. It was fixed by adding a selector on the label that carries the runId. However, I forgot that our labelling mechanism is flaky with helm: those labels get applied later, which leads to lost logs.
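The mismatch can be sketched locally. The label key used below (`skaffold.dev/run-id`) and the pod manifest are assumptions for illustration; the point is that a helm-rendered pod does not yet carry skaffold's run-id label when tailing starts, so a label-based watch matches nothing:

```shell
# Minimal local illustration of the selector mismatch.
# Assumption: skaffold selects pods on a run-id label such as
# skaffold.dev/run-id (key shown here for illustration only).

# A pod roughly as rendered by helm, before skaffold's labeller runs:
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bpid-69d8988d46-k2p9d
  labels:
    app: bpid
EOF

# A watch equivalent to `kubectl get pods -l skaffold.dev/run-id=<id>`
# matches nothing, because the label is absent from the pod:
if ! grep -q 'skaffold.dev/run-id' pod.yaml; then
  echo "no run-id label -> log watcher sees no pods"
fi
```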

Do you still see logs after some time?

@dgageot dgageot changed the title regression: logs are not being tailed in 1.9.0 [helm] regression: logs are not being tailed in 1.9.0 May 6, 2020

dgageot commented May 6, 2020

@stanislav-zaprudskiy which version of helm are you using?

@dgageot dgageot added priority/p0 Highest priority. We are actively looking at delivering it. and removed priority/p1 High impact feature/bug. labels May 6, 2020

dgageot commented May 6, 2020

I think it's worse than I thought. Logging is broken for every type of Kubernetes resource except a plain pod.


dgageot commented May 6, 2020

I take that back. The problem is only with Helm. Log tailing works fine for pods and deployments deployed with kubectl.


dgageot commented May 6, 2020

We have at least three ways to fix that:

  • Roll back "Only listen to pods for the current RunID" #4097, which reintroduces a slightly less important issue
  • Evolve labelDeployResults so that it can also patch the pods' specs. That should fix the logging but will, I think, spawn new pods, which is kind of a glitch.
  • For Helm 3, we might be able to use helm template instead of helm install. That would give us the rendered YAML, which we could then patch as we do for kubectl.
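The third option might look like the following sketch (shown for shape only, and requiring a live cluster; the release name and paths are taken from the skaffold.yaml above, and the patch step is hypothetical):

```shell
# Hypothetical flow for the helm-template option (Helm 3).
# Render the chart locally instead of letting helm install it:
helm template bpid devops/helm-charts/laravel \
  -f devops/helm-values/local.yaml > rendered.yaml

# ...skaffold would inject its run-id label into rendered.yaml here,
# much as it already does for kubectl-deployed manifests...

# Then apply the patched manifests directly:
kubectl apply -f rendered.yaml
```

As the next comment notes, this trades away helm's own upgrade/delete lifecycle, which is why it was not the option ultimately chosen.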


stanislav-zaprudskiy commented May 6, 2020

Logs didn't show up after 1h. @dgageot

$ helm version
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

@briandealwis

We can't use helm template: it doesn't play well with upgrade and delete. But Helm 3.1 supports filtering/transforming the manifest as part of installation (#2350 (comment)).
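Helm 3.1's `--post-renderer` flag fits that description: helm pipes the fully rendered manifest through an arbitrary executable before installing it. A minimal sketch of how skaffold could inject its run-id label this way (the label key, release invocation, and the deliberately naive sed-based script are assumptions for illustration, not skaffold's actual implementation):

```shell
# A post-renderer reads the rendered manifest on stdin and writes the
# modified manifest on stdout. This naive version appends a run-id
# label under every `labels:` key (GNU sed; illustration only).
cat > label-run-id.sh <<'EOF'
#!/bin/sh
sed "s/^\(\( *\)labels:\)$/\1\n\2  skaffold.dev\/run-id: ${RUN_ID}/"
EOF
chmod +x label-run-id.sh

# Helm >= 3.1 would invoke it during install/upgrade, e.g.:
# RUN_ID=<id> helm upgrade --install bpid devops/helm-charts/laravel \
#   --post-renderer ./label-run-id.sh

# Locally, the transformation can be observed directly:
printf '  labels:\n    app: bpid\n' | RUN_ID=1234 ./label-run-id.sh
```

Because the label lands in the manifest before helm submits it, the pods would carry the run-id from creation, avoiding the late-labelling gap described above.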


dgageot commented May 6, 2020 via email


nkubala commented May 6, 2020

Let's roll back the change that introduced this for now. I would be interested in exploring ways we can interact with the manifests generated by helm, though; if we can tease out templated manifests before they get sent off by helm, that would be a big win.
