
Regex support in ReplacementTransformer broken. #5128

Open
m-trojanowski opened this issue Apr 13, 2023 · 26 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. triage/under-consideration

Comments

@m-trojanowski

m-trojanowski commented Apr 13, 2023

What happened?

Hello,
Up to kustomize version 4.5.7 I was able to use the ReplacementTransformer to find fields based on regexes (wildcards) inside the fieldPaths field, like so:

...
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

replacements:
  - source:
      namespace: myservice
      kind: ConfigMap
      name: myservice-config
      fieldPath: data.MYSERVICE_VERSION
    targets:
      - select:
          kind: StatefulSet
        fieldPaths: &fieldPaths1
          - spec.template.spec.containers.[name=myservice-*].image
          - spec.template.spec.initContainers.[name=myservice-*].image
        options: &options
          delimiter: ":"
          index: 1
      - select:
          kind: Deployment
        fieldPaths: *fieldPaths1
        options: *options
...

Here I'm searching for all StatefulSet and Deployment containers whose name matches myservice-*
and replacing their image tag with data from the "myservice-config" ConfigMap. To do so, I split the image fieldPath result on the delimiter : and pick the tag part of the string with the index: 1 option.
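
For illustration, with MYSERVICE_VERSION=n1287 this is the effect the replacement has in 4.5.7 on one of the containers below (compare the resources and the expected output further down):

# Input (myserviceap/resources.yaml)
containers:
- name: myservice-alerting
  image: myservice-alerting

# Output of kustomize v4.5.7: the image value is split on ":" and segment 1
# (the tag) is set from data.MYSERVICE_VERSION of the myservice-config ConfigMap
containers:
- name: myservice-alerting
  image: myservice-alerting:n1287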
This is no longer possible since version 5.0.0, and I believe I have found the code responsible for this change:
https://github.com/kubernetes-sigs/kustomize/blob/master/api/filters/replacement/replacement.go#L196

		if len(targetFields) == 0 {
			return errors.Errorf(fieldRetrievalError(fp, createKind != 0))
		}

This check prevents any kind of search using wildcards: if a wildcard fieldPath matches nothing in a selected target (for example, none of the Deployments or StatefulSets below define initContainers), the whole build now fails instead of the unmatched target simply being skipped.

What did you expect to happen?

I expected kustomize to actually render the Kubernetes manifests.
Instead, all I get is the following error:

./kustomize  build --enable-helm /home/mtrojanowski/Projects/myProject/deployments/environments/dev
Error: accumulating components: accumulateDirectory: "recursed accumulation of path '/home/mtrojanowski/Projects/myProject/deployments/components/myservice-version': unable to find field \"spec.template.spec.initContainers.[name=myservice-*].image\" in replacement target"

How can we reproduce it (as minimally and precisely as possible)?

.
├── components
│   └── myservice-version
│       └── kustomization.yaml
├── environments
│   └── dev
│       ├── kustomization.yaml
│       └── overlay
│           ├── config.properties
│           └── kustomization.yaml
└── myserviceap
    ├── kustomization.yaml
    └── resources.yaml

components/myservice-version/kustomization.yaml:

# components/myservice-version/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component

replacements:
  - source:
      namespace: myservice
      kind: ConfigMap
      name: myservice-config
      fieldPath: data.MYSERVICE_VERSION
    targets:
      - select:
          kind: StatefulSet
        fieldPaths: &fieldPaths1
          - spec.template.spec.containers.[name=myservice-*].image
          - spec.template.spec.initContainers.[name=myservice-*].image
        options: &options
          delimiter: ":"
          index: 1
      - select:
          kind: Deployment
        fieldPaths: *fieldPaths1
        options: *options

environments/dev/kustomization.yaml:

# environments/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myservice

resources:
- ./overlay
- ../../myserviceap

components:
- ../../components/myservice-version

environments/dev/overlay/kustomization.yaml:

# environments/dev/overlay/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: myservice

configMapGenerator:
- name: myservice-config
  envs:
  - config.properties
generatorOptions:
  disableNameSuffixHash: true
  labels:
    type: generated
  annotations:
    note: generated

environments/dev/overlay/config.properties:

MYSERVICE_VERSION=n1287

myserviceap/kustomization.yaml:

# myserviceap/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- resources.yaml

myserviceap/resources.yaml:

# myserviceap/resources.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-alerting
  labels:
    app: myservice-alerting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-alerting
  template:
    metadata:
      labels:
        app: myservice-alerting
    spec:
      containers:
      - name: myservice-alerting
        image: myservice-alerting
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
        startupProbe:
          httpGet:
            path: /health
            port: 80
          periodSeconds: 10
          failureThreshold: 30
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          periodSeconds: 30
          failureThreshold: 5
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myservice-collection
  labels:
    app: myservice-collection
spec:
  replicas: 1
  serviceName: myservice-collection
  selector:
    matchLabels:
      app: myservice-collection
  template:
    metadata:
      labels:
        app: myservice-collection
    spec:
      containers:
      - name: myservice-collection
        image: myservice-collection
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8000
        startupProbe:
          httpGet:
            path: /api/health
            port: 8000
          periodSeconds: 10
          failureThreshold: 30
        livenessProbe:
          httpGet:
            path: /api/health
            port: 8000
          periodSeconds: 30
          failureThreshold: 5
      - name: collection-elasticsearch
        image: myservice-collection-elasticsearch
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 9200
      - name: myservice-collection-engine
        image: myservice-collection-engine
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 8001
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myservice-etcd
  labels:
    app: myservice-etcd
  annotations:
    myservice-backup: true
spec:
  replicas: 1
  serviceName: myservice-etcd
  selector:
    matchLabels:
      app: myservice-etcd
  template:
    metadata:
      labels:
        app: myservice-etcd
    spec:
      containers:
      - name: etcd
        image: myservice-etcd
        imagePullPolicy: IfNotPresent
      - name: etcd-backup
        image: myservice-backup
        imagePullPolicy: Always
        ports:
          - containerPort: 8080
---
piVersion: apps/v1
kind: Deployment
metadata:
  name: myservice-prometheus
  labels:
    app: myservice-prometheus
  annotations:
    configmap.reloader.stakater.com/reload: "myservice-prometheus"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-prometheus
  template:
    metadata:
      labels:
        app: myservice-prometheus
    spec:
      serviceAccount: prom-sd
      serviceAccountName: prom-sd
      containers:
      - name: prometheus
        image: myservice-prometheus
        imagePullPolicy: IfNotPresent

Expected output

kustomize version {Version:kustomize/v4.5.7 GitCommit:56d82a8378dfc8dc3b3b1085e5a6e67b82966bd7 BuildDate:2022-08-02T16:35:54Z GoOs:linux GoArch:amd64}
kustomize build --enable-helm environments/dev

apiVersion: v1
data:
  MYSERVICE_VERSION: n1287
kind: ConfigMap
metadata:
  annotations:
    note: generated
  labels:
    type: generated
  name: myservice-config
  namespace: myservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myservice-alerting
  name: myservice-alerting
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-alerting
  template:
    metadata:
      labels:
        app: myservice-alerting
    spec:
      containers:
      - image: myservice-alerting:n1287
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 80
          periodSeconds: 30
        name: myservice-alerting
        ports:
        - containerPort: 80
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /health
            port: 80
          periodSeconds: 10
---
kind: Deployment
metadata:
  annotations:
    configmap.reloader.stakater.com/reload: myservice-prometheus
  labels:
    app: myservice-prometheus
  name: myservice-prometheus
  namespace: myservice
piVersion: apps/v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-prometheus
  template:
    metadata:
      labels:
        app: myservice-prometheus
    spec:
      containers:
      - image: myservice-prometheus
        imagePullPolicy: IfNotPresent
        name: prometheus
      serviceAccount: prom-sd
      serviceAccountName: prom-sd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: myservice-collection
  name: myservice-collection
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-collection
  serviceName: myservice-collection
  template:
    metadata:
      labels:
        app: myservice-collection
    spec:
      containers:
      - image: myservice-collection:n1287
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /api/health
            port: 8000
          periodSeconds: 30
        name: myservice-collection
        ports:
        - containerPort: 8000
        startupProbe:
          failureThreshold: 30
          httpGet:
            path: /api/health
            port: 8000
          periodSeconds: 10
      - image: myservice-collection-elasticsearch
        imagePullPolicy: IfNotPresent
        name: collection-elasticsearch
        ports:
        - containerPort: 9200
      - image: myservice-collection-engine:n1287
        imagePullPolicy: IfNotPresent
        name: myservice-collection-engine
        ports:
        - containerPort: 8001
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    myservice-backup: "true"
  labels:
    app: myservice-etcd
  name: myservice-etcd
  namespace: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice-etcd
  serviceName: myservice-etcd
  template:
    metadata:
      labels:
        app: myservice-etcd
    spec:
      containers:
      - image: myservice-etcd
        imagePullPolicy: IfNotPresent
        name: etcd
      - image: myservice-backup
        imagePullPolicy: Always
        name: etcd-backup
        ports:
        - containerPort: 8080

Actual output

kustomize version v5.0.1
kustomize build --enable-helm environments/dev

Error: accumulating components: accumulateDirectory: "recursed accumulation of path '/home/mtrojanowski/Projects/myProject/deployments/components/myservice-version': unable to find field \"spec.template.spec.initContainers.[name=myservice-*].image\" in replacement target"

Kustomize version

v5.0.1

Operating system

None

@m-trojanowski m-trojanowski added the kind/bug Categorizes issue or PR as related to a bug. label Apr 13, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Apr 13, 2023
@boscard

boscard commented Apr 13, 2023

I'm facing very similar issues in my projects. If this behavior is not restored, I will have to rewrite a lot of the Kustomize specs I'm using now :(

@travispeloton

Similar, but different: going from 4.x to 5.x I'm seeing,

Error: unable to find field "spec.template.metadata.labels.[app.kubernetes.io/version]" in replacement target

for a replacement template like

source:
  kind: Rollout
  name: XXX
  fieldPath: spec.template.spec.containers.0.image
  options:
    delimiter: ':'
    index: 1
targets:
  - select:
      kind: Namespace
    fieldPaths:
      - metadata.labels.[app.kubernetes.io/version]
  - select:
      namespace: XXX
    fieldPaths:
      - metadata.labels.[app.kubernetes.io/version]
      - spec.template.metadata.labels.[app.kubernetes.io/version]
      - spec.template.metadata.labels.[tags.datadoghq.com/version]
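
Presumably the error comes from a selected resource that has no pod template at all, so the spec.template.* paths cannot be resolved. A hypothetical example of such a resource in that namespace (not taken from the report above):

# Hypothetical resource in namespace XXX with no spec.template section, so
# spec.template.metadata.labels.[app.kubernetes.io/version] cannot be found in it.
apiVersion: v1
kind: Service
metadata:
  name: my-service        # hypothetical name
  namespace: XXX
spec:
  ports:
  - port: 80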

@natasha41575
Contributor

/assign

@k8s-ci-robot k8s-ci-robot removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label May 17, 2023
@ciaccotaco

ciaccotaco commented May 18, 2023

I am also experiencing this issue with replacements.

The previous behavior with create: false was to ignore the field if it was missing. Now, create: false fails if the field is missing.

This is the config that I am using.

source:
  kind: ConfigMap
  name: my-config-map
  fieldPath: data.my-field
targets:
  - select:
      apiVersion: storage.k8s.io
      kind: StorageClass
    fieldPaths:
      - parameters.network
    options:
      create: false

If the following resource exists, then the replacement works:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
parameters:
  network: <REPLACE_ME>

But if the resource does not contain parameters.network, then kustomize fails with this error:

Error: unable to render the source configs in /path/to/directory: failed to run kustomize build in /path/to/directory, stdout: : Error: accumulating components: accumulateDirectory: "recursed accumulation of path '/path/to/directory/components': unable to find field "parameters.network" in replacement target
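
For illustration, any selected StorageClass that lacks the field — such as the following hypothetical one — now fails the whole build instead of being skipped:

# Hypothetical StorageClass with no parameters.network field; under v5,
# create: false no longer skips it and the build errors out.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-other-storage-class        # hypothetical name
provisioner: kubernetes.io/no-provisioner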

@natasha41575
Contributor

The regex issue is interesting and I will try to find time to think about it.

But -

The previous behavior with create: false was to ignore the field if it was missing. Now, create: false fails if the field is missing.

That was an intentional change that we announced in the release notes. Because it was intentional and released with a major bump, I don't think we would consider that an issue or a regression.

@travispeloton

@natasha41575 any thoughts about what I ran across, which wasn't using a regex, but had the same error as originally reported?

@nicklinnell

I'm having the same issue as @m-trojanowski with replacements no longer working after 4.5.7

@m-trojanowski
Author

m-trojanowski commented May 30, 2023

@natasha41575
It would be really great to have such a feature. Before 5.0.0 I was able to cycle through all my manifests and alter only the ones that I wanted to, but now I'm quite limited because of this. Maybe there is a chance to introduce some extra flag to alter this behavior?
I can offer my time as well to help to implement it if needed.
Cheers!

@angelbarrera92

This comment was marked as resolved.

boscard added a commit to boscard/kustomize that referenced this issue Jun 21, 2023
Preserving code from kubernetes-sigs#5221 for investigation and future improvements to resolve kubernetes-sigs#5128
@RomanOrlovskiy

Hi @natasha41575. I am facing the same issues with the new create: false behavior, which now fails when the resource is missing. This completely breaks my current workflow of using multiple overlays to create preview environments, so I have to stick with v4.

It would be great to have an option/flag to allow using the previous behavior for specific replacements.

@boscard

boscard commented Jul 25, 2023

Hi @natasha41575,
Do you know if there is any plan to address this issue? My current specs are not rendering properly because of the change reported here, and because of that I can't migrate to Kustomize v5 :(

@abscondment

I'm experiencing this regex issue with kustomize 5.1.0. Would love to see it fixed.

@bartselect

Hey any update on this @natasha41575?

@sass1997

sass1997 commented Aug 17, 2023

Maybe there should be an additional option, continueOnFailure: true or skip: true, to cover the case where some possible replacement targets don't have and don't need the value. I'm heavily using labelSelector so that I don't have to manage every resource by name. Unfortunately, I'm now forced to maintain the reject list with the resources that are currently failing.
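
A rough sketch of what such a hypothetical option could look like in a replacements target (neither continueOnFailure nor skip exists in any released kustomize):

# Hypothetical syntax only -- no released kustomize version supports continueOnFailure.
replacements:
  - source:
      kind: ConfigMap
      name: myservice-config              # example source, mirroring the original report
      fieldPath: data.MYSERVICE_VERSION
    targets:
      - select:
          labelSelector: app=myservice    # select by label instead of by name
        fieldPaths:
          - spec.template.spec.containers.[name=myservice-*].image
        options:
          delimiter: ":"
          index: 1
          continueOnFailure: true         # proposed: skip targets where the field is missing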

@m-trojanowski
Author

m-trojanowski commented Aug 21, 2023

Hi @natasha41575,
I've added draft #5280 so we can continue the discussion on how to approach this issue and whether it's even possible.
Cheers!

@KlausMandola

Maybe there should be an additional option, continueOnFailure: true or skip: true, to cover the case where some possible replacement targets don't have and don't need the value. I'm heavily using labelSelector so that I don't have to manage every resource by name. Unfortunately, I'm now forced to maintain the reject list with the resources that are currently failing.

I second this, as the 5.x versions currently render my whole project useless.
I use kustomize together with ArgoCD and have set up a project consisting of 10+ OpenShift namespaces, each provided with 100+ components (dc, svc, rt, you name it), with the OpenShift templates being generated by kustomize.

Components may share certain parameters, or a parameter may be used by only one component. The parameters are provided by a key-value template and are substituted with replacements blocks. This mechanism is now completely broken, and I will have to rewrite the whole thing and use rejects like crazy, or split the project up into hundreds of small files, which will result in a complete mess.

@m-trojanowski thanks for your proposal, I hope it will be taken into consideration.
I can live with an additional option in the replacements block, but would rather propose a command-line option to disable the behavior of erroring out on targets that are not found.

@boscard

boscard commented Sep 14, 2023

@KlausMandola as I have a very similar issue in my projects, I'm now planning to migrate from Kustomize to Jsonnet. It should not be very complicated, as YAML can easily be transformed to JSON, and the simplest use of Jsonnet is just plain JSON files.

abscondment added a commit to abscondment/nixpkgs that referenced this issue Sep 18, 2023
Include a kustomize_4 build, since v5 introduced breaking changes (kubernetes-sigs/kustomize#5128)
@renaudguerin

We are also badly affected by this change (see my other comment on the PR)

I have opened a formal feature request to allow users to opt in to the pre-5.0.0 behavior with a flag: #5440

@boscard

boscard commented Nov 14, 2023

@renaudguerin please also take a look at PR #5280, which should also resolve this issue, but for some reason nobody is willing to review it :(

TheBrainScrambler pushed a commit to TheBrainScrambler/nixpkgs that referenced this issue Dec 23, 2023
Include a kustomize_4 build, since v5 introduced breaking changes (kubernetes-sigs/kustomize#5128)
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 12, 2024
@bartselect

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 13, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 13, 2024
@renaudguerin

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 15, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 13, 2024
@boscard

boscard commented Aug 20, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2024
@ShivamAgrawal30

Hi,
is there any update on this issue?

I am on kustomize version v5.4.3 and still hitting this issue:
Error: unable to find field "spec.xx.xxx" in replacement target
