
Dynamic Configuration  #1201

@antoniivanov

Description


What is the feature request? What problem does it solve?

Currently, administrators of the VDK Control Service configure data jobs using either:

  • a configuration-type plugin of the VDK SDK, or
  • updating the Control Service configuration through the helm chart's vdkOptions value, upgrading the helm release, and then re-deploying all jobs that need the new configuration.

Configuration can be set per data job or for all data jobs.
It is a requirement that job X sees only the configuration for job X (and not the one for job Y).

It looks like this: we store the configuration in a Kubernetes Secret resource, mount it in the Control Service, and on deploy of a data job set the values as environment variables.
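For illustration only (the Secret and container names here are hypothetical, not taken from the actual charts), exposing a Secret's entries to a job as environment variables roughly looks like:

```yaml
spec:
  containers:
    - name: data-job
      image: job-image
      envFrom:
        # every key/value pair in the Secret becomes an environment variable
        - secretRef:
            name: data-job-config
```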

Issues with these approaches are:

  • The first approach does not allow for sensitive configuration (e.g. passwords).
  • The second approach is very slow and error-prone. It may take hours to fully re-deploy 1000 jobs, for example.
  • Local and "cloud" (deployed) job configuration may differ.

The main issue we want to fix with this change is the time it takes to configure jobs.

Suggested solution

The plan is for #832 to solve this fully, but building a full-blown System Config API is still a lot of work.

The proposal here is to make a much smaller change that would enable administrators to configure jobs much more quickly (without needing to re-deploy each job):

The operator would still edit the helm chart values.yaml, but with a slightly different format.
The key is a job name or a special key like "all", which denotes all jobs,
and the value is in the same format as a data job's config.ini:

vdkOptions:
    job-1: |
       [vdk]
       VDK_CONFIG = one
    job-2: |
       [vdk]
       VDK_CONFIG = two
    all: |
       [vdk]
       VDK_CONFIG = three

Out of these, using the helm templating language (https://helm.sh/docs/chart_template_guide/control_structures/#looping-with-the-range-action), the following would be created:

apiVersion: v1
kind: Secret
metadata:
  name: vdk-options-secrets
type: Opaque
data:
  job-1: "configuration ini file of job 1 in base64"
  job-2: "configuration ini file of job 2 in base64"
  all: "configuration ini file for all jobs in base64"
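A sketch of the Helm template that could generate this Secret from the vdkOptions map (the template itself is not part of the proposal text; b64enc is a standard Sprig function available in Helm):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vdk-options-secrets
type: Opaque
data:
  {{- /* one Secret entry per job name (or "all") in vdkOptions */}}
  {{- range $jobName, $configIni := .Values.vdkOptions }}
  {{ $jobName }}: {{ $configIni | b64enc }}
  {{- end }}
```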

Then this would be mounted on the CronJob (and hence the Job/Pod) of the Data Job:

spec:
  template:
    spec:
      containers:
      - name: job-1
        image: job-image
        volumeMounts:
            - name: secrets
              mountPath: "/etc/secrets"
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: vdk-options-secrets
            items:
              - key: job-1
                path: vdk-options.ini
              - key: all
                path: vdk-options-all.ini
            optional: true

This would mount vdk-options.ini if it exists, or map it as an empty file if not (thanks to optional: true).

And finally, we extend vdk run to support multiple configuration files:

export VDK_CONFIG_FILES="/etc/secrets/vdk-options.ini;/etc/secrets/vdk-options-all.ini" 

VDK_CONFIG_FILES would have higher priority than the user-provided job-specific config file.
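A minimal sketch of how vdk run could merge these files, using Python's standard configparser. The function name and the precedence rule within VDK_CONFIG_FILES (files listed earlier win over files listed later) are assumptions for illustration, not part of the proposal:

```python
import configparser
import os


def load_vdk_config(job_config_file: str) -> configparser.ConfigParser:
    """Merge admin-provided VDK_CONFIG_FILES over the job's own config.ini.

    Assumptions (illustrative, not from the proposal): files listed earlier
    in VDK_CONFIG_FILES take precedence over later ones, and all of them
    take precedence over the job-specific config file.
    """
    admin_files = [
        f for f in os.environ.get("VDK_CONFIG_FILES", "").split(";") if f
    ]
    parser = configparser.ConfigParser()
    # configparser lets later files override earlier ones, so read the
    # lowest-priority file first and the highest-priority file last.
    parser.read([job_config_file] + list(reversed(admin_files)))
    return parser
```

With VDK_CONFIG_FILES="/etc/secrets/vdk-options.ini;/etc/secrets/vdk-options-all.ini", the job-specific vdk-options.ini is read last, so its values override both the "all" file and the user's config.ini, while options only present in the user's config.ini are preserved.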

Additional context

I think this could serve as an incremental step toward #832, as we can then create API logic that sets those values through an API instead of through helm.
