feat: add config validation and enable config key deletion #1734
Conversation
```python
with open(root_dir / scenario["file"]) as f:
    scenario_config = yaml.safe_load(f)
update_config(snakemake.config, scenario_config[snakemake.wildcards.run])
prune_config_deletes(snakemake.config)
```
Maybe we would want to validate here as well? In principle the DAG creation should have validated already, so I decided against it.
In theory, extending the pruning could make the configuration invalid, couldn't it? In that case it would be best to validate the configuration whenever it is changed.
During DAG generation the same pruning is also applied and then validated, so I think it would still be fine.
Ref #1547
Very true, exactly the same concept; I should have reviewed it again. The only thing that is new here is the null-value pruning and its application to `scenario_config`. What is the state there?
The state is pretty much still as described in the PR. JSON Schema is quite verbose, difficult to set up, and unable to handle complicated validations. Since pydantic now supports full export to JSON Schema, I would switch to a minimal pydantic implementation, which we could then extend. The catch with the minimal implementation is that we cannot replace all the doc tables at once. I'm not sure whether the pruning should be done with pydantic as well, but that could also be moved later. Otherwise, if you're only defining a placeholder here to have something for soft-forks, go ahead.
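For reference, a minimal sketch of the pydantic direction mentioned above; the model and its fields are hypothetical and only meant to illustrate the validation report and the JSON Schema export:

```python
# Hypothetical minimal model; field names are illustrative, not the real
# PyPSA-Eur schema.
from pydantic import BaseModel, ValidationError


class RunConfig(BaseModel):
    name: str = ""
    disable_progressbar: bool = False


try:
    RunConfig.model_validate({"name": "test", "disable_progressbar": "maybe"})
except ValidationError as err:
    # pydantic collects all failing fields into one detailed report
    print(err)

# Full export to JSON Schema, e.g. as a basis for generating doc tables later
schema = RunConfig.model_json_schema()
```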
Thanks, indeed. This is mainly thought of as a placeholder where soft-forks can add custom validation functions. Good to know about pydantic as the new plan; I prefer that. Will adapt here.
To be closed by #1912. |
Changes proposed in this Pull Request
While extending PyPSA-Eur in soft-forks, we are finding that we cannot account for all possible config options. Therefore I am proposing a central place, maybe in `scripts/config.py > validate_config`, where one would check the consistency of configuration settings.

This `validate_config` function is called from `scenario_config` for the config providers and for the initial config (at the top of the `Snakefile`); it is expected to raise an exception when there is a conflict.

At the moment the function is a stub; a good schema with a detailed report of all failed validations should be added before this is merged. But I would like to get a quick idea of whether you think a mechanism like this is valid. This might also be a good place for a jsonschema or pydantic model evaluation.
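As an illustration of the kind of stub this describes, here is a minimal sketch, assuming it lives in `scripts/config.py`; the example check inside is hypothetical and not part of this PR:

```python
def validate_config(config: dict) -> None:
    """Raise an exception if configuration settings are inconsistent.

    Soft-forks can extend this function with their own consistency checks.
    """
    errors = []

    # Hypothetical example check, purely illustrative:
    if config.get("foresight") not in ("overnight", "myopic", "perfect"):
        errors.append(f"unknown foresight mode: {config.get('foresight')!r}")

    if errors:
        raise ValueError("Invalid configuration:\n" + "\n".join(errors))
```

Calling this both at the top of the `Snakefile` and from the config providers would surface conflicts before any rule runs.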
The second problem this PR addresses is that the config is recursively pruned of `null` values, so that one can remove keys from dictionaries in higher-priority configs.

@tgilon @daniel-rdt
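A minimal sketch of such null-pruning; the helper name follows the diff above, but the actual implementation in this PR may differ, and the demo config is made up:

```python
def prune_config_deletes(config: dict) -> None:
    """Recursively remove keys set to None (YAML ``null``) in-place."""
    for key in list(config):
        if config[key] is None:
            del config[key]
        elif isinstance(config[key], dict):
            prune_config_deletes(config[key])


# e.g. a higher-priority config setting a key to `null` drops it entirely
cfg = {"sector": {"some_option": None, "heating": True}}
prune_config_deletes(cfg)
assert cfg == {"sector": {"heating": True}}
```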
Checklist
- Changed dependencies are added to `envs/environment.yaml`.
- Changes in configuration options are added in `config/config.default.yaml`.
- Changes in configuration options are documented in `doc/configtables/*.csv`.
- Sources of newly added data are documented in `doc/data_sources.rst`.
- A release note in `doc/release_notes.rst` is added.