xl2times is an open source tool to convert TIMES models specified in Excel to a format ready for processing by GAMS.
Development of the tool originally started in a Microsoft repository, with the intention of making it easier for anyone to reproduce research results on TIMES models.
TIMES is an open source energy systems model generator developed by the Energy Technology Systems Analysis Program (ETSAP) of the International Energy Agency (IEA) that is used around the world to inform energy policy. It is fully explained in the TIMES Model Documentation.
Multiple approaches to using spreadsheets for specifying TIMES models have been developed, e.g. ANSWER-TIMES and VEDA-TIMES.
At present, xl2times implements partial support of the Veda approach described in the TIMES Model Documentation PART IV and the Veda Documentation.
Support of other approaches may be added over time.
You can install the latest published version of the tool from PyPI using pip (preferably in a virtual environment):
pip install xl2times
You can also install the latest development version by cloning this repository and running the following command in the root directory:
pip install .
After installation, run the following command to see the basic usage and available options:
xl2times --help
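For example, a typical conversion run points the tool at a directory of Excel files and selects an output directory. The model path below is just a placeholder; `--output_dir` is the same option used in the benchmark examples later in this README:

# Convert the Excel files in a model directory and write the result to output/
xl2times path/to/your/times-model --output_dir output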
The tool's documentation is at http://xl2times.readthedocs.io/ and its source is in the `docs/` directory.
The documentation is generated by Sphinx and hosted on ReadTheDocs. We use the following extensions:
- `myst-parser`: to be able to write documentation in Markdown
- `sphinx-book-theme`: the theme
- `sphinx-copybutton`: to add copy buttons to code blocks
- `sphinxcontrib-apidoc`: to automatically generate API documentation from the Python package
Documentation can be generated locally (after setting up your development environment as described below) by:
cd docs
make html
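If the build succeeds, you can open the generated pages in a browser. This assumes Sphinx's standard `_build` output directory (the usual Makefile default, stated here as an assumption rather than something this README confirms):

# From the docs/ directory, after `make html`:
open _build/html/index.html      # macOS
xdg-open _build/html/index.html  # Linux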
We recommend installing the tool in editable mode (`-e`) in a Python virtual environment:
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -e .[dev]
We use the black code formatter. The `pip` command above will install it along with other requirements.
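If you want to run the formatter manually, black can either report or fix formatting across the repository:

# Report files that would be reformatted, without changing them
black --check .
# Reformat all files in place
black .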
We also use the pyright type checker -- our GitHub Actions check will fail if pyright detects any type errors in your code. You can install pyright in your virtual environment and check your code by running these commands in the root of the repository:
pip install pyright==1.1.304
pyright
Additionally, you can install a git pre-commit hook that will ensure your changes are formatted and that pyright detects no issues before creating new commits:
pre-commit install
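The hooks then run automatically on each git commit; you can also run them over the whole repository at any time:

pre-commit run --all-files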
If you want to skip these pre-commit steps for a particular commit (for instance, if pyright reports issues but you still want to commit your changes to your branch), you can run:
git commit --no-verify
We use the TIMES DemoS models and some public TIMES models as benchmarks.
See our GitHub Actions CI configuration in `.github/workflows/ci.yml` and the utility script `utils/run_benchmarks.py` to see how we benchmark the tool and automatically check PRs for regressions.
If you are a developer, you can use the instructions below to set up and run the benchmarks locally:
./setup-benchmarks.sh
Note that this script assumes you have access to all the relevant repositories (some are private and you'll have to request access); if not, comment out the inaccessible benchmarks in `benchmarks.yml` before running.
Then to run the benchmarks:
# Run only a single benchmark by name (see benchmarks.yml for the list of names)
python utils/run_benchmarks.py benchmarks.yml --run DemoS_001-all
# To see the full output logs and save them to a file for convenience
python utils/run_benchmarks.py benchmarks.yml --run DemoS_001-all --verbose | tee out.txt
# Run all benchmarks (without GAMS run, just comparing CSV data for regressions)
# Note: if you have multiple remotes, set etsap-TIMES/xl2times as the `origin`, as it is used for speed/correctness comparisons.
python utils/run_benchmarks.py benchmarks.yml
# Run benchmarks with regression tests vs main branch
git checkout -b feature/your_new_changes
# ... make your code changes here ...
git commit -a -m "your commit message" # code must be committed for the comparison against the `main` branch to run.
python utils/run_benchmarks.py benchmarks.yml
At this point, if you haven't broken anything, you should see something like:
Change in runtime: +2.97s
Change in correct rows: +0
Change in additional rows: +0
No regressions. You're awesome!
If you see a large increase in runtime, a decrease in correct rows, or an increase in additional rows, then you've broken something and will need to figure out how to fix it.
If your change is causing regressions on one of the benchmarks, a useful way to debug and find the difference is to run the tool in verbose mode and compare the intermediate tables. For example, if your branch has regressions on Demo 1:
# First, on the `main` branch:
xl2times benchmarks/xlsx/DemoS_001 --output_dir benchmarks/out/DemoS_001-all --ground_truth_dir benchmarks/csv/DemoS_001-all --verbose > before 2>&1
# Then, on your branch:
git checkout my-branch-name
xl2times benchmarks/xlsx/DemoS_001 --output_dir benchmarks/out/DemoS_001-all --ground_truth_dir benchmarks/csv/DemoS_001-all --verbose > after 2>&1
# And then compare the files `before` and `after`
code -d before after
VS Code will highlight the changes in the two files, which should correspond to any differences in the intermediate tables.
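If you prefer not to use VS Code, any diff tool can make the same comparison, for example:

diff -u before after | less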
Follow these steps to release a new version of xl2times and publish it on PyPI:
- Bump the version number in `pyproject.toml` and `xl2times/__init__.py` (use Semantic Versioning); a quick way to double-check that the two stay in sync is sketched after this list
- Open a PR with this change titled "Release vX.Y.Z"
- When the PR is merged, create a new release titled "vX.Y.Z". Select "Create a new tag: on publish" and click "Generate release notes" to generate the notes automatically.
- Click "Publish release" to publish the release on GitHub. A GitHub Actions workflow will automatically upload the distribution to PyPI.
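The two version strings should stay in sync, so a minimal sanity check before opening the release PR can help. This assumes the version is declared as `version = ...` in `pyproject.toml` and as `__version__` in `xl2times/__init__.py` (the usual conventions, stated here as an assumption):

# Print both declared versions so they can be compared by eye
grep '^version' pyproject.toml
grep '__version__' xl2times/__init__.py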
This project welcomes contributions and suggestions. See Code of Conduct and Contributing for more details.