
Conversation

@tsbinns (Contributor) commented Oct 17, 2025

Reference issue (if any)

Fixes #13449

What does this implement/fix?

Adds a script that updates version specifiers for selected dependencies in line with SPEC0.

Also adds a workflow that runs this script on PRs and simply pushes the changes. The end goal would be for the workflow to run on a schedule and open a PR with any changes.

Additional information

Right now this only handles dependencies, not Python versions.

I'm using tomlkit rather than the built-in tomllib because we need to write the file back out, which tomllib does not support.
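
As a rough illustration (the file name and table path here are assumptions, not necessarily how the script is written), the tomlkit round trip looks something like:

import tomlkit

# Read pyproject.toml into a tomlkit document, which (unlike tomllib's plain
# dicts) keeps comments/formatting and can be written back out.
with open("pyproject.toml", encoding="utf-8") as fid:
    doc = tomlkit.parse(fid.read())

deps = doc["project"]["dependencies"]  # edit the dependency array in place
deps[0] = "numpy>=1.26"                # hypothetical updated pin

with open("pyproject.toml", "w", encoding="utf-8") as fid:
    fid.write(tomlkit.dumps(doc))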

Parsing the specifiers with the packaging tools works really nicely, but unfortunately spaces are not preserved, e.g., numpy >= 1.26 becomes numpy>=1.26. For consistency, the script applies this formatting to all pinned dependencies, even those that are not being changed. I'm open to changing/refining this.
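
For reference, a small illustration of that normalization with packaging (hypothetical versions, not the script itself):

from packaging.requirements import Requirement

req = Requirement("numpy >= 1.26")  # as written in pyproject.toml
print(str(req))                     # "numpy>=1.26" -- whitespace is not preserved

bumped = Requirement(f"{req.name}>=2.0")  # hypothetical SPEC0 bump
print(str(bumped))                        # "numpy>=2.0"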

I had to take a somewhat roundabout route to find the target branch to push to, as the zizmor pre-commit check was warning of code injection via template expansion with TARGET_BRANCH="${{ github.head_ref }}".

I also had to update the yamllint rules for the pre-commit check to pass on my Windows machine (I had the same issue in MNE-Connectivity before: mne-tools/mne-connectivity#198).

import warnings
from datetime import timedelta

import pandas as pd

@tsbinns (author) commented:

Pandas is a bit overkill; I just carried it over from the SPEC0 example scripts. I can refine this once everything's working.

@tsbinns commented Oct 17, 2025

The workflow is failing because the permissions aren't right to push the changes to this PR. I've been reading up on this, but can't seem to figure out a solution without adding a new PAT secret to the repo.

@drammock (Member) commented:

xref to a couple of things that (1) this might break (?) and (2) might have some code you could steal:

pre-commit hooks that parse pyproject.toml (and copy some things from it into other places)

# dependencies
- repo: local
  hooks:
    - id: update-env-file
      name: Copy dependency changes from pyproject.toml to environment.yml
      language: python
      entry: ./tools/hooks/update_environment_file.py
      files: '^(pyproject.toml|tools/hooks/update_environment_file.py)$'
- repo: local
  hooks:
    - id: dependency-sync
      name: Copy core dependencies from pyproject.toml to README.rst
      language: python
      entry: ./tools/hooks/sync_dependencies.py
      files: '^(pyproject.toml|tools/hooks/sync_dependencies.py)$'
      additional_dependencies: ["mne==1.10.0"]

In particular, for our README we re-write the pins to look pretty; maybe this could be adapted:

def _prettify_pin(pin):
    if pin is None:
        return ""
    pins = pin.split(",")
    replacements = {
        "<=": " ≤ ",
        ">=": " ≥ ",
        "<": " < ",
        ">": " > ",
    }
    for old, new in replacements.items():
        pins = [p.replace(old, new) for p in pins]
    pins = reversed(pins)
    return ",".join(pins)
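
A quick usage trace of that helper (my own example, not from the README script):

_prettify_pin(">=1.26,<3")  # -> " < 3, ≥ 1.26" (note the reversed order)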

@tsbinns commented Oct 20, 2025

@drammock I've pushed some changes so that the formatting better reflects what is currently there.

Regarding this interfering with the pre-commit hooks by making changes after they have run: is there a way for this action's changes to pyproject.toml to re-trigger those hooks?

@larsoner (Member) commented:

> Regarding this interfering with the pre-commit hooks by making changes after they have run: is there a way for this action's changes to pyproject.toml to re-trigger those hooks?

You can always run pre-commit run --files pyproject.toml in the action. Does that do what you want?

Or really you can let the action push to a branch and make a PR, and as soon as that happens pre-commit.ci will run and push a commit as well.

@drammock (Member) left a review:

just a few things I noticed when skimming the script

return releases


def update_specifiers(dependencies, releases):

@drammock (Member) commented:

this func converts, e.g., matplotlib >= 3.8 to matplotlib >= 3.8.1

I'm not sure we actually want to pin at the patch level... for some dependency resolvers I think this would prevent ever getting 3.9.x, 3.10.x, etc.

@tsbinns commented Oct 20, 2025

Yeah, this is something I also wondered about. In the SPEC0 example script, patches are ignored for determining the release (and deprecation) date.

But I switched to considering patches for cases where the unpatched version is >2 years old but there is a more recent patch. E.g., with pyvistaqt the most recent version is 0.11.3, but 0.11.0 was released >2 years ago (June 2023), so the default script was not finding any compatible version. I get why pinning at the patch level is not ideal, though.

Another option is to use patched versions to determine within-date unpatched versions (so pyvistaqt 0.11.1 would be okay, even if it is >2 years old, and we would just pin pyvistaqt >= 0.11).
There is also a question of whether we do this for all packages, or only for those with no within-date unpatched versions. E.g., matplotlib 3.9 is within-date (May 2024), but 3.8 isn't (Sept 2023); however, 3.8.4 is within-date (April 2024), so should 3.8 be supported?

@tsbinns (author) added:

Just to add, if we only allow patch versions to count for packages with no within-date base versions, then e.g., numpy 1.26 would no longer be supported (released Sept 2023).

@drammock (Member) replied:

Reasoning through it:

  • we're trying to save ourselves the burden of maintaining compatibility with very old versions of our deps
  • we should assume that our deps don't break compatibility with their patch releases
  • we should assume that dependency solvers will generally use the newest patch of a given major.minor
  • therefore:
    • if there's a 3.8.x that was released less than 2 years ago, we should not drop the 3.8.x series, effectively saying "MNE should still work with 3.8.0" even if 3.8.0 was released more than 2 years ago. I think in (almost) all cases that should be fine
    • spelling the pins as major.minor instead of major.minor.patch conveys that intent/understanding
    • I think the upshot is that when calculating the drop date, we should use the latest (not earliest) release date within a given major.minor series (see the sketch below)
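
A rough sketch of that rule (function and variable names are mine, not the PR's actual code): group release dates by major.minor and keep a series for as long as its newest patch is within the two-year window.

from collections import defaultdict
from datetime import datetime, timedelta, timezone

from packaging.version import Version

WINDOW = timedelta(days=2 * 365)


def oldest_supported_series(releases, now=None):
    """releases maps version strings to their (UTC) release datetimes."""
    now = now or datetime.now(timezone.utc)
    latest_in_series = defaultdict(lambda: datetime.min.replace(tzinfo=timezone.utc))
    for version, date in releases.items():
        v = Version(version)
        key = (v.major, v.minor)
        latest_in_series[key] = max(latest_in_series[key], date)
    # a series stays supported while its *latest* release is within the window
    supported = [s for s, d in latest_in_series.items() if now - d <= WINDOW]
    return min(supported) if supported else None  # e.g. (3, 8) -> pin ">=3.8"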

@tsbinns (author) replied:

Sounds good, I'll switch to this.

@larsoner commented Oct 20, 2025

In my mind the idea is mostly to support far enough back for end users. So if an end user 2 years ago today (2023 Oct 20) installed with everything at the latest released version, what would they be using? That's where we should set our minimums. NumPy 2.0 wasn't out until 2024/06/16, so they would have had a 1.26.x variant (1.26.1 from 2023 Oct 14) installed, so that's where our min pin should be (maybe whittled down / simplified to 1.26) as of today.

@larsoner (Member) added:

... but I also recognize this is not what SPEC-0 actually says in their table. I'd rather support "2 years back" according to this "what would users actually have installed" definition, but I can live with the more strict SPEC0 definition as well

@drammock (Member) replied:

as you can probably tell I'm quite conflicted between "be accommodating of our users" and "follow the guidelines developed with our peers". I suppose if I reasoned through it above and came to the same conclusion as @larsoner just did, then probably we should just buck the SPEC-0 advice, and state that we support "dependency >= X.Y where there exists a version X.Y.Z that was released less than 2 years ago". I'll hide a couple of my prior comments that said otherwise, and consider it settled AFAIC.

@larsoner commented Oct 20, 2025

Yeah, me too! I think it's closer to the kind of support our end users have experienced in terms of when we bump minimums. And for us it's not too onerous to stick to I think.

To clarify slightly, I think there are some potential differences between the two rules:

  • "support dependency >= X.Y where there exists a version X.Y.Z that was released less than 2 years ago"
  • "support as a minimum the largest X.Y for which a X.Y version was available 2 years ago"

Specifically I'm thinking of two cases:

  1. Overlapping support windows, if anybody ends up using them. For example, if NumPy decided to cut a 1.26.5 today for some critical security fix, then the first rule would mean we'd have to wait until 2027 Oct 20 to bump our NumPy minimum to 2.0. Under the second rule, we'd bump 2 years after 2.0.0 came out, so on 2026 June 16.
  2. Long times between releases. To illustrate this, let's say we had a dependency on something that releases infrequently, like mne-gui-addons. It has one release, 0.1, from over two years ago (March 2023). If they cut a 0.2 release today, the first rule would mean we would immediately require version 0.2; the second rule would mean we would require 0.2 two years from today. This isn't a huge deal for us right now because most of our deps release fairly quickly (from a quick look it would generally make a ~4 month difference for NumPy and SciPy), but it does put us in a position where deps delaying releases effectively shortens our support windows for end users.

This makes me slightly prefer a definition based on "what was the newest X.Y.Z version available 2 years ago" rather than one based on "the oldest release X.Y.Z that was cut less than two years ago", but I can live with either of these as well.
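
For comparison, a sketch of that second reading (again illustrative, not code from this PR): take the newest major.minor that had any release on or before the two-years-ago cutoff and use it as the minimum pin.

from datetime import datetime, timedelta, timezone

from packaging.version import Version


def min_pin_two_years_back(releases, now=None):
    """releases maps version strings to their (UTC) release datetimes."""
    cutoff = (now or datetime.now(timezone.utc)) - timedelta(days=2 * 365)
    available = [Version(v) for v, date in releases.items() if date <= cutoff]
    if not available:
        return None
    newest = max(available)
    return f">={newest.major}.{newest.minor}"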

@drammock (Member) replied:

> if NumPy decided to cut a 1.26.5 today for some critical security fix, then the first rule would mean we'd have to wait until 2027 Oct 20 to bump our NumPy minimum to 2.0. Under the second rule, we'd bump 2 years after 2.0.0 came out, so on 2026 June 16.

Yeah OK that makes sense.

> Long times between releases. [...] The second rule would mean we would require 0.2 two years from today.

Yeah, that also seems to favor your formulation of the rule.
