Pkg3: conservative compatibility will make it harder to access new features #3

[sorry for wall of text - leaving town this weekend, want to write initial reactions down while fresh]

> A package should not declare compatibility with a minor version series unless some version in that series has actually been published – this guarantees that compatibility can (and should) be tested. If a new compatible major or minor version of a package is released, this should be reflected by publishing a new patch that expands the compatibility claims.

So this is requiring upper bounds, and in a strict form where the bound must already exist as a published version (though unlike current upper bounds, these would be inclusive). While I like that idea in theory - only allowing dependency versions that are known to work to be installed - it trades the "a new release of package A broke package B" problem for a smaller feasible set of allowed versions. Users of widely depended-on packages will be held back to old versions whenever they also want to use a dependent package that updates slowly. For example: packages B and C both depend on package A, B hasn't tagged a release since A 1.4 came out, and C needs a feature first released in A 1.8. You won't be able to use B and C in the same environment until B's author has tested and tagged a release that declares compatibility with A 1.8.
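To make the scenario concrete, here's a minimal sketch using Project.toml-style `[compat]` entries. The syntax is the later Julia Pkg format, used purely as an illustration; packages A, B, and C and the version numbers are the hypothetical ones from the paragraph above.

```toml
# Package B's latest tagged release (hypothetical): it was tagged before any
# A 1.5 existed, so under the proposal its compat claim can only cover the
# versions of A that were published and testable at release time.
[compat]
A = "1.0 - 1.4"    # inclusive of the whole 1.4.x series, nothing newer
```

```toml
# Package C (hypothetical): needs a feature first released in the A 1.8 series.
[compat]
A = "~1.8"         # the 1.8.x series only
```

The feasible versions of A for B stop at the 1.4 series while C requires the 1.8 series, so the intersection is empty: the resolver has nothing it can install once both B and C are in the same environment, until B tags a release with wider bounds.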

Automatically testing reverse dependencies, and automatically tagging new downstream releases with wider bounds when those tests pass, could help here, but we don't have that infrastructure yet (and requiring such infrastructure for a set of packages to progress cohesively may be a burden to place on organizations that want to run their own registry). For packages that can't be tested on CI, or that start failing in the automated test runs, you end up needing to involve the authors of even sporadically-developed packages any time their dependencies put out new feature releases that people want to use.

We'd also need much better error messages and suggested fixes when dependency resolution fails to find a feasible set of versions. Resolution failure is luckily pretty rare right now, but it can be very confusing when it happens. Downgrading or being held back to old versions does happen now with upper bounds, and being stricter about them would make it more common. If a set of versions that are known to work can be installed, we should do that, but I fear the choice will often be between allowing untested versions to be installed and erroring when the user tries to do so. When those are the only choices, I think the former has a higher chance of letting the user get things done (or figure out how to fix the problems).

https://www.well-typed.com/blog/2014/09/how-we-might-abolish-cabal-hell-part-1/ is worth at least skimming, as Haskell's ecosystem has gone through many similar issues. By going from the current scheme, where upper bounds are applied only when problems are already known or the package author chooses to be conservative, to one where strict, tested bounds are the only thing allowed to be released, we're moving from the Julia equivalent of "Type errors" or "Compile errors" to the equivalent of "solver failure."

Lastly, this makes package authors' adherence to semver (or lack thereof) much more consequential. We do want to encourage those practices and get people thinking more about them, but I think social expectations and more thorough documentation are safer ways to get there right now than baking them into the behavior of the package manager.
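As a small illustration of where that semver dependence sneaks in (same hypothetical names and Project.toml-style syntax as above): even a strict, tested compat claim on a minor series necessarily covers patch releases that don't exist yet.

```toml
# Claiming compatibility with the A 1.8 series also covers future patches
# (1.8.1, 1.8.2, ...) that haven't been published or tested. That is only
# safe if A's author reserves patch releases for non-breaking fixes, i.e.
# actually follows semver.
[compat]
A = "~1.8"
```

In other words, even the strict scheme bakes a semver assumption into the package manager at the patch level, which is exactly the kind of dependence on authors' versioning discipline described above.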
