Declare dependencies in metadata #2764
Conversation
…ersion when needed. Doesn't work because pkg_resources can't load entry points from a distribution present only as a wheel.
I think I'd prefer the vendored blobs not to be checked into Git but rather gitignored. There could be a script that would populate them separately. Thinking along the lines of what I attempted in python/cpython#12791.
I'm still unsure why the test failures are occurring. They don't occur for me locally, even on Python 3.6. The failures are all in the …
Running tests in a Docker VM (Ubuntu), I'm able to replicate the failures on all Pythons 3.6-3.9. It does pass on Python 3.10, but only because the failing tests get skipped due to #2599.
The failures have at least two modes. On pip 9.0.1, it fails early, checking for setuptools being installed. On 18.3, the tests fail when attempting to install the dependencies: …
I believe the issue is that prior to pip 20, there wasn't adequate support for building source packages without setuptools being implicitly installed. I think I'm prepared to declare non-support for installation from source with these older pips.
I believe the reason the tests don't fail for me locally is that I have a local pip cache with things like 'ordered-set' already built into a wheel. Edit: No, it's not that. I checked, and 'ordered-set' is not in the cache and still installs on macOS.
I'd consider something along these lines. I think we can consider that change separately, no?
Force-pushed from c3da370 to 95e3d3e.
FFS. Now tests are failing with some error in flake8.
Force-pushed from 95e3d3e to 6c137ff.
Aaah. It's because there was a lint error (unused import) and the error handling between pytest and flake8 is broken (PyCQA/flake8#1419).
Now two tests (those depending on …) fail.
Strangely, when I create a virtualenv locally, I don't end up with pyparsing in it. Is it …?
Okay, I see it now. pyparsing is present in …
So why is a package from the system site packages leaking into a (bare) virtualenv?
Ugh. The problem is that …
Okay, so in pytest-virtualenv, it's crucial to pass …
That issue should be fixed in 083fc74.
I want to release this change separately from other changes, to allow it to be yanked or backed out, as I suspect there's a high probability of regressions. I did try implementing the _vendored directory with simply the …
Change the vendoring mechanism separately? Yes. Do so after merging this PR? No. Git accumulates every change to those blobs, growing the tree whenever the binaries are added, updated, or removed. So it's better to do this sooner rather than later, and best before this PR merges, so that it can be rebased on top and doesn't permanently inflate the bare repo size.
Point taken, but what happens if the technique for bundling dependencies has negative downstream effects? Then the only option is to back out the whole change and not just the bundling behavior. I agree that it's undesirable to have this bulk content in the repo, so I would like to avoid that. I'd also like to be able to support pip install from GitHub, e.g.: …
It's not obvious to me how that could work if the content isn't already bundled in the repo. Plus, I observe that the content is currently already bundled in both …. I can rewrite the history to remove .whl files for now, as they're unusable.
… aren't littering the repo.
Force-pushed from db2f809 to 62b5d9e.
That's an interesting question. Note that downstream distro packages absolutely love unbundling the vendored deps and depend on packages that are already packaged for their ecosystem. So I'd say they already have to deal with stuff like this.
May I suggest implementing the download logic + caching in the PEP 517 in-tree build backend? If the dirs are already populated, it'd skip the download (after checking the checksums, I guess). P.S. Apparently pip's vendoring script is now available on PyPI: https://github.com/pypa/pip/tree/main/src/pip/_vendor#automatic-vendoring / https://pypi.org/project/vendoring/. Maybe it's worth checking it out.
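To illustrate the idea, a rough sketch of such an in-tree backend follows. Every name, URL, and checksum here is a hypothetical placeholder, not something from this PR:

```python
# _in_tree_backend.py (hypothetical): populate the vendored wheels, then
# delegate to the real setuptools backend.
import hashlib
import pathlib
import urllib.request

VENDOR_DIR = pathlib.Path(__file__).parent / '_setuptools_vendored'

# wheel filename -> (URL, expected sha256); placeholder values only
WHEELS = {
    'ordered_set-3.1.1-py2.py3-none-any.whl': (
        'https://files.pythonhosted.org/packages/.../ordered_set-3.1.1-py2.py3-none-any.whl',
        '<expected-sha256>',
    ),
}


def _sha256(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()


def _populate():
    """Download any missing wheels; skip (cache hit) when the checksum matches."""
    VENDOR_DIR.mkdir(exist_ok=True)
    for filename, (url, digest) in WHEELS.items():
        target = VENDOR_DIR / filename
        if target.exists() and _sha256(target) == digest:
            continue
        urllib.request.urlretrieve(url, target)
        if _sha256(target) != digest:
            raise RuntimeError(f'checksum mismatch for {filename}')


_populate()

# re-export the standard PEP 517 hooks
from setuptools.build_meta import *  # noqa: E402,F401,F403
```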
Yeah, I'm mostly unhappy with things like whl/exe at this point. It'd be great to catch these with linters. Maybe there's a pre-commit hook repo disallowing blobs?
setuptools/__init__.py (outdated)

```diff
@@ -13,6 +13,7 @@
 from ._deprecation_warning import SetuptoolsDeprecationWarning

+import _setuptools_vendored  # noqa: F401
```
I recommend adding a code comment with an explanation of what this does and why it's important to keep it in this specific position.
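For example, something along these lines (the comment wording is only a suggestion):

```python
from ._deprecation_warning import SetuptoolsDeprecationWarning

# Make vendored copies of the declared dependencies importable when they
# aren't installed in the environment (e.g. while bootstrapping, before
# the metadata-declared dependencies could be installed). This must run
# before any module that imports those dependencies, hence its position
# at the top of the package.
import _setuptools_vendored  # noqa: F401
```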
(new file)

```diff
@@ -0,0 +1 @@
+Setuptools now declares its dependencies in metadata but also vendors libraries satisfying those dependencies if they're not present. This change means that Setuptools cannot install from source under pip earlier than 20.
```
@hroncok FYI
Thanks for the ping. Will read the description and discussion here, but this sounds terrifying, complicated, and quite error-prone.
In Fedora, the resulting RPM package with setuptools would hard-depend on the declared packages. That means users who need setuptools at runtime would need to have e.g. old packaging installed (and we have already updated packaging). This puts us in dependency hell: we would need to revert to an older version of packaging and never update it again until setuptools supports the newer one. This could happen again with any of the other deps in the future.

OTOH, if we hack around that and replace the hard dependencies with weak ones, setuptools would only work properly if at least one dependency is missing. Otherwise, the check for whether the vendored dependencies are present would find the newer installed packaging and not add the vendored one to sys.path, leading to unexpected behavior. Again, this could happen with any other dep in the future as well, or if an extremely old version of something is installed.

Both could happen in any other environment (e.g. in virtual environments). Consider a new venv/virtualenv with setuptools installed in it, which is the status quo (and at least for venv it seems to be the desire for the future as well). Would ensurepip need custom hacks to install the setuptools wheel without deps? Users who pip-upgrade after venv creation would then get all the deps installed, and they would have dependency clashes when they need e.g. new packaging.

Even if the check uses importlib.metadata to verify that proper versions are installed (it really should do that), it means we would always need to ship the bundled libs, as we never know whether they are needed. That essentially combines the bad things of both vendoring and declaring deps: …
tl;dr I don't understand why you are considering this. It feels worse than the status quo. Even if you disagree with my point of view, I kindly ask you to hold off on shipping this until next week, so I can discuss it with the Python SIG in Fedora and gather more input. Thank you. (Excuse the typos, I wrote this on my phone.)
Good point. I thought it was obvious, but it's clearly not, so I at least owe you the courtesy of writing up the motivations. I'll do that in a separate issue.
For sure. I'm happy to move somewhat cautiously on this one, because it's likely to break things and I want to avoid having to roll it back.
In #2825, I've collected (on the fly) the motivations that have driven me in this direction.
Agreed that's a problem, but it's hardly solved by having vendored dependencies. Instead, the vendored dependencies mean there are multiple copies of possibly incompatible dependencies, masking the conflict.

I'd like for Setuptools to get first-class support for dependencies, including continuous updates (unpinning), conflict detection, and code sharing. This will mean that Setuptools pinning to a specific packaging version will cause conflicts, but it will make those conflicts apparent and drive the effort to resolve them, as any package must for its dependencies. I would expect a downstream integrator to reject installing Setuptools into an environment where Setuptools' declared dependencies couldn't be satisfied.

Fortunately, for packaging, we have #2822, which promises to unpin the requirement on packaging.
It's separate from this issue, but I would encourage users to aggressively start passing …
No, I would expect these environments to get the setuptools deps. I expect the vendored dependencies to be used very rarely (primarily just long enough for Setuptools to build its own metadata, so the installer can learn which dependencies to install and setuptools can then build with its true dependencies).

This all makes me wonder if this experiment would be easier if setuptools instead used another build backend like flit. I wonder if that would side-step the issue and break the chain.
I considered that, and I agree. Unfortunately, there's a separate bootstrapping issue for anything that's in the _setuptools_vendored logic, so the …

I'm going to move this back into draft mode. I'm not comfortable merging it yet, and I'd like to explore the flit backend approach. My immediate motivation is to get access to importlib_metadata and importlib_resources so that setuptools can loosen its dependency on pkg_resources. My long-term motivation is that Setuptools can be a straightforward package without specialized dependency management.
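For context, that kind of substitution looks roughly like this (a sketch; the entry-point group is only an example):

```python
# pkg_resources style, as used today:
import pkg_resources

version = pkg_resources.get_distribution('setuptools').version
commands = list(pkg_resources.iter_entry_points('distutils.commands'))

# stdlib importlib.metadata (or the importlib_metadata backport) style:
import importlib.metadata

version = importlib.metadata.version('setuptools')
# the group= selection API is available on Python 3.10+ and the backport
commands = importlib.metadata.entry_points(group='distutils.commands')
```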
Thanks. I've read that, and I can see the motivation for wanting to avoid vendored dependencies altogether. But I don't think this way of doing it helps in any way (except to see things breaking while trying).
It is solved by vendored dependencies. Yes, vendored dependencies have different kinds of problems, but they do solve the dependency hell.
I understand. However, starting this effort with dependencies already pinned is a way of doing it that will cause trouble. For explicit Fedora problems, see the next point...
Indeed. That is exactly what would happen in Fedora. Note that we only ship a single version of setuptools in a repository (this is a simplification, but it behaves like that), so if we decide to update setuptools to a version that introduces a pinned dependency on older packaging, we can only install that version of setuptools; and since our RPM build environment always installs packaging, all our packages that require setuptools to build would become unbuildable until the conflict is resolved. Meaning we would not be able to update setuptools to this version at all until then.
I'm glad we do, but this will happen again.
Right, I do like that approach as well, but see https://mail.python.org/archives/list/python-dev@python.org/thread/3BVAUIQOEOXAULHVYQNLLQIZQQETX2EV/
That'll mean that ensurepip and virtualenv will have to bundle six additional wheels. Are the maintainers of ensurepip and virtualenv aware of this?
Have you considered not installing the vendored dependencies with setuptools, but only shipping them in the sdist part of the distribution to allow building setuptools? That seems like a much nicer approach than always shipping them within the setuptools installation.
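One hypothetical way to express that split: include the vendored tree in the sdist (e.g. via MANIFEST.in) while excluding it from what gets installed:

```python
# hypothetical setup.py fragment, not from this PR: the vendored packages
# travel with the sdist (added via MANIFEST.in) but are excluded from
# wheels and installs because they aren't listed in `packages`.
from setuptools import setup, find_packages

setup(
    # ...other metadata...
    packages=find_packages(exclude=['_setuptools_vendored', '_setuptools_vendored.*']),
)
```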
I beg you not to. This would only make the chain much larger and much more painful to handle.
During PEP 517 builds, they will not be present at all, as they are not listed in pyproject.toml (line 2 in 62b5d9e), so the vendored dependencies will be used.
During non-PEP 517 builds, I consider the chance of a conflicting version of at least one dependency quite high.
I am glad that you are not considering merging this now, but I am worried about the flit idea.
It might be easier to wait for Python 3.7 EOL.
That is a very nice idea for the future, but I honestly think this PR is not leading there.
@encukou @pradyunsg @FFY00 Could you please have a look at this? I mean not only the proposed change, but also the discussion. Thanks a lot, folks.
I thought about how to make this simpler. Could it be better to remove some deps first?
I don't know if there are plans to use more deps (or use the existing ones more), but it might make sense to minimize the problem before this change.
I've responded on #2825. tl;dr -- please don't do this yet + consider adopting …
I'm abandoning this effort for now.
I'm attempting once again to allow Setuptools to declare dependencies. Due to known bootstrapping issues, it's not possible to simply declare the dependencies (especially any that require setuptools to install) without addressing the bootstrapping problem.
This change offers a fallback: a directory of vendored dependencies is added to sys.path when Setuptools/pkg_resources is imported or invoked.
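In minimal form, that fallback might look like this (a sketch only; the names and the presence check are simplifications of the PR's actual logic):

```python
# _setuptools_vendored/__init__.py (sketch): add vendored copies of the
# declared dependencies to sys.path only when they're missing.
import importlib.util
import os
import sys

# illustrative subset of the dependencies declared in metadata
REQUIREMENTS = ['packaging', 'more_itertools', 'ordered_set']

VENDOR_DIR = os.path.join(os.path.dirname(__file__), 'vendored')

# Installed copies always take precedence; the vendored directory is only
# consulted when at least one declared dependency cannot be found.
if any(importlib.util.find_spec(name) is None for name in REQUIREMENTS):
    sys.path.append(VENDOR_DIR)
```

Note that this is the "present or not" style of check discussed above; a stricter variant would also verify installed versions via importlib.metadata.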