
Improve dependency resolution performance by not clearing caches when backtracking #7950

Merged: 4 commits into python-poetry:master on May 23, 2023

Conversation

@chriskuehl (Contributor)

This PR improves dependency resolution performance for conflicts involving packages with many versions. It works by preserving parts of the DependencyCache and the contradicted_incompatibilities caches when backtracking.

On my machine with fully populated caches, the performance of poetry update urllib3 (in a repo where this causes conflicts with boto3/botocore) goes from ~58 minutes to ~3 minutes with these changes.

Background

I've been investigating reports of slow dependency resolution after urllib3 2.x was released, similar to #7856 (comment). On my machine it takes almost an hour for Poetry to find a solution even with populated caches. I noticed that the time spent evaluating each solution increases a lot (evaluating the 10th solution takes 20ms/solution; by the 1000th solution it's taking 7000ms/solution).

After a lot of profiling, I found two possible improvements involving these two caches. Currently both caches are cleared when backtracking, which causes a huge amount of duplicate work (almost all of the time is spent in VersionRange.allows according to flameprof).

Some reading of the algorithm docs suggests to me that we could improve this a lot by reverting the cache states to the state of the decision level we backtrack to, rather than wiping out the caches altogether. I updated DependencyCache and the contradicted_incompatibilities cache to track the decision level at which each entry is added, and to delete only the rolled-back levels when backtracking.
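As a concrete illustration of that idea, here is a minimal sketch (the class and method names are hypothetical, not Poetry's actual implementation) of a cache that records the decision level each entry was added at and drops only the rolled-back levels:

```python
# Hypothetical sketch of level-aware cache invalidation; the names are
# illustrative only and do not match Poetry's real classes.
class LevelTrackedCache:
    def __init__(self):
        self._values = {}       # key -> cached value
        self._level_keys = {}   # decision level -> keys added at that level

    def put(self, key, value, level):
        self._values[key] = value
        self._level_keys.setdefault(level, []).append(key)

    def get(self, key):
        return self._values.get(key)

    def backtrack(self, target_level):
        # Instead of clearing everything, drop only the entries recorded
        # at decision levels deeper than the one we roll back to.
        deeper = [lv for lv in self._level_keys if lv > target_level]
        for level in deeper:
            for key in self._level_keys.pop(level):
                self._values.pop(key, None)
```

Entries added at or below the target level survive the backtrack, so they do not have to be recomputed on the next descent.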

The two cache updates are independent changes; I made them in two separate commits so that they can be merged or tested individually if desired.

Performance improvements

All changes were tested against this reproduction repo with a warm cache, running poetry install && poetry update -vv urllib3. All variants reached the same final solution after evaluating 1116 solutions.

| Branch | Seconds in version solving |
| --- | --- |
| master | 3482 |
| DependencyCache only | 2688 |
| contradicted_incompatibilities cache only | 962 |
| Both caches | 192 |

Here is a graph of time taken per solution evaluated:
[Graph of time taken per solution evaluated]

You can see that time taken to evaluate each solution still increases, but much more slowly (~200ms at solution 1115 compared to ~9000ms on master).

Does this actually work?

I think it does, but I am not 100% sure and could use help from someone with a better understanding of the algorithm to confirm.

To try to verify, I implemented a test mode which runs both the old and new cache behavior and asserts the results of each cache operation are identical: https://github.com/chriskuehl/poetry/commits/dependency-resolution-perf-with-tests. I tested this in a bunch of repos and randomly added/updated/removed big dependency trees with only one difference discovered which I'll describe below. From the linked branch you can set VERIFY_INCOMPATIBILITY_CACHE=1 and VERIFY_DEPENDENCY_CACHE=1 when running Poetry operations to enable the assertions.
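The verification mode is essentially differential testing. Here is a stripped-down sketch of the idea (the names are illustrative, not the actual code on the linked branch):

```python
import os

# Differential-testing sketch: use the new cache implementation but, when a
# verification flag is set, also run the old implementation and assert that
# both agree on every lookup. Names here are hypothetical.
class VerifyingCache:
    def __init__(self, old_cache, new_cache):
        self.old = old_cache
        self.new = new_cache
        self.verify = os.environ.get("VERIFY_DEPENDENCY_CACHE") == "1"

    def search_for(self, dependency):
        result = self.new.search_for(dependency)
        if self.verify:
            expected = self.old.search_for(dependency)
            assert result == expected, f"cache mismatch for {dependency}"
        return result
```

Any divergence between the two behaviors then surfaces immediately as an assertion failure during normal Poetry operations.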

The one difference from status quo I found

I found only one difference when asserting the results from the two caches: DependencyCache.search_for can return pre-release versions (for dependencies which do not allow them) on cache misses, but does not return them on cache hits. This happens because (even on master) the upper branch here does not filter out prerelease packages (it filters by the version range constraint only), while the bottom branch filters them out:

```python
packages = self.cache.get(key)
if packages is None:
    # Cache miss: delegate to the provider (which applies pre-release filtering).
    packages = self.provider.search_for(dependency)
else:
    # Cache hit: filter the cached list by the version constraint only.
    packages = [
        p for p in packages if dependency.constraint.allows(p.package.version)
    ]
```

I don't think this is a problem since they will be discarded later, but it does make my assertions sometimes fail as the caches return a different result now that the updated cache has a much higher hit rate. If I temporarily apply this patch then all assertion errors go away:

```diff
diff --git a/src/poetry/puzzle/provider.py b/src/poetry/puzzle/provider.py
index 3c3b738b..ba91c420 100644
--- a/src/poetry/puzzle/provider.py
+++ b/src/poetry/puzzle/provider.py
@@ -289,7 +289,10 @@ class Provider:
             return PackageCollection(dependency, packages)
 
         packages = self._pool.find_packages(dependency)
+        packages = [
+            p for p in packages
+            if not p.is_prerelease() or dependency.allows_prereleases()
+        ]
         packages.sort(
             key=lambda p: (
                 not p.yanked,
```

This makes me think it is not an issue? Note that I did not add this patch to this branch and instead kept the status quo behavior.

@radoering (Member)

Wow, this looks quite promising. However, in some cases, it seems dependency resolution takes significantly longer now.

Example pyproject.toml
[tool.poetry]
name = "app"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
beautifulsoup4 = ">=4.7.1"
boto3 = ">=1.22.12"
botocore = ">=1.25.12"
celery = ">=4.4.7"
click = ">=8.0.4"
confluent-kafka = ">=1.9.2"
croniter = ">=0.3.37"
datadog = ">=0.29.3"
django-crispy-forms = ">=1.14.0"
django-pg-zero-downtime-migrations = ">=0.11"
Django = ">=2.2.28"
djangorestframework = ">=3.12.4"
drf-spectacular = ">=0.22.1"
email-reply-parser = ">=0.5.12"
google-api-core = ">=2.10.1"
google-auth = ">=1.35.0"
google-cloud-bigtable = ">=2.11.3"
google-cloud-core = ">=2.3.2"
google-cloud-functions = ">=1.8.1"
google-cloud-pubsub = ">=2.13.6"
google-cloud-spanner = ">=3.20.0"
google-cloud-storage = ">=2.5.0"
googleapis-common-protos = ">=1.56.4"
google-crc32c = ">=1.3.0"
isodate = ">=0.6.1"
jsonschema = ">=3.2.0"
lxml = ">=4.6.5"
maxminddb = ">=2.0.3"
mistune = ">=2.0.3"
mmh3 = ">=3.0.0"
packaging = ">=21.3"
parsimonious = ">=0.8.0"
petname = ">=2.6"
phonenumberslite = ">=8.12.0"
Pillow = ">=9.2.0"
progressbar2 = ">=3.41.0"
python-rapidjson = ">=1.4"
psycopg2-binary = ">=2.8.6"
PyJWT = ">=2.4.0"
python-dateutil = ">=2.8.1"
python-memcached = ">=1.59"
python-u2flib-server = ">=5.0.0"
fido2 = ">=0.9.2"
python3-saml = ">=1.14.0"
PyYAML = ">=5.4"
rb = ">=1.9.0"
redis-py-cluster = ">=2.1.0"
redis = ">=3.4.1"
requests-oauthlib = ">=1.2.0"
requests = ">=2.25.1"
rfc3339-validator = ">=0.1.2"
rfc3986-validator = ">=0.1.1"
sentry-arroyo = ">=2.4.0"
sentry-relay = ">=0.8.15"
sentry-sdk = ">=1.11.0"
snuba-sdk = ">=1.0.3"
simplejson = ">=3.17.6"
statsd = ">=3.3"
structlog = ">=21.1.0"
symbolic = ">=10.2.0"
toronado = ">=0.1.0"
typing-extensions = ">=3.10.0.2"
ua-parser = ">=0.10.0"
unidiff = ">=0.7.4"
urllib3 = { version = ">=1.26.9,<2", extras = ["brotli"] }
brotli = ">=1.0.9"
pyuwsgi = "==2.0.20.0"
zstandard = ">=0.18.0"
msgpack = ">=1.0.4"
cryptography = ">=38.0.3"
billiard = ">=3.6.4"
kombu = ">=4.6.11"
grpcio = ">=1.47.0"
hiredis = ">=0.3.1"
cffi = ">=1.15.0"
cssutils = ">=2.4.0"
cssselect = ">=1.0.3"
phabricator = ">=0.7.0"
selenium = ">=4.1.5"
sqlparse = ">=0.2.4,<=0.3.0"
| Variant | Time for `poetry lock` (s) |
| --- | --- |
| master | 24 |
| PR | 39 |

In another project (unfortunately containing some private dependencies) time for dependency resolution increases from about 200 s to 360 s.

(Measurements with warm cache.)

@dimbleby (Contributor)

Speculating wildly: my first worry on looking at this is the linear scan through the many per-level caches. Perhaps that's the problem in the examples where things get worse.

@dimbleby (Contributor)

(I think that what is implemented here could more compactly be a https://docs.python.org/3/library/collections.html#collections.ChainMap - but I don't think that would help with the linear scan, if that is indeed the problem)
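For reference, a small sketch of what the ChainMap shape would look like for per-decision-level caches:

```python
from collections import ChainMap

# Sketch of the ChainMap suggestion: one child mapping per decision level,
# so backtracking is simply discarding the newest maps.
levels = ChainMap({})          # decision level 0
levels["a"] = 1

levels = levels.new_child()    # enter decision level 1
levels["b"] = 2

# Lookups search the maps newest-to-oldest (the linear scan in question):
assert levels["a"] == 1 and levels["b"] == 2

# Backtrack to level 0 by dropping the most recent child map:
levels = levels.parents
assert "b" not in levels
assert levels["a"] == 1
```

Writes always land in the first map, so each decision level's additions stay isolated; lookups, however, still walk the chain of maps from newest to oldest.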

@chriskuehl (Contributor, Author)

Agreed, I think that's probably the cause. I initially wrote this change with one big set that I updated rather than the linear scan of many caches, but changed to this approach since the performance seemed the same in my tests and it was a bit simpler. Most of my tests were with a small number of packages though, so I bet it makes a much larger difference in these cases where the decision levels get much larger.

I'll take a look at this later today and see if that approach works better. Thanks for providing that repro!

@radoering (Member)

Regarding the difference with pre-releases

We allow pre-releases for a constraint that can only be satisfied by a pre-release, even if pre-releases were not allowed explicitly, i.e. dependency.allows_prereleases() returns False.

This happens because (even on master) the upper branch here does not filter out prerelease packages (it filters by the version range constraint only), while the bottom branch filters them out:

The upper branch uses RepositoryPool.find_packages() which uses Repository.find_packages(), which only returns pre-releases if they are allowed or there is no non-pre-release that fulfills the constraint:

```python
    if (
        package.is_prerelease()
        and not allow_prereleases
        and not package.is_direct_origin()
    ):
        ignored_pre_release_packages.append(package)
        continue

    packages.append(package)

self._log(
    f"{len(packages)} packages found for {dependency.name} {constraint!s}",
    level="debug",
)
# If nothing satisfied the constraint, fall back to the otherwise-ignored
# pre-release versions.
return packages or ignored_pre_release_packages
```

I don't think the bottom branch filters them out per se; it only drops versions the constraint does not allow. That branch uses dependency.constraint.allows() to filter packages, which the upper branch also uses via

if package.name == name and constraint.allows(package.version)

If I temporarily apply this patch then all assertion errors go away

This change may be problematic if only pre-releases can satisfy a constraint.

Maybe you can verify the pre-release filtering with an example constraint like black = "<22". (This constraint can only be satisfied by a pre-release of black.)

@chriskuehl (Contributor, Author) commented May 19, 2023

Wow, this looks quite promising. However, in some cases, it seems dependency resolution takes significantly longer now.

I've made two changes which should improve things and pushed these as separate commits. The main difference seemed to actually be the lru_cache hit rate on DependencyCache.search_for decreasing a lot due to the addition of the level parameter, so I've refactored that so that level is not needed in the cached function. I've also refactored both caches to not do linear scanning of per-level caches, so the performance should not get worse as the decision level increases anymore.
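A small illustration (with hypothetical function names) of why including the decision level in the lru_cache key hurts the hit rate:

```python
from functools import lru_cache

# With the level in the key, the same dependency looked up at a different
# decision level is always a cache miss:
calls = []

@lru_cache(maxsize=None)
def search_with_level(dependency, level):
    calls.append(dependency)          # runs only on a cache miss
    return f"results-for-{dependency}"

search_with_level("urllib3", 1)
search_with_level("urllib3", 2)       # miss: key includes the level
assert len(calls) == 2

calls.clear()

# With the level kept out of the key, repeated lookups hit the cache:
@lru_cache(maxsize=None)
def search(dependency):
    calls.append(dependency)
    return f"results-for-{dependency}"

search("urllib3")
search("urllib3")                     # hit: same key as before
assert len(calls) == 1
```

This is why the refactor tracks levels in a separate structure instead of threading the level through the cached function's arguments.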

Here are the timings on my machine for your reproduction (testing with rm -f poetry.lock && poetry lock and averaging 3 runs):

Timings for repro from #7950 (comment)

| Variant | Seconds taken |
| --- | --- |
| master | 15.3 |
| Original PR | 22.2 |
| With improvement to contradicted_incompatibilities cache only | 22.1 |
| With improvement to DependencyCache only | 15.4 |
| With both improvements | 15.2 |

For me this gets the performance back down to matching master, but if you're able to re-test with these changes (on this repro or your private repo) that would be appreciated. It could be that some combinations of CPU or IO speed cause this branch to regress timings on some machines but not others. You may want to try at least 2 runs as I noticed the first run is often 1-2 seconds slower for me.

As you can see in the table above, the contradicted_incompatibilities improvement doesn't make much of a difference for this scenario, but it does seem to make a small but noticeable difference with my original urllib3 conflict reproduction:

Timings for poetry update urllib3 repro from the PR description

| Variant | Seconds taken |
| --- | --- |
| master | 3482 |
| Original PR | 192 |
| With improvement to contradicted_incompatibilities cache only | 170 |
| With improvement to DependencyCache only | 179 |
| With both improvements | 171 |

For now I decided to keep it given the small improvement above, but if it causes problems it wouldn't hurt much to remove it.

(FYI, I updated my test assertions branch at https://github.com/chriskuehl/poetry/commits/dependency-resolution-perf-with-tests-v2 for the updated PR.)

@chriskuehl (Contributor, Author) commented May 19, 2023

The upper branch uses RepositoryPool.find_packages() which uses Repository.find_packages(), which only returns pre-releases if they are allowed or there is no non-pre-release that fulfills the constraint:

Ahh, thanks for the explanation. That helps a lot -- I assumed it must have been filtering those out in subsequent iterations, but it's just that the function above may return additional entries as the constraint gets tighter.

I think the existing cache implementation has some inconsistencies around pre-release package versions which may make it hard to match its behavior exactly. The bottom branch does filtering only and has no "if the package version list is empty, add in compatible pre-release versions" logic, so its results may not match what we would get if the cache were disabled and we always called provider.search_for.

For example if I have a constraint on black that forces a pre-release version at the top level:

```toml
[tool.poetry.dependencies]
python = "^3.11"
black = "<22"
```

...then I can successfully poetry lock on master (it selects black==21.12b0).

But if instead I have the same restrictive black dependency declared at the second level, like this:

```toml
[tool.poetry.dependencies]
python = "^3.11"
black = "*"
# This is a wheel I created that has `black<22` as its only requirement.
pkg-that-requires-old-black = {path = "test-pkg/dist/pkg_that_requires_old_black-1.0.0-py3-none-any.whl"}
```

then poetry lock fails on master as well as on my branch:

  SolverProblemError

  Because pkg-that-requires-old-black (1.0.0) @ file:///home/ckuehl/proj/poetry-urllib3-dep-resolution-repro/test-pkg/dist/pkg_that_requires_old_black-1.0.0-py3-none-any.whl depends on black (<22) which doesn't match any versions, pkg-that-requires-old-black is forbidden.
  So, because app depends on pkg-that-requires-old-black (1.0.0) @ file:///home/ckuehl/proj/poetry-urllib3-dep-resolution-repro/test-pkg/dist/pkg_that_requires_old_black-1.0.0-py3-none-any.whl, version solving failed.

If I disable the DependencyCache, it works properly on both master and my branch (selects 21.12b0 again). This suggests to me that the cache might not be exactly correct on master either.

I think I could fix this in this PR by adding some logic like:

```diff
diff --git a/src/poetry/mixology/version_solver.py b/src/poetry/mixology/version_solver.py
index ec44b0fa..c149508c 100644
--- a/src/poetry/mixology/version_solver.py
+++ b/src/poetry/mixology/version_solver.py
@@ -82,7 +82,7 @@ class DependencyCache:
                 for p in cache_entries[-1]
                 if dependency.constraint.allows(p.package.version)
             ]
-        else:
+        if not cache_entries or not packages:
             packages = self.provider.search_for(dependency)
 
         return packages
```

That patch needs more testing and to verify its performance impact, but seems to work (fixes the install error for the repro above with pkg-that-requires-old-black and also works with plone = '*' which was the other dependency I was testing with).
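Viewed in isolation, the patched lookup amounts to something like this standalone sketch (the callable parameters stand in for Poetry's real constraint check and provider call, and are hypothetical):

```python
# Standalone sketch of the patched cache lookup: filter the most recent
# cached entry by the (tighter) constraint, but fall back to the provider
# when there is no cache entry *or* when filtering leaves nothing, since
# the provider may then surface otherwise-ignored pre-release versions.
def search_for(cache_entries, constraint_allows, provider_search):
    packages = []
    if cache_entries:
        # Narrow the most recently cached package list to the new constraint.
        packages = [p for p in cache_entries[-1] if constraint_allows(p)]
    if not cache_entries or not packages:
        # Empty result: re-query the provider rather than trusting the cache.
        packages = provider_search()
    return packages
```

The key change from the `else:` version is the second condition: an empty filtered result now also triggers the provider query, matching what would happen with the cache disabled.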

Do you think it makes sense to add that to this branch? It does seem more correct to me, but it won't match 100% the behavior in master.

@radoering (Member)

For me this gets the performance back down to matching master, but if you're able to re-test with these changes (on this repro or your private repo) that would be appreciated.

I can confirm that the time for dependency resolution for the repro is as fast as on master. Testing with my private repo (multiple runs), it's still about 5 seconds slower than master. However, I think that's negligible considering the resolution time of about 200 s.

But if instead I have the same restrictive black dependency declared at the second level [...]

Good catch. I'd say that's a bug.

That patch needs more testing and to verify its performance impact, [...]

I just tested the change with some projects and did not experience any performance regression. If it's the same for you, it should be good.

Do you think it makes sense to add that to this branch? It does seem more correct to me, but it won't match 100% the behavior in master.

It would probably be best if we added the fix and a test in a separate PR (before this one?). The fix should be similar, just without the performance improvement of this PR. A test in test_solver.py shouldn't be too hard. (I can assist with the test if desired.)

@radoering (Member)

By the way, I tend to keep the 4 commits separated when merging this into master, so please rebase (not merge) in case you want to update your branch (and don't squash the commits 😉).

@chriskuehl (Contributor, Author)

It would probably be the best, if we add the fix and a test in a separate PR (before this one?). The fix should be similar without the performance improvement of this PR. A test in test_solver.py shouldn't be too hard. (I can assist with the test if desired.)

Created a new PR: #7978

@radoering radoering merged commit 9217975 into python-poetry:master May 23, 2023
radoering pushed a commit that referenced this pull request May 23, 2023
@radoering added the impact/backport (Requires backport to stable branch) and backport/1.5 labels May 24, 2023
@poetry-bot (bot) commented May 24, 2023

The backport to 1.5 failed:

The process '/usr/bin/git' failed with exit code 1

To backport manually, run these commands in your terminal:

```bash
# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-1.5 1.5
# Navigate to the new working tree
cd .worktrees/backport-1.5
# Create a new branch
git switch --create backport-7950-to-1.5
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 92179755f8c16aa93d65ad708a4e6d3dbd15957c
# Push it to GitHub
git push --set-upstream origin backport-7950-to-1.5
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-1.5
```

Then, create a pull request where the base branch is 1.5 and the compare/head branch is backport-7950-to-1.5.

chriskuehl added a commit to chriskuehl/poetry that referenced this pull request May 24, 2023
radoering pushed a commit that referenced this pull request May 24, 2023
@github-actions (bot) commented Mar 3, 2024

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.