
Revert CUDA 12.8 shared workflow branch changes #6282

Merged
merged 1 commit into rapidsai:branch-25.02 on Jan 31, 2025

Conversation

vyasr (Contributor) commented on Jan 31, 2025

This PR points the shared workflow branches back to the default 25.02 branches.

xref: rapidsai/build-planning#139

@vyasr vyasr requested a review from a team as a code owner January 31, 2025 07:03
@vyasr vyasr requested a review from bdice January 31, 2025 07:03
@vyasr added the non-breaking (Non-breaking change) and improvement (Improvement / enhancement to an existing function) labels and removed the improvement label on Jan 31, 2025
betatim (Member) commented on Jan 31, 2025

I don't think the failure in https://github.com/rapidsai/cuml/actions/runs/13068021505/job/36466764589?pr=6282 is related to the changes in this PR. However, we also haven't merged anything in the last few days, so the test result shouldn't have changed since e.g. #6266.

This is the output:

   File "/opt/conda/envs/test/lib/python3.12/site-packages/pluggy/_manager.py", line 120, in _hookexec
    return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/test/lib/python3.12/site-packages/pluggy/_callers.py", line 139, in _multicall
    raise exception.with_traceback(exception.__traceback__)
  File "/opt/conda/envs/test/lib/python3.12/site-packages/pluggy/_callers.py", line 103, in _multicall
    res = hook_impl.function(*args)
          ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/test/lib/python3.12/site-packages/_pytest/python.py", line 194, in pytest_pyfunc_call
    result = testfunction(**testargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/__w/cuml/cuml/python/cuml/cuml/tests/test_kmeans.py", line 174, in test_weighted_kmeans
    assert cu_score - sk_score <= cluster_std * 1.5
AssertionError: assert (-2417.9619140625 - -14819.1552734375) <= (1.0 * 1.5)

At first glance both the cuml and scikit-learn estimators use a seed, as does the data generation. For scores the rule is "bigger is better", so in this case the cuml score is the better one. I remember discussing this with @dantegd a while back: cuml's score being better and CI failing because of it. Do you remember where that was? Should we apply the same fix here?

It is curious, though, that the results of the two estimators would suddenly differ this much.
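For anyone looking at this later, here is a minimal sketch of the comparison the failing assertion performs. The data generation and estimator parameters below are assumptions for illustration, not the actual test code; only the final assert is taken from the traceback above. Both estimators' `score` is negative inertia, so a larger (less negative) value means a tighter clustering.

```python
# Minimal sketch (assumed setup, not the actual test_weighted_kmeans code):
# fit both estimators on the same weighted blobs, then compare their scores.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans as skKMeans
from cuml.cluster import KMeans as cuKMeans

cluster_std = 1.0  # assumed value; matches the 1.0 seen in the assertion output
X, _ = make_blobs(
    n_samples=1000, centers=5, cluster_std=cluster_std, random_state=0
)
weights = np.random.RandomState(0).uniform(size=X.shape[0])

cu_score = cuKMeans(n_clusters=5, random_state=0).fit(
    X, sample_weight=weights
).score(X)
sk_score = skKMeans(n_clusters=5, random_state=0).fit(
    X, sample_weight=weights
).score(X)

# The assertion from the traceback: it only tolerates the cuml score being
# slightly larger than the scikit-learn one, so a cuml result that is much
# better (as in the CI log above) also trips the assert.
assert cu_score - sk_score <= cluster_std * 1.5
```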

bdice (Contributor) commented on Jan 31, 2025

/merge

bdice (Contributor) commented on Jan 31, 2025

@betatim Your observations sound right; I'll leave it to you and @dantegd to figure out what to do. The test failures seem to be in optional jobs, so I hope they do not block this merge.

@AyodeAwe AyodeAwe merged commit 1eb52c7 into rapidsai:branch-25.02 Jan 31, 2025
70 of 74 checks passed