Strategy for running CI-tests against GPU machines #227
The basic setup for Dask's gpuCI consisted of:
Once enabled, gpuCI runs on all PRs opened by members on an allowlist (in Dask's case, this contains all members of the organization as well as some other people added using the commands listed here). Initially, there was some noise as the gpuCI bot pinged a large chunk of the open Dask PRs for approval to run GPU tests, but we should be able to avoid that here by populating the allowlist before enabling gpuCI. Another thing to consider is that gpuCI on Dask and Distributed uses Docker images (generated nightly based on the specifications here) to speed up the overall test time; we might want to do something similar for dask-sql, though I'm interested in whether it would be possible to reuse one of those images here.
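For reference, a common way to keep GPU tests out of the regular CPU-only jobs is a pytest marker combined with an opt-in command-line flag, so that ordinary GitHub Actions runs skip them while the gpuCI workers pass the flag. A minimal sketch of that pattern (the marker name `gpu` and the flag `--rungpu` are illustrative assumptions, not necessarily what Dask's gpuCI setup uses):

```python
# conftest.py -- illustrative sketch, not Dask's actual configuration.
import pytest


def pytest_addoption(parser):
    # Opt-in flag: GPU tests only run when the CI job explicitly asks
    # for them, e.g. `pytest --rungpu` on a machine with a GPU.
    parser.addoption(
        "--rungpu", action="store_true", default=False,
        help="run tests marked with @pytest.mark.gpu",
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--rungpu"):
        return  # GPU machine: run everything, including GPU tests
    skip_gpu = pytest.mark.skip(reason="needs --rungpu option to run")
    for item in items:
        if "gpu" in item.keywords:
            item.add_marker(skip_gpu)
```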
Closing this as we now have gpuCI set up for GPU testing.
With the recent integration of dask-cudf into dask-sql, it would be wise to start testing against GPU libraries and machines in the automated builds triggered by GitHub Actions.
@ayushdg mentioned on #220 that:
I would be very interested to know whether this is also an option for dask-contrib packages (and whether it would be supported by RAPIDS), and what the experience with the setup in the main Dask repository has been so far. @quasiben, @jrbourbeau, could you briefly summarize the current setup in the main Dask repository and how it has worked out? Thanks!
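To make that concrete, an individual GPU test could look something like the sketch below; `pytest.importorskip` guards the imports so the file is collected harmlessly on machines without the RAPIDS libraries installed. The `gpu` marker matches the illustrative conftest above and is an assumption, not an established dask-sql convention:

```python
# test_gpu_example.py -- illustrative sketch, not an actual dask-sql test.
import pytest

# Skip the whole module when the GPU stack is unavailable, so the same
# file can be collected on CPU-only CI machines without failing.
cudf = pytest.importorskip("cudf")
dask_cudf = pytest.importorskip("dask_cudf")


@pytest.mark.gpu
def test_groupby_sum_on_gpu():
    # Build a small cudf-backed Dask collection and check a simple
    # aggregation, mirroring what a CPU test would do with pandas.
    gdf = cudf.DataFrame({"key": [1, 1, 2], "value": [10, 20, 30]})
    ddf = dask_cudf.from_cudf(gdf, npartitions=1)
    result = ddf.groupby("key").value.sum().compute()
    assert result.loc[1] == 30
    assert result.loc[2] == 30
```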