Add a reproducible Bazel build to CI #1243
Realized that even if the headers requiring a sysroot in ring are removed, we'll still need one, because rules_rust uses a little bit of C++ that requires a sysroot too.
Is there any way to do this incrementally? What I'd really like to try is to move the entire build system, including all the Perl and preassembly stuff in particular, into Bazel, leaving a much simpler build.rs. That is, I want to remove the "if building from a Git checkout, generate the assembly; otherwise use the pregenerated files" logic. It would be nice if we could get to that point, where the last step of the Bazel build is trivial.

I'm also trying to figure out how I could do a …
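For the perlasm part of that, a plain genrule might be enough. Here is a minimal sketch, assuming a perlasm script that takes a flavour argument and an output path; the script path, "elf" flavour, and output name below are illustrative, not ring's actual layout:

```starlark
# Sketch only: generate an assembly file at build time, so build.rs no
# longer needs the "generate vs. use pregenerated" branch. The .pl path,
# "elf" flavour, and output name are hypothetical. perl comes from the
# host here; a fully hermetic build would declare it as a toolchain.
genrule(
    name = "aesni_x86_64_asm",
    srcs = ["crypto/fipsmodule/aes/asm/aesni-x86_64.pl"],
    outs = ["aesni-x86_64-elf.S"],
    cmd = "perl $(location crypto/fipsmodule/aes/asm/aesni-x86_64.pl) elf $@",
)
```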
Also, I want to remove all the dependency checking from build.rs. It doesn't seem to be 100% correct in my experience anyway.
Yes, I think we'd want to start with a target/host combo of x64-linux, and leave Cargo as-is.
This can all be done with a platform select rule. This example chooses shared library suffixes, but you can parameterize anything this way: bazel-contrib/rules_foreign_cc#283 (comment). It turns into something more complicated during cross-compiles, but it's all doable.
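For illustration, a select() along these lines picks a shared-library suffix per platform (the config_setting and file names here are made up, not taken from the linked example):

```starlark
# Minimal sketch of per-platform selection; names are illustrative.
# select() resolves per target platform at analysis time.
config_setting(
    name = "macos",
    constraint_values = ["@platforms//os:macos"],
)

config_setting(
    name = "windows",
    constraint_values = ["@platforms//os:windows"],
)

filegroup(
    name = "shared_lib",
    srcs = select({
        ":macos": ["libfoo.dylib"],
        ":windows": ["foo.dll"],
        "//conditions:default": ["libfoo.so"],
    }),
)
```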
I think it would probably be faster to generate BUILD files from Cargo at first, using one of the existing generators.
Is it possible/practical to store the intermediate results of a Bazel run somewhere where they could be restored for the next run? I understand that in a Bazel world, I would use Bazel to download all the dependencies, like the Rust toolchain and nasm and other things, only when they need to change. Ideally, I would run each such task up to the point where the downloads have been done, and then store the state with the downloaded files somewhere, so that my CI would never depend on doing that download.

Or, if I wanted to do that, would I be better off writing a script that downloads everything that needs to be downloaded, storing all of that in a repo (or Docker image), and then having the Bazel build read from that repo when it needs a dependency? In that case, I guess I could use Bazel just as a (secure) downloader to "build" that repo.
Another question I have: it seems like BuildBuddy and maybe other services have naturally better support for Bazel-based builds. But, as a fallback to them, and before I can really even try them out, is there a way to get GitHub Actions to run a Bazel-based build with the same or better performance than the current GitHub Actions configuration? I'm guessing that, at least to start, this would be a clean initial Bazel build for each run. Is the initial clean build under Bazel likely to be significantly slower than what we're doing now?
No, you'll only pay the price to install Bazel and start it, which should be a few seconds. I've only tried this method on Travis, but it worked fine. Something like BuildBuddy will be necessary to get really good caching of intermediate builds and test results.
This is what Bazel calls "Remote Caching" and the more-ambitious "Remote Build Execution". Basically, the root WORKSPACE file records all of your toolchains (including Rust) with a SHA256 hash, and all of the build products and test results depend on that. If you change your Rust version in the WORKSPACE file, then you won't get many cache hits.
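As a concrete sketch, pinning an external archive in WORKSPACE looks like this (the URL and hash are placeholders, not real values):

```starlark
# WORKSPACE sketch: every external archive is pinned by URL and SHA256,
# so cached build products are keyed to the exact toolchain contents.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "rules_rust",
    # Placeholder hash and URL; substitute the real release values.
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
    urls = ["https://github.com/bazelbuild/rules_rust/releases/download/X.Y.Z/rules_rust-vX.Y.Z.tar.gz"],
)
```

If either the archive's contents or your pinned hash changes, the fetch fails rather than silently picking up different bits.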
It seems like what I'm trying to do to bootstrap this in GitHub Actions is a Bazel "offline build" where I would use a pre-populated download area. The thing I'm trying to achieve with this is "Have a single place under my control where the exact external dependencies are stored, which allows me to trace over time how the dependencies change, and which doesn't require me to ever access a third-party server (beyond GitHub) unless/until I need to update to a new version of a tool."
This is possible, though I'd suggest starting with something more basic, like my link, and layering the offline mirroring on afterwards. I guess the question is whether a GitHub tar.gz with a SHA256 (which Bazel will enforce) is enough to start. That way, your tools can't change unless you update the hashes / URLs.
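For the offline bootstrap specifically, the knobs involved would be Bazel's repository cache and distdir. A .bazelrc sketch, with placeholder paths (the exact setup is an assumption to verify, but --repository_cache and --distdir are real Bazel options):

```
# .bazelrc sketch: satisfy external downloads from local, pre-populated
# directories instead of the network. Paths are placeholders.
build --repository_cache=/ci/bazel/repo-cache
fetch --repository_cache=/ci/bazel/repo-cache
build --distdir=/ci/bazel/distdir
fetch --distdir=/ci/bazel/distdir
```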
Regarding the clang dependency, and its libxml2 and libtinfo5 dependencies, here's my current thinking: we should prefer to use the official clang build over building it ourselves, if practical, because it'd be hard to verify that a built-from-source clang was built correctly. I read that the libxml2 dependency is only needed when building from source, so using the official binary would potentially solve the issue with libxml2? For terminfo, it is possible to disable the dependency when building from source using a CMake option.
The crustls patches I posted do that, indirectly, through a Bazel rule.
I don't think that is the case. Using the official binaries, I believe they require libxml2.so for (unused) profiling features, and I believe terminfo is similar. Ideally, clang would provide official builds without these dependencies. Maybe they do; I have not looked. I got things down to this Dockerfile, which has no C or C++ compiler, but downloading an official Clang build still required the two shared libraries. The way to test this would be to try and run an official clang build on a minimal image.
I asked the LLVM list about this, and they do not provide official builds without these dependencies. The choices:

1.) Build LLVM without these libraries (doable in CMake, they said)
2.) Build inside a Docker image that provides these libraries
3.) Build these libraries for each Bazel platform and check them in somewhere

I think I'd recommend #2, checking the Docker image into git if you want.
I would like this in the abstract, but my experience with Bazel back when I was working on this was kind of frustrating, and I haven't been following it since. We should also consider buck2 and others. So I'm closing this, since nobody is working on it. But we did learn a lot trying this.
This build should run on at least x86_64 Linux, use a sysroot, and run twice: once from a Bazel RBE cache, and once from scratch. The resulting artifacts should then be compared to check that they are identical.
A combination of Grail's toolchain rules and this sysroot should do the trick. The Grail toolchain will allow choosing the LLVM version to target, and I think the approach in the `sysroot` repo is the right one: target whatever Chromium's minimum Debian is.

So far, I have ring building on this Dockerfile using this approach, with main-branch copies of several Bazel tools. It probably doesn't make sense to do a PR until they tag releases with everything needed. It will take at least: Bazel itself, @rules_rust, and perhaps @rules_foreign_cc.
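For reference, registering the Grail LLVM toolchain in WORKSPACE looked roughly like this at the time; the version number is a placeholder, and the load paths should be checked against the grailbio/bazel-toolchain release actually in use:

```starlark
# WORKSPACE sketch: hermetic Clang/LLVM toolchain via Grail's rules.
# llvm_version is a placeholder; pick whatever matches the sysroot.
load("@com_grail_bazel_toolchain//toolchain:rules.bzl", "llvm_toolchain")

llvm_toolchain(
    name = "llvm_toolchain",
    llvm_version = "11.0.0",
)

load("@llvm_toolchain//:toolchains.bzl", "llvm_register_toolchains")

llvm_register_toolchains()
```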
I'd like to cut down the `FROM` image to Debian and get rid of the two shared library dependencies, at least for release builds. Note that the Docker image does not contain `gcc`, `clang`, C headers, or Rust dependencies.