We should really have a more robust CI for bpftool.
Current status
Build
GitHub CI builds: Building bpftool on Ubuntu, running on PRs and pushes to this repository. Mostly helps check that everything is in order when synchronising the mirror.
test_bpftool_build.sh: Builds bpftool as part of the BPF selftests (hence of the BPF CI). Ensures that basic builds from a number of locations in the kernel repo, with or without passing an output directory, still work as expected.
Selftests
Part of the BPF selftests (and running as part of the BPF CI).
test_bpftool_synctypes.py: Checks that various definitions in kernel headers, bpftool sources, bpftool docs and bash completion remain in sync. Not part of BPF selftests, but explicitly run as part of the BPF CI.
Some other workflows in the current repository, not directly related to the bpftool application.
Wish list
There are a variety of components that we would like to test; the list below is mostly a brain dump. Ideally, we want as much of this testing as possible to happen as part of the BPF CI, meaning we probably want to upstream it and make it part of the BPF selftests. (Or should we have a dedicated repo for bpftool CI and see with the BPF CI folks, to do something similar to libbpf's CI? Maybe for building bpftool, but testing that features work correctly should really go upstream.)
Build
Try various feature sets (LLVM library vs. libbfd, with and without the other features...).
Try various kernel versions. In particular, we're having some issues when bpftool uses new definitions in skeletons for the BPF programs the binary relies on.
Try various distros/architectures. It would be nice to cover aarch64, and also something big-endian.
In the future: Windows?
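To make the build wish list above concrete, here is a sketch of how a CI job could generate its build matrix: feature sets crossed with target architectures. This is a hypothetical layout; the feature knob names (feature-llvm, feature-libbfd) are placeholders standing in for whatever the kernel's tools/build feature-detection framework and bpftool's Makefile actually accept, and should be checked against the sources before use. ARCH/CROSS_COMPILE follow the usual kernel build conventions.

```python
import itertools

# Hypothetical feature combinations -- the variable names are assumptions,
# not the real Makefile knobs.
FEATURE_SETS = [
    {},                                         # default feature detection
    {"feature-llvm": 0, "feature-libbfd": 1},   # force libbfd disassembler
    {"feature-llvm": 1, "feature-libbfd": 0},   # force LLVM disassembler
]

# (ARCH, CROSS_COMPILE prefix); s390 gives us big-endian coverage.
TARGETS = [
    ("x86_64", ""),
    ("arm64", "aarch64-linux-gnu-"),
    ("s390", "s390x-linux-gnu-"),
]

def make_commands():
    """Return one make invocation per (feature set, target) combination."""
    cmds = []
    for features, (arch, cross) in itertools.product(FEATURE_SETS, TARGETS):
        cmd = ["make", "-C", "src"]
        if cross:
            cmd += [f"ARCH={arch}", f"CROSS_COMPILE={cross}"]
        cmd += [f"{k}={v}" for k, v in features.items()]
        cmds.append(" ".join(cmd))
    return cmds

for c in make_commands():
    print(c)
```

A CI workflow would then run each printed command in its own job, so one broken feature combination does not mask the others.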
Selftests
Check that all supported program and map types remain supported, by trying to load minimal objects of each type. I had something in progress on a dedicated branch but I never finished it.
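A per-type load check could look like the sketch below: keep one minimal object file per program type and try to load and pin each of them with bpftool. The object file names and pin paths are made up for illustration; the minimal .o files would have to be built from minimal BPF sources as part of the selftests. When bpftool is not available, the harness falls back to recording the command it would have run (a dry run), which also makes the sketch testable outside a privileged environment.

```python
import os
import shutil
import subprocess

# Hypothetical mapping: program type -> minimal object file (names are
# placeholders; the objects would be built by the selftests).
MINIMAL_OBJS = {
    "socket": "minimal_socket.bpf.o",
    "kprobe": "minimal_kprobe.bpf.o",
    "xdp": "minimal_xdp.bpf.o",
}

def check_prog_types(bpffs="/sys/fs/bpf"):
    """Try to load a minimal object of each program type via bpftool.

    Returns a dict: type -> True/False (load result), or the command
    string when running in dry-run mode (bpftool not installed).
    """
    results = {}
    for prog_type, obj in MINIMAL_OBJS.items():
        pin = f"{bpffs}/ci_{prog_type}"
        cmd = ["bpftool", "prog", "load", obj, pin, "type", prog_type]
        if shutil.which("bpftool") is None:
            results[prog_type] = " ".join(cmd)  # dry run: record the command
            continue
        res = subprocess.run(cmd, capture_output=True, text=True)
        results[prog_type] = res.returncode == 0
        try:
            os.unlink(pin)  # clean up the pinned program, if any
        except OSError:
            pass
    return results
```

The real check would also want to cover map types (`bpftool map create` with each `type`), and to diff the set of types exercised here against the list bpftool itself advertises, so a newly added type cannot silently go untested.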
Check that most/all commands behave as expected. This will require quite some work, because:
We must create a bunch of BPF programs (and related objects) that we can use for covering all BPF commands.
We must set up the host to be able to observe whatever we need to validate that the commands are working.
Everything we set up for introspection (or for retrieving prog/map IDs, etc.) should ideally not rely on bpftool itself (or libbpf), so that if the setup under test is broken we can still list objects. The alternatives to libbpf are mostly Go and Rust libraries, though, and I don't see us introducing them in the CI.
We already have a lot of commands and options!
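One way to get bpftool- and libbpf-free introspection, as wished for above, is to drive the bpf(2) syscall directly from the test harness, e.g. via ctypes. The sketch below enumerates loaded program IDs with BPF_PROG_GET_NEXT_ID (value 11 in enum bpf_cmd). The syscall number hardcoded here is the x86_64 one (321) -- an assumption that the CI only runs this on x86_64; a multi-arch harness would need a per-arch table. Note that enumerating IDs requires sufficient privileges.

```python
import ctypes
import errno
import os

NR_BPF_X86_64 = 321        # __NR_bpf on x86_64 only (assumption for this sketch)
BPF_PROG_GET_NEXT_ID = 11  # enum bpf_cmd value from <linux/bpf.h>

libc = ctypes.CDLL(None, use_errno=True)

class BpfAttrGetNextId(ctypes.Structure):
    """Mirrors the anonymous struct used by BPF_*_GET_NEXT_ID in union bpf_attr."""
    _fields_ = [
        ("start_id", ctypes.c_uint32),
        ("next_id", ctypes.c_uint32),
        ("open_flags", ctypes.c_uint32),
    ]

def list_prog_ids():
    """Enumerate the IDs of all loaded BPF programs, without bpftool or libbpf."""
    ids = []
    attr = BpfAttrGetNextId(start_id=0)
    while True:
        ret = libc.syscall(NR_BPF_X86_64, BPF_PROG_GET_NEXT_ID,
                           ctypes.byref(attr), ctypes.sizeof(attr))
        if ret < 0:
            err = ctypes.get_errno()
            if err == errno.ENOENT:  # walked past the last program
                break
            raise OSError(err, os.strerror(err))
        ids.append(attr.next_id)
        attr.start_id = attr.next_id  # continue the walk from here
    return ids
```

The same pattern works for maps (BPF_MAP_GET_NEXT_ID) and BTF objects, which would let the harness cross-check `bpftool prog show` / `bpftool map show` output against an independent source of truth.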
A code coverage tool could be helpful at some point, if we manage to cover a significant portion of the command list.
Misc
Simplify the docs/sources to remove, as much as possible, the need to sync them each time new types are added. Some work has been done in that direction already, but maybe we can improve further.
Add tests for bash completion, likely based on what the project does (see tests/ and GitHub workflows).
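A bash completion test can work by sourcing the completion file in a bash subprocess, setting the COMP_WORDS/COMP_CWORD variables that programmable completion uses, invoking the completion function, and reading back COMPREPLY. The sketch below assumes the function is named `_bpftool`, which is what bpftool's bash-completion file appears to register; that name should be verified against the actual script.

```python
import subprocess

def get_completions(completion_script, cmdline):
    """Ask bash which completions `completion_script` offers for `cmdline`.

    Assumes the script defines a `_bpftool` completion function.
    """
    words = cmdline.split()
    # Simulate the cursor sitting on a new, empty word after `cmdline`.
    snippet = f"""
source '{completion_script}'
COMP_WORDS=({cmdline} '')
COMP_CWORD={len(words)}
_bpftool
printf '%s\\n' "${{COMPREPLY[@]}}"
"""
    out = subprocess.run(["bash", "-c", snippet],
                         capture_output=True, text=True)
    return out.stdout.split()
```

Each test case would then assert that, say, completing after `bpftool prog` offers the expected subcommands, catching completion files that fall out of sync with the sources.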