Continuous Integration: setup SonarQube through Github for NUT, new QRT, etc #338
TL;DR: don't let me stop you.

As has been pointed out several times on the Buildbot mailing lists, there are things that should be done in the code with scripts and makefile targets, and there are things that the CI tool should do. If there is overlap, the CI tool should invoke the script or call make. (It's sort of like separating content from presentation in HTML, or the MVC layers in applications.) For NUT, I think this means that about half of the suggestions you mentioned are applicable to any CI system (or more accurately, can be invoked by any CI system). More importantly, any user can invoke just the scripts they need while developing on their own system.

I use Buildbot for other projects because it is self-hosting and does not depend on an external service, so I don't see myself doing much with Travis - there is too much overlap for little gain. There is also a buildbot_travis bridge that I haven't looked into much, but it seems to build based off of the same

I would recommend that you test out any dedicated CI users on your own repository first. There are a lot of things that can be done with GitHub API calls using API keys rather than users. I personally think that "bot" messages lower the signal-to-noise ratio of a pull request or issue, but YMMV. (In a similar vein, please ask before submitting email addresses to sign up for new services.)

As for distcheck/distcheck-light, I will admit that it seems silly to have each builder create a tarball through autoconf, only to throw it away (the asciidoc builds are the slowest part, if I had to guess). It means that we have to keep a lot more dependencies on each builder than a tarball user would need. I have a Buildbot branch here that attempts to create one tarball on a Debian builder, then triggers the rest of the builders to download that tarball. That got shelved while reviewing the DMF branch, but maybe I can start looking at it again. It would allow us to test a wider variety of platforms with minimal platform setup. Travis configuration might not be able to handle this sort of test (it seems to have been designed for direct builds from an SCM repository), but I am also not particularly motivated to find out.

Also, a lot of the QA items targeted by your suggestions are closer to unit tests than system tests. This is fine for newer code, but if you look at the mailing lists, that isn't where the problems tend to be. Real-world users are using older versions of NUT from their distributions, and are currently affected by system-level integration issues such as systemd dependencies (especially when things don't all start up cleanly). I can never get the XaaS acronyms straight, but I have been looking at automating a NUT install with Vagrant (which uses VirtualBox, rather than the Docker images used in our Buildbot setup) in order to test against actual distributions. (I realize this doesn't help with the kernel/library-level USB issues, though.)
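A rough sketch of the one-tarball idea described above, assuming a standard autotools layout (the builder roles are hypothetical and the mechanism for handing the tarball to the other builders is left out): only the "dist" builder needs asciidoc and the rest of the documentation toolchain, while the downstream builders test the tarball the way an end user would.

```sh
# On the single "dist" builder (the only one that needs asciidoc etc.):
./autogen.sh
./configure
make dist                    # produces nut-<version>.tar.gz

# On every other builder, after fetching that tarball somehow
# (the transfer mechanism is out of scope for this sketch):
tar xzf nut-*.tar.gz
cd nut-*/
./configure
make
make check
```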
@aquette @clepple: FWIW, I've just posted PR #351 with an initial Travis setup that passes make, make check, make install and make distcheck; you can see logs from my fork's self-test at https://travis-ci.org/jimklimov/nut

This build includes generation of all doc formats, so even with ccache in place a run takes about 5 minutes. It may be reasonable to extend this at a later stage to do matrix builds (spawning several Travis environments with different settings) so that, e.g., a test for code compilability (--with-doc=no) would succeed faster. The CI script structure was inspired by experience with zproject, so further ideas can be sourced from there.

After integration into common NUT, a project admin should enable the Travis-GitHub integration in the GitHub organization settings (and then tick it for the repos to monitor). A separate account is not strictly required; some projects are travis'ed with the credentials of a human admin who set it up.
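For reference, roughly the same sequence can be reproduced locally in an autotools checkout (a sketch; the exact configure options used in the PR may differ):

```sh
./autogen.sh
./configure
make
make check
# Install into a scratch directory so the host system stays untouched:
make install DESTDIR="$PWD/_inst"
make distcheck
```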
Now playing with the matrix build in a separate branch (based on the one PRed), and it seems beneficial for quickly seeing whether a coding/Makefile error breaks the repo:
These times might be improved by integrating the changesets (PR #353) which added support for

Of course, such a limited set of builds on a single OS is no replacement for the buildbots, which catch a lot of portability issues (when they do work), but it is better than nothing for many "trivial breakage" cases.
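A minimal sketch of how such a matrix could be driven from a single script, in the zproject spirit mentioned earlier (the BUILD_TYPE values and the ccache path are illustrative assumptions, not the actual NUT CI script):

```sh
#!/bin/sh
# Hypothetical dispatcher: each Travis matrix entry exports BUILD_TYPE,
# so one script serves both the full and the fast job flavours.
set -e

# Prefer ccache compiler wrappers when present (Debian/Ubuntu path shown).
[ -d /usr/lib/ccache ] && PATH="/usr/lib/ccache:$PATH"
export PATH

./autogen.sh

case "${BUILD_TYPE:-default}" in
  default)        # full build, including all documentation formats
    ./configure
    make
    make check
    make distcheck
    ;;
  default-nodoc)  # quick compilability check, skips the slow doc rendering
    ./configure --with-doc=no
    make
    make check
    ;;
  *)
    echo "Unknown BUILD_TYPE='$BUILD_TYPE'" >&2
    exit 1
    ;;
esac
```

Each matrix row in .travis.yml would then only need to set its BUILD_TYPE environment variable, keeping the YAML itself trivially small.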
PR #363 adds spellchecking, cppunit (so make check runs at least something), and a travis target to run the
Looked at automating "indent" to ensure a consistent C coding style (as requested by the dev docs), but at least in a simple form (invoking the commands suggested in the doc) the results are questionable: while there are certain improvements, not all changes are acceptable or conform to the documented guidelines. While So I left the
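One way to keep such reformatting reviewable is to run indent into a copy and inspect the diff, rather than rewriting files in place. The flags and file name below are placeholders; the option set actually recommended in the NUT developer guide should be substituted:

```sh
# Reformat a copy of one source file (flags and file name are illustrative only)
indent -kr -i8 -nut -l100 drivers/example.c -o drivers/example.c.indented

# Review the proposed changes before deciding whether to keep any of them
diff -u drivers/example.c drivers/example.c.indented
```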
Besides our historic Buildbot, GitHub provides 3rd-party integration with some apps, including Travis.
Some tests that could be part of the scope:
A potential point could be to create a dedicated CI user for automated reports and bug filing. See also http://docs.sonarqube.org/display/PLUG/GitHub+Plugin?preview=/5311422/5636115/PullRequestAnalysis.png
@clepple especially, I'd like your feedback / approval prior to moving on this topic. I don't want you to perceive this as killing Buildbot and all the energy you put into it... But the present approach would share the effort across the team, give us more public visibility, and make it easier to increase our CI scope.