CI: Use VMs provided by DigitalOcean FOSS support effort, and document the lessons learned #2192
Posted a nut-website update for this. Website re-rendition pending...
Thanks to suggestions in offline parts of this discussion, several text resources of the NUT project should now (or soon) suggest that users/contributors "star" it on GitHub as a metric useful for sponsor consideration, including:
Updated DO URLs with the referral campaign ID, which gives bonus credits both to new DO users and to the NUT CI farm account, if all goes well.
For the purposes of eventually making an article on this setup, we can as well start here...

According to the fine print in the scary official docs, DigitalOcean VMs can only use "custom images" in one of a number of virtual HDD formats, which carry an ext3/ext4 filesystem for DO add-ons to barge into for management. In practice, uploading an OpenIndiana Hipster "cloud" image (also possible by providing a URL to an image file on the Internet; see above for some collections) sort of worked: its status remains "pending", but a VM could be made with it. However, following up with an OmniOS image failed (exceeded some limit); I suppose that after ending the setups with one custom image, it can be nuked and another used in its place. UPDATE: You have to wait a surprisingly long time, some 15-20 minutes, and additional images suddenly become "Uploaded".

The OI image could be loaded... but that's it: the logo is visible on the DO Rescue Console, as well as some early boot-loader lines ending with a list of supported consoles. I assume it went into the …; it probably booted, since I could see an …

The VM can however be rebooted with a (DO-provided) Rescue ISO, based on Ubuntu 18.04 LTS with ZFS support, which is sufficient to send over the existing VM contents from the original OI VM on Fosshost. The rescue live image allows installing APT packages. SSH keys can be imported with a helper:
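A sketch, assuming the helper shipped on Ubuntu-based live images, `ssh-import-id`, and that your keys are published on a GitHub account (the account name below is a placeholder):

```shell
# Import public SSH keys published for a GitHub account into
# ~/.ssh/authorized_keys of the rescue session (placeholder login)
ssh-import-id gh:your-github-login
```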
Make the rescue userland convenient:
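A minimal sketch of what "convenient" might mean here; the package names are the usual Ubuntu ones, adjust to taste:

```shell
# Refresh package lists and pull some comfort tools into the live session
apt update
apt install -y mc tmux vim curl rsync

# Make sure the ZFS tooling is usable (the 18.04 rescue image ships it,
# but the kernel module may need loading first)
modprobe zfs
zpool status
```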
I can import the cloud-OI ZFS pool into the Linux Rescue CD session:
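The import wants an altroot, so the pool's mountpoints do not shadow the rescue session's own filesystems; a sketch, assuming the pool in the cloud image is named `rpool`:

```shell
# See which pools the rescue kernel can find on the attached virtual disks
zpool import

# Import without mounting over the rescue root: -R keeps all mountpoints
# under /a, and -f overrides the "pool was last used by another system"
# safety check
zpool import -f -R /a rpool
zfs list -r rpool
```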
A kernel core-dump area is missing compared to the original VM... adding one per best practice:
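A sketch of the usual illumos arrangement (the volume name and size here are assumptions, typically sized in the ballpark of the VM's RAM):

```shell
# Create a dump zvol; dump devices are conventionally uncompressed
# and without checksums, since the dump subsystem handles raw blocks
zfs create -V 2G -o compression=off -o checksum=off rpool/dump

# After the first boot into the migrated OI itself, point the dump
# subsystem at it (this part runs on illumos, not in the rescue session):
#   dumpadm -d /dev/zvol/dsk/rpool/dump
```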
To receive ZFS streams from the running OI into the freshly prepared cloud-OI image, the pool wanted its ZFS features to be enabled (all disabled by default), since some are used in the replication stream:
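One hedged way to do that wholesale from the rescue session (pool name assumed to be `rpool`); `zpool upgrade` enables every feature the local ZFS build supports, or features can be enabled one by one to stay closer to the sender's set:

```shell
# Enable all ZFS features known to the rescue kernel's ZFS build...
zpool upgrade rpool

# ...or enable specific features individually, e.g.:
zpool set feature@lz4_compress=enabled rpool
zpool set feature@embedded_data=enabled rpool
```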
On the original VM, snapshot all datasets recursively, so whole data trees can be easily sent over (note that we then remove some snapshots, such as those for swap/dump areas, which would otherwise waste a lot of space over time by holding back blocks of obsolete swap data):
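A sketch with an assumed pool name and snapshot label:

```shell
# On the original (sending) VM: one atomic recursive snapshot of everything
zfs snapshot -r rpool@migration-1

# Drop the snapshots of volatile areas whose held-back blocks would only
# waste space (dataset names follow the usual illumos layout; adjust to yours)
zfs destroy rpool/swap@migration-1
zfs destroy rpool/dump@migration-1
```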
On the receiving VM, move the existing datasets from the stock image out of the way:
Send over the data (from the prepared snapshots):
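A sketch of the replication itself, pulled from the rescue session (the host name and the pool/snapshot names are placeholders):

```shell
# Pull a full recursive replication stream from the old Fosshost VM
# into the imported pool.
# -R sends the whole dataset tree with properties and snapshots;
# -F on the receiving side rolls back/overwrites whatever is there.
ssh root@old-vm.example.net "zfs send -R rpool/export@migration-1" \
  | zfs recv -vF rpool/export
```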
With sufficiently large machines and slow source hosting, expect the transfer to take some hours (I saw 4-8 Mb/s in the streaming phase for large increments, and quite a bit of quiet time for the enumeration of almost-empty regular snapshots: working with ZFS metadata has a cost). Note that one of the benefits of ZFS (and of the non-automatic snapshots used here) is that it is easy to catch up later, sending the data which the original server generates and writes during the replication. You can keep it working until the last minutes of the migration.
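The catch-up pass is then a cheap incremental stream between two snapshots (same placeholder names as above):

```shell
# On the source: a fresh recursive snapshot capturing the latest changes
ssh root@old-vm.example.net "zfs snapshot -r rpool/export@migration-2"

# Incremental replication: only the blocks changed between the two
# snapshots travel over the wire (-I includes intermediate snapshots)
ssh root@old-vm.example.net \
    "zfs send -R -I @migration-1 rpool/export@migration-2" \
  | zfs recv -vF rpool/export
```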
OI TODO (after the transfers complete):
WARNING: Per https://www.illumos.org/issues/14526 and personal and community practice, it seems that a "slow reboot" of illumos VMs on QEMU-6.x (and so on DigitalOcean) misbehaves and hangs: the virtual hardware is not power-cycled. A power-off/on cycle through the UI (and probably the REST API) does work. Other kernels do not seem to be impacted. Wondering if there are QEMU HW watchdogs on DO... UPDATE: It took about 2 hours for …
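Until that is resolved, the power cycle can also be scripted through the DO command-line client instead of clicking through the web UI (`doctl` talks to the same REST API; the droplet ID below is a placeholder):

```shell
# Hard power-off the hung droplet, then power it back on
# (123456789 is an example droplet ID; list yours with
#  `doctl compute droplet list`)
doctl compute droplet-action power-off 123456789 --wait
doctl compute droplet-action power-on  123456789 --wait
```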
The metadata-agent seems buildable and installable; it logged the SSH keys on the console after the service manifest import.
As of this writing, the NUT CI Jenkins controller runs on DigitalOcean, and feels a lot snappier in browsing and SSH management. The older Fosshost VMs are alive and used as its build agents (just the container with the old production Jenkins controller is not auto-booting anymore); with the holidays upon us, it may take time to have them replicated onto DO.

The Jenkins SSH Build Agent setups involved here were copied on the controller (as XML files) and updated to tap into the different "host" and "port" (so that the original definitions can in time be used for replicas on DO), and due to trust settings, the …

Similarly, existing Jenkins swarm agents from community PCs had to be taught the new DNS name (some had it in /etc/hosts), but otherwise connected OK.
Another limitation seen with "custom images" is that IPv6 is not offered to those VMs. Generally, all VMs get random (hopefully persistent) public IPv4 addresses from various subnets; it is also possible to request an interconnect VLAN for one's VMs co-located in the same data center, and have it attached (with virtual IP addresses) to another …

Another note regards pricing: resources that "exist" are billed, whether they run or not (e.g. turned-off VMs still reserve CPU/RAM to be able to run on demand, dormant storage for custom images is consumed even if they are not active filesystems, etc.). The hourly prices apply to resources spawned and destroyed within a month; after the monthly-rate total price for an item is reached, that is applied instead.
Spinning up the Debian-based Linux builder (with many containers for various Linux systems) with ZFS, to be consistent across the board, was an adventure.
One more potential caveat: while DigitalOcean provides VPC network segments for free intercommunication of a group of droplets, it assigns the IP addresses to those itself and does not let the guest use any others. This causes some hassle when importing a set of VMs which originally used different IP addresses on their intercomm VLAN.
Added replicas of more existing VMs: FreeBSD 12 (needed to use a seed image; OI did not cut it, since the ZFS options in its pool were too new, so the older build of the BSD loader was not too eager to find the pool) and OmniOS (relatively straightforward with the OI image). Also keep in mind that the (old version of?) FreeBSD loader rejected a …
Added a replica of the OpenBSD 6.5 VM, as an example of a relatively dated system in the CI, which went decently well as a …
...followed by a reboot and subsequent adaptation of … I did not check whether the DO recovery OS can mount BSD UFS partitions; it sufficed to log into the pre-configured system. One caveat was that it got installed with X11, and the DO console did not pass through the mouse nor advanced keyboard shortcuts. So FWIW, …
Follow-up to #869 and #1729: since Dec 2022, NUT was accepted into DO sponsorship, although at the lowest tier (aka "testing phase"), which did not allow for sufficiently "strong" machines to migrate all workloads from the Fosshost VMs until we took steps to promote the relationship in NUT media materials. This fell through the cracks a bit due to other project endeavours, but as I was recently reminded, there is some follow-up to do on our side.
https://opensource.nyc3.cdn.digitaloceanspaces.com/attribution/index.html
https://www.digitalocean.com/open-source/credits-for-projects
UPDATE: This work-log is also referenced from https://github.com/networkupstools/nut/wiki/NUT-CI-farm Wiki page.