(Demo video: virtme-ng-demo.mp4)
virtme-ng is a tool that allows you to easily and quickly recompile and test a Linux kernel, starting from the source code.
It can recompile the kernel in a few minutes (rather than hours), then the kernel is automatically started in a virtualized environment that is an exact copy-on-write copy of your live system, which means that any changes made to the virtualized environment do not affect the host system.
To do this, a minimal config is produced (with the bare minimum support to test the kernel inside QEMU), then the selected kernel is automatically built and started inside QEMU, using the filesystem of the host as a copy-on-write snapshot.
This means that you can safely destroy the entire filesystem, crash the kernel, etc. without affecting the host.
Kernels produced with virtme-ng lack many features, in order to reduce the build time to a minimum while still providing a usable kernel capable of running your tests and experiments.
virtme-ng is based on virtme, written by Andy Lutomirski luto@kernel.org (web | git).
$ uname -r
5.19.0-23-generic
$ git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux
$ vng --build --commit v6.2-rc4
...
$ vng
       _      _
__   _(_)_ __| |_ _ __ ___   ___       _ __   __ _
\ \ / / | '__| __| '_ ` _ \ / _ \_____| '_ \ / _` |
 \ V /| | |  | |_| | | | | |  __/_____| | | | (_| |
  \_/ |_|_|   \__|_| |_| |_|\___|     |_| |_|\__, |
                                             |___/
kernel version: 6.2.0-rc4-virtme x86_64
$ uname -r
6.2.0-rc4-virtme
^
|___ Now you have a shell inside a virtualized copy of your entire system,
that is running the new kernel! \o/
Simply type "exit" to return to the real system.
- virtme-ng is packaged in most major distributions.
Note: it might not be the latest stable version containing new features and bug fixes. Do not hesitate to help with the packaging!
- The latest stable version is also published on PyPI:
$ pip install virtme-ng
You will need to install the dependencies manually, see the Requirements section.
- Install from source
To install virtme-ng from source, clone this git repository and build a standalone virtme-ng by running the following commands:
$ git clone https://github.com/arighi/virtme-ng.git
$ BUILD_VIRTME_NG_INIT=1 pip3 install .
There are some extra dependencies on top of the ones mentioned in the Requirements section. If you are on Debian/Ubuntu, you may need to install the following packages to build virtme-ng from source properly:
$ sudo apt install python3-pip flake8 pylint cargo rustc qemu-system-x86
If you'd prefer to use uv:
$ BUILD_VIRTME_NG_INIT=1 uv tool install .
- Run from source
You can also run virtme-ng directly from source. Make sure you have all the requirements installed (optionally you can build virtme-ng-init for a faster boot, by running make), then from the source directory simply run any virtme-ng command, such as:
$ ./vng --help
- You need Python 3.8 or higher.
- QEMU 1.6 or higher is recommended (QEMU 1.4 and 1.5 are partially supported using a rather ugly kludge).
- You will have a much better experience if KVM is enabled. That means that you should be on bare metal with hardware virtualization (VT-x or SVM) enabled, or in a VM that supports nested virtualization. On some Linux distributions, you may need to be a member of the "kvm" group. Using VirtualBox or most VPS providers will fall back to emulation. If you are using GitHub Actions, KVM is supported on "larger Linux runners" -- which is now the default runner -- but it has to be manually enabled, see how it is used in our tests or here with Docker.
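A quick way to check whether KVM acceleration is usable (a minimal sketch, not part of virtme-ng; the device path and group names vary by distribution):
$ ls -l /dev/kvm                      # the KVM device must exist and be accessible
$ groups | grep -w -e kvm -e libvirt  # on some distributions you must be in one of these groups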
- Depending on the options you use, you may need a statically linked busybox binary somewhere in your path.
- Optionally, you may need virtiofsd 1.7.0 (or higher) for better filesystem performance inside the virtme-ng guests.
- Optionally, you may need socat for the --console and --console-client options, and the host's kernel should support VSOCK (CONFIG_VHOST_VSOCK).
- Optionally, you may need sshd installed for the --ssh and --ssh-client options.
- Optionally, if the shell completion is not available (e.g. when installed from pip or from source), you can install shtab and run:
# Bash
$ mkdir -p ~/.local/share/bash-completion/completions/
$ shtab --shell=bash -u virtme_ng.run.make_parser > ~/.local/share/bash-completion/completions/vng
# ZSH
$ shtab --shell=zsh -u virtme_ng.run.make_parser | sudo tee /usr/local/share/zsh/site-functions/_vng >/dev/null
- You may customize the default configuration by providing one of the following, in order of preference: $HOME/.config/virtme-ng/virtme-ng.conf, $HOME/.virtme-ng.conf or /etc/virtme-ng.conf. As a fallback for any missing values, the default ones will be used.
- The format of the file is JSON. Default values:
{
    "default_opts": {},
    "systemd": {
        "masks": ["getty@"]
    }
}
- Build a kernel from a clean local kernel source directory (if a .config is not available virtme-ng will automatically create a minimal .config with all the required features to boot the instance):
$ vng -b
- Build tag v6.1-rc3 from a local kernel git repository:
$ vng -b -c v6.1-rc3
- Generate a minimal kernel .config in the current kernel build directory:
$ vng --kconfig
- Run a kernel previously compiled from a local git repository in the current working directory:
$ vng
- Run an interactive virtme-ng session using the same kernel as the host:
$ vng -r
- Test the installed kernel 6.2.0-21-generic (NOTE: /boot/vmlinuz-6.2.0-21-generic needs to be accessible):
$ vng -r 6.2.0-21-generic
- Run a pre-compiled vanilla v6.6 kernel fetched from the Ubuntu mainline builds repository (useful to test a specific kernel version directly and save a lot of build time):
$ vng -r v6.6
- Download and test kernel 6.2.0-1003-lowlatency from deb packages:
$ mkdir test
$ cd test
$ apt download linux-image-6.2.0-1003-lowlatency linux-modules-6.2.0-1003-lowlatency
$ for d in *.deb; do dpkg -x $d .; done
$ vng -r ./boot/vmlinuz-6.2.0-1003-lowlatency
- Build the tip of the latest kernel on a remote build host called "builder", running make inside a specific build chroot (managed remotely by schroot):
$ vng --build --build-host builder \
      --build-host-exec-prefix "schroot -c chroot:kinetic-amd64 -- "
- Run the previously compiled kernel from the current working directory and enable networking:
$ vng --net user
- Run the previously compiled kernel adding an additional virtio-scsi device:
$ qemu-img create -f qcow2 /tmp/disk.img 8G
$ vng --disk /tmp/disk.img
- Recompile the kernel passing some environment variables to enable Rust support (using specific versions of the Rust toolchain binaries):
$ vng --build RUSTC=rustc-1.62 BINDGEN=bindgen-0.56 RUSTFMT=rustfmt-1.62
- Build the arm64 kernel (using a separate chroot in /opt/chroot/arm64 as the main filesystem):
$ vng --build --arch arm64 --root /opt/chroot/arm64/
- Build the kernel using a separate build directory, and run it, in verbose mode:
$ export KBUILD_OUTPUT=.virtme/build
$ vng --build --verbose
$ vng --verbose
- Same example, but using O=:
$ vng --build --verbose -- O=.virtme/build
$ vng --verbose -- O=.virtme/build
- Accelerate the kernel rebuilds using CCache (if installed):
$ PATH="/usr/lib/ccache:${PATH}" \
  KBUILD_BUILD_TIMESTAMP=0 \
  vng --build
# or export the two variables before, see 'man ccache' for more details
- Execute uname -r inside a kernel recompiled in the current directory and send the output to cowsay on the host:
$ vng -- uname -r | cowsay
 __________________
< 6.1.0-rc6-virtme >
 ------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
- Run a bunch of parallel virtme-ng instances in a pipeline, with different kernels installed in the system, passing each other their stdout/stdin and returning all the generated output back to the host (also measuring the total elapsed time):
$ time true | \
> vng -r 5.19.0-38-generic -e "cat && uname -r" | \
> vng -r 6.2.0-19-generic -e "cat && uname -r" | \
> vng -r 6.2.0-20-generic -e "cat && uname -r" | \
> vng -r 6.3.0-2-generic -e "cat && uname -r" | \
> cowsay -n
 ___________________
/ 5.19.0-38-generic \
| 6.2.0-19-generic  |
| 6.2.0-20-generic  |
\ 6.3.0-2-generic   /
 -------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

real    0m2.737s
user    0m8.425s
sys     0m8.806s
- Run the vanilla v6.7-rc5 kernel with an Ubuntu 22.04 rootfs:
$ vng -r v6.7-rc5 --user root --root ./rootfs/22.04 --root-release jammy -- cat /etc/lsb-release /proc/version
...
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"
Linux version 6.7.0-060700rc5-generic (kernel@kathleen) (x86_64-linux-gnu-gcc-13 (Ubuntu 13.2.0-7ubuntu1) 13.2.0, GNU ld (GNU Binutils for Ubuntu) 2.41) #202312102332 SMP PREEMPT_DYNAMIC Sun Dec 10 23:41:31 UTC 2023
- Run with systemd as init:
$ sudo vng -r --systemd --exec "systemctl status | head"
● virtme-ng
    State: starting
    Units: 392 loaded (incl. loaded aliases)
     Jobs: 4 queued
   Failed: 3 units
    Since: Mon 2025-05-26 11:00:47 -03; 4s ago
  systemd: 257.5+suse.8.gc10a66fb4d
  Tainted: unmerged-bin
   CGroup: /
           ├─init.scope
- Run with systemd as init in an external rootfs:
$ vng -r --systemd --user root --root ./rootfs/sid --exec "systemctl status | head"
● virtme-ng
    State: degraded
    Units: 273 loaded (incl. loaded aliases)
     Jobs: 0 queued
   Failed: 4 units
    Since: Mon 2025-05-26 14:01:06 UTC; 2s ago
  systemd: 257.5-2
  Tainted: unmerged-bin
   CGroup: /
           ├─init.scope
- Run the current kernel creating a 1GB NUMA node with CPUs 0,1,3 assigned and a 3GB NUMA node with CPUs 2,4,5,6,7 assigned:
$ vng -r -m 4G --numa 1G,cpus=0-1,cpus=3 --numa 3G,cpus=2,cpus=4-7 -- numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 3
node 0 size: 1005 MB
node 0 free: 914 MB
node 1 cpus: 2 4 5 6 7
node 1 size: 2916 MB
node 1 free: 2797 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
- Run the current kernel creating 4 NUMA nodes of 1GB each and assign different distance costs between the NUMA nodes to simulate non-uniform memory access:
$ vng -r --cpu 8 -m 4G \
> --numa 1G,cpus=0-1 --numa 1G,cpus=2-3 \
> --numa 1G,cpus=4-5 --numa 1G,cpus=6-7 \
> --numa-distance 0,1=51 --numa-distance 0,2=31 --numa-distance 0,3=41 \
> --numa-distance 1,2=21 --numa-distance 1,3=61 \
> --numa-distance 2,3=11 -- numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 1
node 0 size: 1006 MB
node 0 free: 974 MB
node 1 cpus: 2 3
node 1 size: 953 MB
node 1 free: 919 MB
node 2 cpus: 4 5
node 2 size: 943 MB
node 2 free: 894 MB
node 3 cpus: 6 7
node 3 size: 1006 MB
node 3 free: 965 MB
node distances:
node   0   1   2   3
  0:  10  51  31  41
  1:  51  10  21  61
  2:  31  21  10  11
  3:  41  61  11  10
- Run glxgears inside a kernel recompiled in the current directory:
$ vng -g -- glxgears
(virtme-ng is started in graphical mode)
- Execute an awesome window manager session with kernel 6.2.0-1003-lowlatency (installed in the system):
$ vng -r 6.2.0-1003-lowlatency -g -- awesome
(virtme-ng is started in graphical mode)
- Run the steam snap (tested in Ubuntu) inside a virtme-ng instance using the 6.2.0-1003-lowlatency kernel:
$ vng -r 6.2.0-1003-lowlatency --snaps --net user -g -- /snap/bin/steam
(virtme-ng is started in graphical mode)
- Generate a memory dump of a running instance and read 'jiffies' from the memory dump using the drgn debugger:
# Start the vng instance in debug mode
$ vng --debug

# In a separate shell session trigger the memory dump to /tmp/vmcore.img
$ vng --dump /tmp/vmcore.img

# Use drgn to read 'jiffies' from the memory dump:
$ echo "print(prog['jiffies'])" | drgn -q -s vmlinux -c /tmp/vmcore.img
drgn 0.0.23 (using Python 3.11.6, elfutils 0.189, with libkdumpfile)
For help, type help(drgn).
>>> import drgn
>>> from drgn import NULL, Object, cast, container_of, execscript, offsetof, reinterpret, sizeof
>>> from drgn.helpers.common import *
>>> from drgn.helpers.linux import *
>>> (volatile unsigned long)4294675464
- Attach a GDB session to a running instance started with --debug:
# Start the vng instance in debug mode
$ vng --debug

# In a separate terminal run the following command to attach the gdb session:
$ vng --gdb
kernel version = 6.9.0-virtme
Reading symbols from vmlinux...
Remote debugging using localhost:1234
native_irq_disable () at ./arch/x86/include/asm/irqflags.h:37
37              asm volatile("cli": : :"memory");
(gdb)

# NOTE: a vmlinux must be present in the current working directory in order
# to resolve symbols, otherwise vng will automatically search for a
# vmlinux available in the system.
- Connect to a simple remote shell (socat is required, VSOCK will be used):
# Start the vng instance with server support:
$ vng --console

# In a separate terminal run the following command to connect to a remote shell:
$ vng --console-client
- Enable SSH in the vng guest:
# Start the vng instance with ssh server support:
$ vng --ssh

# Connect to the vng guest from the host via ssh:
$ vng --ssh-client
- Generate some results inside the vng guest and copy them back to the host using SCP:
# Start the vng instance with SSH server support:
arighi@host~> vng --ssh
...
arighi@virtme-ng~> ./run.sh > result.txt

# In another terminal, copy result.txt from the guest to the host using scp:
arighi@host~> scp -F ~/.cache/virtme-ng/.ssh/virtme-ng-ssh.conf virtme-ng%2222:~/result.txt .

# The SSH command can be printed using this command, and easily adapted later:
arighi@host~> vng --ssh-client --dry-run
ssh -F /home/arighi/.cache/virtme-ng/.ssh/virtme-ng-ssh.conf virtme-ng%2222

# With systemd >= 256, it is possible to use the 'vsock/<CID>' hostname directly:
arighi@host~> ssh vsock/2222
arighi@virtme-ng~>
- Run virtme-ng inside a docker container:
$ docker run -it --privileged ubuntu:23.10 /bin/bash
# apt update
# echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
# apt install --yes git qemu-kvm udev iproute2 busybox-static \
  coreutils python3-requests python3-argcomplete libvirt-clients kbd kmod file rsync zstd virtiofsd
# git clone --recursive https://github.com/arighi/virtme-ng.git
# ./virtme-ng/vng -r v6.6 -- uname -r
6.6.0-060600-generic
See also .github/workflows/run.yml as a practical example of how to use virtme-ng inside docker.
- Run virtme-ng with GPU passthrough:
# Confirm host kernel has VFIO and IOMMU support

# Check if NVIDIA module is installed on the host
$ modinfo nvidia

# If the nvidia module is installed, blacklist the nvidia modules
$ sudo bash -c 'echo -e "blacklist nvidia\nblacklist nvidia-drm\nblacklist nvidia-modeset\nblacklist nvidia-peermem\nblacklist nvidia-uvm" > /etc/modprobe.d/blacklist-nvidia.conf'
# Host will need to be rebooted for the blacklist to take effect.

# Get GPU device ID
$ lspci -nn | grep NVIDIA
0000:01:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD104GLM [RTX 3500 Ada Generation Laptop GPU] [10de:27bb] (rev a1)
0000:01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:22bc] (rev a1)

# Configure VFIO for device passthrough
$ sudo bash -c 'echo "options vfio-pci ids=10de:27bb,10de:22bc" > /etc/modprobe.d/vfio.conf'

# Load VFIO module
$ sudo modprobe vfio-pci

# Pass PCI address to virtme-ng
$ sudo vng --nvgpu "01:00.0" -r linux
virtme-ng allows you to automatically configure, build and run kernels using the main command-line interface called vng.
A minimal custom .config is automatically generated, if not already present, when --build is specified.
It is possible to specify a set of custom configs (.config chunks) in ~/.config/virtme-ng/kernel.config, or using one or more --config <chunk-file> or --configitem CONFIG_FOO=bar options. These user-specific settings successively override the default settings. The final overrides are the mandatory config items that are required to boot and test the kernel inside QEMU, using virtme-run.
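As a sketch, a config chunk is simply a fragment in regular kernel .config syntax; for example, a hypothetical ~/.config/virtme-ng/kernel.config enabling a few extra debugging options could contain:
# Hypothetical user config chunk (plain .config syntax)
CONFIG_KASAN=y
CONFIG_PROVE_LOCKING=y
CONFIG_DEBUG_ATOMIC_SLEEP=y
The same items could also be passed on the command line with --configitem CONFIG_KASAN=y and so on.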
Then the kernel is compiled, either locally or on an external build host (if the --build-host option is used); if an external build host is used, once the build is done only the files required to test the kernel are copied back from the remote host.
When a remote build host is used (--build-host), the target branch is force-pushed to the remote host inside the ~/.virtme directory.
Then the kernel is executed using the virtme module. This allows testing the kernel using a safe copy-on-write snapshot of the entire host filesystem.
All the kernels compiled with virtme-ng have a -virtme suffix in their kernel version; this makes it easy to determine whether you're inside a virtme-ng kernel or using the real host kernel (simply by checking uname -r).
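For example, a test script can use this suffix to detect whether it is running inside a virtme-ng guest (a minimal shell sketch based on the behavior described above):
case "$(uname -r)" in
    *-virtme) echo "running inside a virtme-ng guest" ;;
    *)        echo "running on the host kernel" ;;
esac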
It is possible to recompile and test out-of-tree kernel modules inside the virtme-ng kernel, simply by building them against the local directory of the kernel git repository that was used to build and run the kernel.
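A minimal sketch of this workflow, assuming the kernel was built with vng -b in ~/linux and the out-of-tree module sources live in a hypothetical ~/my-module directory with its own Makefile:
# Build the module against the kernel tree used by virtme-ng (paths are examples)
$ cd ~/my-module
$ make -C ~/linux M=$PWD modules

# Boot the freshly built kernel and load the module inside the guest
$ cd ~/linux
$ vng --user root -- insmod ~/my-module/my_module.ko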
Typically, if you always use virtme-ng with an external build server (e.g., vng --build --build-host REMOTE_SERVER --build-host-exec-prefix CMD), you don't want to specify these options every time; instead, you can simply define them in your configuration file (refer to the Configuration section) under default_opts and then just run vng --build.
Example (always use an external build server called kathleen and run make inside a build chroot called chroot:lunar-amd64): add the default_opts section to your configuration file as follows:
{
    "default_opts": {
        "build_host": "kathleen",
        "build_host_exec_prefix": "schroot -c chroot:lunar-amd64 --"
    }
}
Now you can simply run vng --build to build your kernel from the current working directory using the external build host, prepending the exec prefix command when running make.
- If you get permission denied when starting qemu, make sure that your username is assigned to the group kvm or libvirt:
$ groups | grep "kvm\|libvirt"
- When using --network bridge to create a bridged network in the guest you may get the following error:
... failed to create tun device: Operation not permitted
This is because qemu-bridge-helper requires the CAP_NET_ADMIN capability.
To fix this you need to add "allow all" to /etc/qemu/bridge.conf and set the CAP_NET_ADMIN capability on qemu-bridge-helper, as follows:
$ sudo filecap /usr/lib/qemu/qemu-bridge-helper net_admin
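If the filecap utility is not available on your system, the same capability can usually be granted with setcap instead (a sketch; the helper's path may differ across distributions):
$ sudo setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper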
- If the guest fails to start because the host doesn't have enough memory available, you can specify a different amount of memory using --memory MB (this option is passed directly to qemu via -m, default is 1G).
If you're testing a kernel for an architecture different from the host, keep in mind that you need to use also
--root DIR
to use a specific chroot with the binaries compatible with the architecture that you're testing.If the chroot doesn't exist in your system virtme-ng will automatically create it using the latest daily build Ubuntu cloud image:
$ vng --build --arch riscv64 --root ./tmproot
- If the build on a remote build host is failing unexpectedly, you may want to try cleaning up the remote git repository by running:
$ vng --clean --build-host HOSTNAME
- Snap support is still experimental and something may not work as expected (keep in mind that, by default, virtme-ng will try to run snapd in a bare minimum system environment without systemd). If some snaps are not running, try disabling apparmor by adding --append="apparmor=0" to the virtme-ng command line.
Systemd support (
--systemd
) is still experimental. If something does not work for you, try masking the unit that is freezing, e.g.--append "systemd.mask=$PROBLEMATIC_UNIT"
(refer to the Configuration section for a more permanent setup). Be aware that you might also need--user root
, or if you're using your own/
as ROOTFS, you may need to run vng itself as root. -
Running virtme-ng instances inside docker: in case of failures/issues, especially with stdin/stdout/stderr redirections, make sure that you have
udev
installed in your docker image and run the following command before usingvng
:$ udevadm trigger --subsystem-match --action=change
- To mount the legacy CGroup filesystem (v1) layout, add SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 to the kernel boot options:
$ vng -r --append "SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1" -- 'df -T /sys/fs/cgroup/*'
Filesystem     Type   1K-blocks  Used Available Use% Mounted on
blkio          cgroup         0     0         0    - /sys/fs/cgroup/blkio
cpu            cgroup         0     0         0    - /sys/fs/cgroup/cpu
cpuacct        cgroup         0     0         0    - /sys/fs/cgroup/cpuacct
devices        cgroup         0     0         0    - /sys/fs/cgroup/devices
memory         cgroup         0     0         0    - /sys/fs/cgroup/memory
pids           cgroup         0     0         0    - /sys/fs/cgroup/pids
Please see DCO-1.1.txt.
virtme-ng uses pre-commit to perform some checks, e.g. code formatting and linting. Therefore it is recommended to set up pre-commit for development:
$ cd "$VIRTME_NG"
$ # Activate pre-commit hooks for virtme-ng
$ pre-commit install
pre-commit installed at .git/hooks/pre-commit
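You can also run all the configured hooks on demand over the whole tree before sending changes (standard pre-commit usage, not specific to virtme-ng):
$ pre-commit run --all-files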
- LWN: Faster kernel testing with virtme-ng (November, 2023)
- LPC 2023: Speeding up Kernel Testing and Debugging with virtme-ng
- Kernel Recipes 2024: virtme-ng
- Linux Foundation Mentorship Session: Speeding Up Kernel Development With virtme-ng
virtme-ng is written by Andrea Righi arighi@nvidia.com
virtme-ng is based on virtme, written by Andy Lutomirski luto@kernel.org (web | git).