minikube with kvm2 driver does not provide option to select storage pool #18347
Comments
What is the required XML? |
Not sure what you mean. 'virsh dumpxml' will produce different XML depending on the local configuration.
On my workstation laptop I have zfs on a separate NVMe disk, where I store KVM, docker and lxd 'machines' (or containers). For KVM I don't have multiple pools created, just the default one, which looks like this:
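Something like this, for a zfs-backed pool (a sketch of typical 'virsh pool-dumpxml' output; the nvme-zfs/kvm dataset name is an assumption, matching the volume path mentioned later in this thread):

```shell
$ virsh pool-dumpxml default
<pool type='zfs'>
  <name>default</name>
  <source>
    <name>nvme-zfs/kvm</name>
  </source>
</pool>
```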
However, even on my workstation laptop, minikube always puts the images for the VMs inside the ~/.minikube/machines directory. |
For the kvm network, there is a place in the template like:

```xml
<interface type='network'>
  <source network='{{.Network}}'/>
  <model type='virtio'/>
</interface>
```

This would need some similar new template parameters?

```xml
<disk type='file' device='cdrom'>
  <source file='{{.ISO}}'/>
  <target dev='hdc' bus='scsi'/>
  <readonly/>
</disk>
<disk type='file' device='disk'> <!-- needs "type" variable -->
  <driver name='qemu' type='raw' cache='default' io='threads' />
  <source file='{{.DiskPath}}'/> <!-- needs "pool" and "volume" -->
  <target dev='hda' bus='virtio'/>
</disk>
```

https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms

Not sure how/if the iso image gets uploaded to the storage pool, though.

EDIT: add "type", exclude cdrom
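One possible shape for the pool-backed variant, using libvirt's type='volume' disks (a sketch; {{.StoragePool}} and {{.DiskVolume}} are hypothetical parameter names, not existing minikube template variables):

```xml
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw' cache='default' io='threads'/>
  <source pool='{{.StoragePool}}' volume='{{.DiskVolume}}'/>
  <target dev='hda' bus='virtio'/>
</disk>
```
|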
Are you referring to this xml template:
In one of my VMs (which I created with virt-install, not specifying the storage pool, as the default one is the zfs one), here is how it looks:
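Presumably something along these lines (a reconstruction, not the exact XML; with a zfs pool the volume is a zvol, so the disk is block-backed, and the device path is an assumption built from the volume name below):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/zvol/nvme-zfs/kvm/jam-intune'/>
  <target dev='vda' bus='virtio'/>
</disk>
```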
The name of the VM is 'jam-intune', and virt-install automatically created the nvme-zfs/kvm/jam-intune volume. From the template I linked above it seems that minikube will always create the 'file' storage type with raw files. |
Supporting source=dev directly is yet another requirement (in addition to pool), but it is the same template involved.
So it would need one new attribute for "type", and three new ones for "pool", "volume" and "dev", plus an "if" in the template (see the sketch below).
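A sketch of how that conditional could look in the Go template (all four parameter names are hypothetical):

```xml
<disk type='{{.DiskType}}' device='disk'>
  <driver name='qemu' type='raw' cache='default' io='threads'/>
  {{if .DiskPool}}<source pool='{{.DiskPool}}' volume='{{.DiskVolume}}'/>
  {{else if .DiskDev}}<source dev='{{.DiskDev}}'/>
  {{else}}<source file='{{.DiskPath}}'/>
  {{end}}<target dev='hda' bus='virtio'/>
</disk>
```
|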
Yup, I understand now it's a bit more complicated than I figured - just using the 'default' storage pool is not going to cut it.
I suppose I could have minikube prepare everything, then have a script which will stop the minikube VM, move the .raw file into the zfs pool, reconfigure the xml for the VM, and fire minikube back up. But it's not straightforward :) |
Here is the quick'n'simple solution (a consolidated sketch of the commands follows below):

1. Create minikube, and stop it.
2. Create your virsh pool volume in zfs (my default pool is zfs backed).
3. Copy the .raw minikube image from ~/.minikube/machines/minikube into the newly created zfs pool.
4. Edit the minikube virsh xml definition so the disk points at the new volume.
5. Start minikube again.

So, yes - I suppose minikube could allow --disk-type/--disk-source (or something along those lines), but it would expect that the pools/volumes are already created. That'd simplify creating multi-node minikube clusters too.
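For reference, a sketch of those steps, assuming the pool is named 'default', the profile is 'minikube', and the zvol lands under nvme-zfs/kvm (the .rawdisk filename and device paths are assumptions; adjust to your layout):

```shell
# 1. Create the cluster once so minikube generates its .rawdisk, then stop it.
minikube start --driver=kvm2
minikube stop

# 2. Create a volume in the zfs-backed 'default' pool (sized to match
#    minikube's default 20GB disk); with a zfs pool this creates a zvol.
virsh vol-create-as default minikube 20G

# 3. Copy the raw image into the zvol block device.
sudo dd if=$HOME/.minikube/machines/minikube/minikube.rawdisk \
        of=/dev/zvol/nvme-zfs/kvm/minikube bs=4M status=progress

# 4. Edit the domain so the disk is block-backed instead of file-backed:
virsh edit minikube
#    replacing the <disk type='file'> element with something like:
#      <disk type='block' device='disk'>
#        <driver name='qemu' type='raw' cache='default' io='threads'/>
#        <source dev='/dev/zvol/nvme-zfs/kvm/minikube'/>
#        <target dev='hda' bus='virtio'/>
#      </disk>

# 5. Bring the cluster back up.
minikube start
```
|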
What Happened?
My 'local lab' uses a zfs storage pool as the default libvirt-configured storage on the machine.
However, when I use minikube to start a k8s cluster locally, it puts the kvm image file into the ~/.minikube directory, creating a 20GB .rawdisk file there.
As there is a --kvm-network option for the kvm2 driver (for selecting a libvirt-configured network), it would be neat to have an option to choose the storage pool where minikube puts all the images it creates.
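Hypothetically, mirroring the existing flag (note: --kvm-storage-pool is made up here to illustrate the request; only --kvm-network exists today):

```shell
# Existing: select a pre-created libvirt network.
minikube start --driver=kvm2 --kvm-network=default

# Requested (hypothetical flag): select a pre-created libvirt storage pool.
minikube start --driver=kvm2 --kvm-storage-pool=default
```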
Attach the log file
No log file.
Operating System
Ubuntu
Driver
KVM2