
minikube with kvm2 driver does not provide option to select storage pool #18347

Open
msplival opened this issue Mar 10, 2024 · 7 comments
Labels
co/kvm2-driver: KVM2 driver related issues
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/feature: Categorizes issue or PR as related to a new feature.
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments


msplival commented Mar 10, 2024

What Happened?

My 'local lab' uses a ZFS storage pool as the default libvirt-configured storage on the machine.
However, when I use minikube to start a k8s cluster locally, it puts the KVM image file into the ~/.minikube directory, creating a 20 GB .rawdisk file there.

As there is a --kvm-network option for the kvm2 driver (for selecting a libvirt-configured network), it would be neat to have an option to choose the storage pool where minikube puts all the images it creates.
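For illustration only, something along these lines; --kvm-network exists today, while --kvm-storage-pool is a made-up flag name used here just to sketch the request:

    # Existing: choose the libvirt network used by the kvm2 driver
    minikube start --driver=kvm2 --kvm-network=default

    # Hypothetical flag for this request: choose the libvirt storage pool for the machine disk
    minikube start --driver=kvm2 --kvm-storage-pool=nvme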

Attach the log file

No log file.

Operating System

Ubuntu

Driver

KVM2

@afbjorklund afbjorklund added co/kvm2-driver KVM2 driver related issues kind/feature Categorizes issue or PR as related to a new feature. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Mar 17, 2024
afbjorklund (Collaborator) commented Mar 17, 2024

What is the required XML?

@afbjorklund afbjorklund added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Mar 17, 2024

msplival commented Mar 17, 2024

What is the required XML ?

Not sure what you mean. virsh pool-dumpxml will produce different XML depending on the local configuration.
For instance, on my home box I have an 'nvme' pool, which is just an ext4 directory mounted on an NVMe drive:

<pool type='dir'>
  <name>nvme</name>
  <uuid>46a6e47e-a175-4b52-96ed-cd7e8ca1ad60</uuid>
  <capacity unit='bytes'>943940464640</capacity>
  <allocation unit='bytes'>465904889856</allocation>
  <available unit='bytes'>478035574784</available>
  <source>
  </source>
  <target>
    <path>/srv/nvme/libvirt</path>
    <permissions>
      <mode>0755</mode>
      <owner>64055</owner>
      <group>133</group>
    </permissions>
  </target>
</pool>

Then, on my workstation laptop I have ZFS on a separate NVMe disk where I store KVM, Docker and LXD 'machines' (or containers). For KVM I don't have multiple pools created, just the default one, which looks like this:

<pool type='zfs'>
  <name>default</name>
  <uuid>55469119-f76b-432c-a60f-e21871fd9d7a</uuid>
  <capacity unit='bytes'>498216206336</capacity>
  <allocation unit='bytes'>185968124416</allocation>
  <available unit='bytes'>312248081920</available>
  <source>
    <name>nvme-zfs/kvm</name>
  </source>
  <target>
    <path>/dev/zvol/nvme-zfs/kvm</path>
  </target>
</pool>

However, even on my workstation laptop, minikube always puts the VM images inside the ~/.minikube/machines directory.
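For reference, the pool definitions above can be inspected with standard virsh commands (the pool names are just the ones from my setup):

    # List all defined storage pools
    virsh pool-list --all

    # Dump the XML definition of a specific pool, e.g. the default one
    virsh pool-dumpxml default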


afbjorklund commented Mar 17, 2024

Not sure what you mean

For the KVM network, there is a place in the template like this:

    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>

This would need some similar new template parameters?

    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'> <!-- needs "type" variable -->
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='{{.DiskPath}}'/> <!-- needs "pool" and "volume" -->
      <target dev='hda' bus='virtio'/>
    </disk>

https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
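Per that page, a volume-backed disk definition would look roughly like this (the pool and volume names are placeholders), so this is the kind of XML the template would have to be able to emit:

    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <!-- 'pool' and 'volume' name an existing libvirt storage pool and a volume inside it -->
      <source pool='default' volume='minikube'/>
      <target dev='hda' bus='virtio'/>
    </disk>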

Not sure how/if the iso image gets uploaded to the storage pool, though.

EDIT: add "type", exclude cdrom


msplival commented Mar 17, 2024

Are you referring to this XML template?

In one of my VMs (which I created with virt-install, without specifying the storage pool, as the default one is the ZFS one), the disk definition looks like this:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source dev='/dev/zvol/nvme-zfs/kvm/jam-intune'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>

The name of the VM is 'jam-intune', and virt-install automatically created the nvme-zfs/kvm/jam-intune volume.
I suppose that if the default storage pool were of 'dir' type, then the definition would look different.

From the template I linked above, it seems that minikube will always create a 'file'-type disk backed by raw files.
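For comparison, with a plain 'dir'-type pool the same VM would presumably end up with a file-backed definition, something like this (the file name is illustrative; the path is the target of my 'nvme' pool above):

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <!-- with a 'dir' pool, the volume is just a file under the pool's target path -->
      <source file='/srv/nvme/libvirt/jam-intune.raw'/>
      <target dev='vda' bus='virtio'/>
    </disk>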


afbjorklund commented Mar 18, 2024

Supporting source=dev directly is yet another requirement (in addition to pool), but the same template is involved.

file
The file attribute specifies the fully-qualified path to the file holding the disk.

block
The dev attribute specifies the fully-qualified path to the host device to serve as the disk.

volume
The underlying disk source is represented by attributes pool and volume.

So it would need one new attribute for "type", and three new ones for pool, volume and dev. And the template would need an "if".
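A rough sketch of how that branching could look in the domain template; the DiskType/DiskPool/DiskVolume/DiskDev parameter names are invented here for illustration and do not exist in the current driver:

    {{if eq .DiskType "volume"}}
    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source pool='{{.DiskPool}}' volume='{{.DiskVolume}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    {{else if eq .DiskType "block"}}
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source dev='{{.DiskDev}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    {{else}}
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    {{end}}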

msplival (Author) commented:

Yup, I understand now it's a bit more complicated than I figured - just using the 'default' storage pool is not going to cut it.
Because, when I use virt-install, or just 'click' through virt-manager, I don't have to specify anything, the ZFS volume is automagically created, and so on.
Here, one would need to create the volume in the ZFS pool first and then instruct minikube to use that specific pool/volume. Something like:

virsh vol-create-as default minikube 20G --allocation 10G

And then run something like minikube with --disk-type block --disk-source dev,/wherever/zfs/pool/volume/is

But it's not straightforward :)

I suppose I could have minikube prepare everything, then have a script which stops the minikube VM, moves the .raw file into the ZFS pool, reconfigures the XML for the VM, and fires minikube back up.


msplival commented Apr 1, 2024

Here is the quick'n'simple solution:

1. Create the minikube machine, and stop it:

    minikube start
    minikube stop

2. Create a volume in your libvirt pool (my default pool is ZFS-backed):

    virsh vol-create-as --pool default --name minikube --capacity 20G

3. Copy the minikube .rawdisk image from ~/.minikube/machines/minikube into the newly created volume:

    sudo dd if=~/.minikube/machines/minikube/minikube.rawdisk of=/dev/zvol/nvme-zfs/kvm/minikube bs=1M status=progress

4. Edit the minikube libvirt domain definition with virsh edit minikube, and change the hda disk definition so that its source points at the ZFS volume:

    <source dev='/dev/zvol/nvme-zfs/kvm/minikube'/>

So, now my disk definition looks like this:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
      <source dev='/dev/zvol/nvme-zfs/kvm/minikube'/>
      <target dev='hda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

And then just start minikube again: minikube start.

So, yes - I suppose minikube could allow --disk-type/--disk-source (or something along those lines), but it would expect the pools/volumes to already exist.

That'd simplify creating multi-node minikube clusters too.
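For a multi-node cluster, the volume pre-creation step would presumably just repeat per node; assuming minikube's usual node naming (minikube, minikube-m02, ...), something like:

    # Pre-create one volume per node in the 'default' pool (names are illustrative)
    virsh vol-create-as --pool default --name minikube --capacity 20G
    virsh vol-create-as --pool default --name minikube-m02 --capacity 20G
    virsh vol-create-as --pool default --name minikube-m03 --capacity 20G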
