removal of md pv fails when replacing w/ non-raid lvm pool #164

Open
dwlehman opened this issue Sep 14, 2020 · 1 comment

Comments

@dwlehman
Collaborator

It appears to be due to a failure to stop the array before wiping members.
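For reference, a minimal sketch of that ordering at the command line, assuming an array /dev/md0 built from /dev/sda1 and /dev/sdb1 (placeholder names): the array has to be deactivated before its member devices can be wiped.

```bash
# Minimal sketch (placeholder device names): deactivate the array first,
# then wipe the former members.
sudo mdadm --stop /dev/md0        # stop the array so the members are released
cat /proc/mdstat                  # confirm md0 is no longer listed
sudo wipefs --all /dev/sda1       # wipe signatures on the former members
sudo wipefs --all /dev/sdb1
```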

@AbdelAzizMohamedMousa

When removing an MD (software RAID) PV and replacing it with a non-RAID LVM pool, the array must be stopped before its members are wiped; if the array is still running, the wipe operation fails and the PV cannot be removed. Note that the LVM side of the cleanup (vgreduce/pvremove) has to happen first, while the array is still assembled and /dev/md0 still exists.

Here is an example of how to properly remove an MD PV and replace it with a non-RAID LVM pool:

Remove the PV from the volume group while the array is still assembled (move any extents off it first with pvmove if needed; if /dev/md0 is the only PV in the group, use vgremove instead):

```bash
sudo vgreduce myvg /dev/md0
```

Remove the physical volume:

```bash
sudo pvremove /dev/md0
```

Stop the array (and, if it has an entry in /etc/mdadm.conf or /etc/mdadm/mdadm.conf, remove that entry so the array is not reassembled at boot):

```bash
sudo mdadm --stop /dev/md0
```

Wipe the members:

```bash
sudo wipefs --all /dev/sda1
sudo wipefs --all /dev/sdb1
```

If the wiped disks are meant to back the new non-RAID pool, initialize them with pvcreate and add them to the volume group with vgextend before the next step.

Create the new logical volume:

```bash
sudo lvcreate -L 10G -n mypool myvg
```

This creates a new logical volume named mypool in the myvg volume group with a size of 10 GB.

Format the new logical volume with a filesystem of your choice:

```bash
sudo mkfs.ext4 /dev/myvg/mypool
```

Mount the new logical volume:

```bash
sudo mount /dev/myvg/mypool /mnt/mypool
```

This mounts the new logical volume at /mnt/mypool.
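To check the end state, something along these lines can be used (same placeholder names as above; exact output will vary):

```bash
cat /proc/mdstat                  # the old array should no longer appear
sudo pvs                          # /dev/md0 should be gone from the PV list
sudo lvs myvg                     # mypool should be listed as a logical volume
findmnt /mnt/mypool               # confirms the new LV is mounted
```

Note that the mount is not persistent; an /etc/fstab entry or systemd mount unit is needed for it to survive a reboot.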
