support btrfs RAID #122
should i kill your issue right away (regardless of content)? you have a reputation of not responding to issues!
What should I have added to the last reported issues? They were both enhancement ideas, where you'd indicated you didn't want to work on them or considered them overkill. So one can either keep such ideas on record, if they're considered good, in case someone comes along and picks them up, or close them, which probably means they're forgotten. Admittedly, I overlooked your further comments on #6. Sorry for that.
Well, I'm sorry that you have gotten so angry about this. You seemed rather unenthusiastic about these ideas, so I didn't think much further chit-chat in the comments would be welcomed. Anyway,... I wouldn't know what to do right now... it seems you didn't want to change the stuff from the hardening ideas,... the stuff from #112 would probably require some deeper changes to check_raid and you've objected to these already... and as for btrfs, I've already said that right now I wouldn't know myself how to properly do it. Also, I think it's a bit unfair to claim I've been generally unresponsive and haven't tried to help. Nevertheless... feel free to close any of these tickets.
Hey.
Now that btrfs gets more and more mainstream, its RAID0/1/10 is probably more or less production-ready, and its RAID5/6 is approaching that point sooner or later… it would be nice if support for checking that could be added, just like MD RAID is supported.
Unfortunately I cannot really tell what exactly to check (the documentation of btrfs is still… well… not as it should be).
First, one would probably need to look for any mounted btrfs filesystems, which probably works best via /proc/mounts and may e.g. look like:
/dev/sda2 / btrfs rw,noatime,space_cache,subvolid=258,subvol=/root 0 0
A btrfs filesystem may consist of multiple devices and may have multiple subvolumes, so the above /dev/sda2 isn't necessarily the only device, but rather just the one that was the "start" for mounting (the above filesystem actually has another device, /dev/sdb2).
That shouldn't be our concern, though, as the btrfs-progs handle this already.
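As a first step, the mount scan described above can be sketched as follows. check_raid itself is written in another language, so this Python sketch only illustrates the logic; the function name `btrfs_mounts` is mine, not part of any existing tool. It parses /proc/mounts-style text and decodes the octal escapes (e.g. `\040` for a space) that the kernel uses in mountpoint names:

```python
import re


def btrfs_mounts(proc_mounts_text):
    """Return (device, mountpoint, options) for every btrfs entry in
    text formatted like /proc/mounts."""
    mounts = []
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) < 4 or fields[2] != "btrfs":
            continue
        device, mountpoint, _fstype, options = fields[:4]
        # The kernel escapes spaces, tabs etc. as \0dd octal sequences.
        mountpoint = re.sub(r'\\([0-7]{3})',
                            lambda m: chr(int(m.group(1), 8)),
                            mountpoint)
        mounts.append((device, mountpoint, options.split(",")))
    return mounts
```

In a real check one would read `/proc/mounts` and feed its contents to this function, then inspect each resulting filesystem with btrfs-progs.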
With the subvolumes it's such a thing:
a) The same or different subvolumes of a filesystem can be mounted multiple times, so there would be "duplicate" entries in /proc/mounts
b) Even though it's not possible right now (AFAIC), it might be possible in the future that different subvols use different RAID levels (or no RAID)... so that needs to be kept in mind. In other words one would really need to use the btrfsprogs to check a filesystem whether anything of it has some redundancy.
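One way to "ask the btrfs-progs" about redundancy is to parse the output of `btrfs filesystem df <mountpoint>`, which reports the block-group profile per allocation type (lines like `Data, RAID1: total=10.00GiB, used=5.00GiB`). The exact output format can vary between btrfs-progs versions, so treat the parsing below as a sketch under that assumption; the profile set also reflects newer kernels (raid1c3/raid1c4 did not exist when this issue was filed):

```python
import re

# Profiles that survive the loss of a device. "single" and "raid0" do not;
# "dup" keeps two copies but on the same device, so it is excluded too.
REDUNDANT_PROFILES = {"raid1", "raid1c3", "raid1c4", "raid10", "raid5", "raid6"}


def parse_fi_df(output):
    """Parse `btrfs filesystem df` output into {allocation type: profile},
    e.g. {'Data': 'raid1', 'Metadata': 'raid1', ...}."""
    profiles = {}
    for line in output.splitlines():
        m = re.match(r'^(\w+), (\S+): total=', line)
        if m:
            profiles[m.group(1)] = m.group(2).lower()
    return profiles


def has_redundancy(profiles):
    """True only if both Data and Metadata use a redundant profile."""
    return all(profiles.get(t) in REDUNDANT_PROFILES
               for t in ("Data", "Metadata"))
```

Per point a), one would run this once per filesystem, not once per /proc/mounts entry, e.g. by deduplicating on the device (or better, the filesystem UUID) before invoking the command.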
Even more:
c) The RAID level may be converted. E.g. a disk is added, but all old data will still be non-redundant until the fs is balanced.
So we'd basically need to check whether all this conversion had been done already.
And similar, when a faulty device has been replaced and a rebuild happens.
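For point c), one signal is the output of `btrfs balance status <mountpoint>`. The phrasings matched below ("No balance found", "is running", "is paused") are assumptions based on common btrfs-progs versions, and the severity mapping is purely illustrative; a robust check should report anything it does not recognise rather than ignore it:

```python
def balance_state(output):
    """Classify `btrfs balance status` output into a Nagios-style state.
    String patterns are assumptions about btrfs-progs output."""
    text = output.strip()
    if text.startswith("No balance found"):
        return "OK"        # nothing pending: any conversion has completed
    if " is running" in text:
        return "WARNING"   # rebalance/conversion still in progress
    if " is paused" in text:
        return "CRITICAL"  # interrupted; data may still be mixed-profile
    return "UNKNOWN"       # unrecognised output: surface it, don't hide it
```

A rebuild after `btrfs replace` could be watched the same way via `btrfs replace status`, though its output format differs and would need its own parser.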
Cheers,
Chris.