Open
Description
On raid5/6, the command btrfs check --check-data-csum reports errors of the form:
mirror 2 bytenr NNNNNNN csum NN expected csum NN
ERROR: errors found in csum tree
For example:
# mkfs.btrfs --metadata raid10 --data raid5 /dev/sd[bcde] -f;\
mount /dev/sdb /mnt; dd if=/dev/urandom of=/mnt/zfile bs=1K count=70;\
umount /mnt; btrfs check --check-data-csum /dev/sdb;\
mount /dev/sdb /mnt; btrfs device stats /mnt; umount /mnt
btrfs-progs v5.10.1
See http://btrfs.wiki.kernel.org for more information.
Label: (null)
UUID: 7a01ce0e-63b8-4911-abcd-27eff8b921c3
Node size: 16384
Sector size: 4096
Filesystem size: 32.00GiB
Block group profiles:
Data: RAID5 3.00GiB
Metadata: RAID10 128.00MiB
System: RAID10 16.00MiB
SSD detected: no
Incompat features: extref, raid56, skinny-metadata
Runtime features:
Checksum: crc32c
Number of devices: 4
Devices:
ID SIZE PATH
1 8.00GiB /dev/sdb
2 8.00GiB /dev/sdc
3 8.00GiB /dev/sdd
4 8.00GiB /dev/sde
70+0 records in
70+0 records out
71680 bytes (72 kB, 70 KiB) copied, 0.00690105 s, 10.4 MB/s
Opening filesystem to check...
Checking filesystem on /dev/sdb
UUID: 7a01ce0e-63b8-4911-abcd-27eff8b921c3
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking csums against data
mirror 2 bytenr 173015040 csum 42 expected csum 18
mirror 2 bytenr 173019136 csum 247 expected csum 60
mirror 2 bytenr 173080576 csum 42 expected csum 177
mirror 2 bytenr 173084672 csum 247 expected csum 66
ERROR: errors found in csum tree
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 1056768 bytes used, error(s) found
total csum bytes: 72
total tree bytes: 131072
total fs tree bytes: 32768
total extent tree bytes: 16384
btree space waste bytes: 122423
file data blocks allocated: 925696
referenced 925696
[/dev/sdb].write_io_errs 0
[/dev/sdb].read_io_errs 0
[/dev/sdb].flush_io_errs 0
[/dev/sdb].corruption_errs 0
[/dev/sdb].generation_errs 0
[/dev/sdc].write_io_errs 0
[/dev/sdc].read_io_errs 0
[/dev/sdc].flush_io_errs 0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
[/dev/sdd].write_io_errs 0
[/dev/sdd].read_io_errs 0
[/dev/sdd].flush_io_errs 0
[/dev/sdd].corruption_errs 0
[/dev/sdd].generation_errs 0
[/dev/sde].write_io_errs 0
[/dev/sde].read_io_errs 0
[/dev/sde].flush_io_errs 0
[/dev/sde].corruption_errs 0
[/dev/sde].generation_errs 0
Writing more data produces more errors (observed counts > 60).
The data profiles raid0, raid1, raid10, single, raid1c3 and raid1c4 do not produce errors with --check-data-csum; only raid5/6 do.
The metadata profile does not matter.
Reproduced on Linux 5.10 and 5.11 (Arch Linux) and on openSUSE Tumbleweed.
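The profile matrix above can be checked with a loop over the original reproducer. This is a hedged sketch, not a polished test harness: the scratch devices (/dev/sd[bcde]) and mount point (/mnt) are assumptions taken from the example above, it fixes metadata to raid10 (since the metadata profile does not matter per the report), and the destructive part only runs when RUN_REPRO=1 is set by a root user.

```shell
#!/bin/sh
# Sketch: repeat the reproducer for each data profile and report whether
# btrfs check --check-data-csum flags errors. Assumed scratch setup below.

DEVS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"  # assumed disposable devices
MNT=/mnt                                    # assumed mount point

# Build the mkfs command line for a given data profile.
# Metadata is pinned to raid10; the report says it does not matter.
mkfs_cmd() {
    echo "mkfs.btrfs --metadata raid10 --data $1 $DEVS -f"
}

check_profile() {
    profile="$1"
    $(mkfs_cmd "$profile") >/dev/null || return 1
    mount /dev/sdb "$MNT"
    dd if=/dev/urandom of="$MNT/zfile" bs=1K count=70 2>/dev/null
    umount "$MNT"
    echo "=== data profile: $profile ==="
    # Only print the csum-related lines; say so explicitly if clean.
    btrfs check --check-data-csum /dev/sdb 2>&1 \
        | grep -E 'mirror|ERROR' || echo "no csum errors"
}

# Guard: the loop wipes the devices, so require explicit opt-in as root.
if [ "${RUN_REPRO:-0}" = 1 ]; then
    for p in single raid0 raid1 raid10 raid1c3 raid1c4 raid5 raid6; do
        check_profile "$p"
    done
fi
```

Per the observations above, only the raid5 and raid6 iterations should print mirror/ERROR lines; every other profile should report "no csum errors".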