Resize btrfs filesystem
To resize a btrfs filesystem, run the following command
btrfs filesystem resize max /app
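If the filesystem spans multiple devices, the size argument can be prefixed with a device id to grow one specific device; devid 2 below is only an example (look the ids up with btrfs fi show)
btrfs filesystem resize 2:max /app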
I kept getting the following error message during reboot
...a stop job is running for monitoring of lvm2 mirrors...
But the system has no LVM volume at all.
Some people said this service is meant to work around a bug with BTRFS snapshots.
I disabled it, because I don't currently use BTRFS snapshots either.
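Assuming the unit behind that stop job is lvm2-monitor.service (the systemd unit described as monitoring of LVM2 mirrors and snapshots), it can be disabled with
systemctl disable --now lvm2-monitor.service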
Check the device error counters and the filesystem layout
btrfs device stats /app
btrfs fi show /app
Convert to single and remove one disk
btrfs balance start -f -sconvert=single -mconvert=single -dconvert=single /app
btrfs device remove /dev/bcache0 /app
Add disk and convert to raid1
btrfs device add -f /dev/bcache0 /app
btrfs balance start -dconvert=raid1 -mconvert=raid1 /app
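A conversion balance can take a while on a large filesystem; its progress can be checked from another shell with
btrfs balance status /app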
# btrfs fi df /app
Data, RAID1: total=2.69GiB, used=2.51GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=317.94MiB, used=239.55MiB
GlobalReserve, single: total=12.03MiB, used=0.00B
#
If the output contains multiple block group profiles, it could be that a profile conversion using balance filters was interrupted, for example
Data, RAID1: total=2.03GiB, used=1.86GiB
Data, single: total=704.00MiB, used=665.56MiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=288.00MiB, used=239.56MiB
GlobalReserve, single: total=11.94MiB, used=0.00B
WARNING: Multiple block group profiles detected, see 'man btrfs(5)'.
WARNING: Data: single, raid1
Perform the rebalance again
# btrfs balance start -dconvert=raid1 -mconvert=raid1 /app
Done, had to relocate 12 out of 12 chunks
Scrub the filesystem to detect checksum errors
btrfs scrub start /app
btrfs scrub status /app
To correct the errors, first find the corrupted files, then restore them from backup or delete them
dmesg -T | grep BTRFS | grep 'checksum error' | grep path
Then reset the error counters to zero
btrfs device stats -z /app
Then scrub again.
The drawback of COW (copy on write) is fragmentation, because every write goes to a new device block. This is fine for SSDs, but not good on traditional rotating disks. Even on an SSD, if the block size is large, the amount of data written can be much larger than the size of the actual update. Because of this, it is recommended to disable copy-on-write for database and VM filesystems.
Disable it by mounting with nodatacow.
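A minimal fstab sketch for such a mount, reusing the zpool1 device name from below; the /srv/vmimages mount point is only an example, and note that nodatacow as a mount option affects the whole filesystem, not a single subvolume
/dev/mapper/zpool1 /srv/vmimages btrfs nodatacow,noatime 0 0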
The following facts should be considered
For an empty file, add the NOCOW file attribute (use the chattr utility with +C)
touch file1
chattr +C file1
For a directory with the NOCOW attribute set, new files created in it will inherit this attribute.
chattr +C directory1
For existing files, copy the original data into a pre-created NOCOW file, delete the original, and rename the copy back (see the sketch after these commands).
touch vm-image.raw
chattr +C vm-image.raw
fallocate -l 10G vm-image.raw
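A minimal sketch of the full recipe for an existing image, assuming the old file is vm-image.raw; the temporary file name is only an example
touch vm-image-nocow.raw
chattr +C vm-image-nocow.raw
dd if=vm-image.raw of=vm-image-nocow.raw bs=1M
rm vm-image.raw
mv vm-image-nocow.raw vm-image.raw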
A subvolume cannot be set to nocow separately; this is the official answer.
However, files inherit attributes from the directory they are created in. If a subvolume is mounted separately on a directory that has the nocow attribute, then newly created files will inherit the nocow attribute as well, regardless of the original volume's settings.
Create directory
mkdir /var/lib/nocow
chattr +C /var/lib/nocow
Create subvolume
mount -o autodefrag,compress=lzo,noatime,space_cache /dev/mapper/zpool1 /mnt/zpool1
btrfs subvolume create /mnt/zpool1/nocow
Mount subvolume
/dev/mapper/zpool1 /var/lib/nocow btrfs rw,noatime,compress=lzo,space_cache,autodefrag,subvol=nocow 0 0
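A quick check that new files under the mounted subvolume really pick up the attribute; lsattr comes from e2fsprogs and shows the C flag on btrfs
touch /var/lib/nocow/testfile
lsattr /var/lib/nocow/testfile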
No checksum, no integrity.
Nodatacow bypasses the very mechanisms that are meant to provide consistency in the filesystem. Normally, CoW operations are achieved by constructing a completely new metadata tree containing both changes (the references to the data, and the csum metadata), and then atomically changing the superblock to point to the new tree.
With nodatacow, the data and the checksum would have to be written to the physical medium in two separate writes. An I/O error between the two could leave the data and the checksum mismatched, and the file would then appear corrupted.