Category: Todo

Bcache Basic

Bcache Basic


After finishing the first round of writing, I think this needs to be described more clearly.

During the first setup, the document I was following confused me and I did not fully understand it. After observing the devices' behavior, I plan to review this document later.


apt install bcache-tools


Bcache adds one more layer between the actual filesystem and the block device (partition, RAID, etc.) on which the filesystem is located. It does this by relocating the filesystem behind a bcache header on the block device, which shifts the ordinary filesystem header and data 8 KiB back.

Note: A partition is used as the example

Ordinary filesystem partition = (Ordinary filesystem header + Ordinary filesystem data)
bcache partition              = bcache header (8KiB) + (Ordinary filesystem header + Ordinary filesystem data)
                                                       `-------------------- bcache data --------------------'

In this case, the device (partition) is a bcache partition; the bcache driver creates a new device, /dev/bcacheX, that exposes the contents without the bcache header, so the OS detects /dev/bcacheX as an ordinary filesystem.

The same method is widely used in disk encryption, where the encryption driver presents the decrypted contents of the underlying device to the OS as a newly created device.

Mount a filesystem bypassing the bcache header

Using the following method, the ordinary filesystem inside a bcache partition can be mounted by the ordinary filesystem driver, without the bcache driver.

  • Create Loopback Device
losetup -o 8192 /dev/loop0 /dev/[BCACHE DEVICE]
  • Mount Device
mount /dev/loop0 /mnt/[LOCATION]
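The two steps above can be wrapped into a small sketch. The offset is the size of the 8 KiB bcache superblock; the device and mount point names are placeholders, and the privileged commands are left commented so the arithmetic can be checked first.

```shell
# Compute the loop-device offset: the bcache superblock occupies the first 8 KiB.
OFFSET=$((8 * 1024))
echo "offset=${OFFSET}"   # 8192 bytes

# In practice (requires root; placeholder device and mount point):
# losetup -o "${OFFSET}" /dev/loop0 /dev/sdb1
# mount /dev/loop0 /mnt/recovery
```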


Backing Device

The backing device is the actual device holding the data, for example /dev/sdb1; it can be a 10TB hard disk, a disk RAID, software RAID, etc. It can be created with the following command

make-bcache -B /dev/sdb1

This also creates a bcache device, such as /dev/bcache0

Bcache device

The bcache device is the device created together with the backing device (/dev/sdb1), such as /dev/bcache0. All ordinary filesystem operations, such as mkfs, are performed on the bcache device (/dev/bcache0).

Caching Device

The caching device is the device used as cache, for example /dev/sdc1; it can be a 128GB SSD. It can be created with the following command

make-bcache -C /dev/sdc1

It has a cset.uuid, shown below

# bcache-super-show /dev/sdc1 | grep cset
cset.uuid       f0e01318-f4fd-4fab-abbb-d76d870503ec
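The cset.uuid can be picked out with awk for use in the attach step later. This sketch runs against a captured sample line; in practice, pipe the bcache-super-show output instead.

```shell
# Sample bcache-super-show output line (captured above); in practice:
#   bcache-super-show /dev/sdc1 | awk '$1 == "cset.uuid" {print $2}'
sample='cset.uuid       f0e01318-f4fd-4fab-abbb-d76d870503ec'
CSET_UUID=$(printf '%s\n' "$sample" | awk '$1 == "cset.uuid" {print $2}')
echo "$CSET_UUID"
```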

Before recreating a caching device, use the following command to clean up the old device header.

wipefs -a /dev/<device>


Attaching makes the caching device start working for the backing device.

# echo <caching_device_uuid> > /sys/block/<bcache_device>/bcache/attach
echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# or to the backing device
echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/sdb/sdb1/bcache/attach

Note: If the following error occurs, and the bcache-status command produces no output, run partprobe to rescan the partition tables

# bcache-status
# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
-bash: echo: write error: No such file or directory
# partprobe

*Note: bcache-status is a free bcache tool that can be downloaded from GitHub.

Operation on backing device and bcache device

The backing device and the bcache device have a 1-to-1 relationship, because they are created at the same time by one make-bcache -B command.

In fact, the bcache folders under the bcache device and the backing device are the same.

# ls -Hdi /sys/block/sdb/sdb1/bcache
64954 /sys/block/sdb/sdb1/bcache
# ls -Hdi /sys/block/bcache0/bcache
64954 /sys/block/bcache0/bcache

Conceptually, attaching targets the bcache device, but the bcache folder is created under the backing device.

# readlink -f /sys/block/bcache0/bcache

I think this is because the bcache device is a virtual device created when the backing device (the real device) is set up, so the actual device structure is assigned to the real device (the backing device).

With or without caching device

Without a caching device, the bcache driver directly translates ordinary filesystem reads/writes into backing device reads/writes.

With a caching device, the bcache driver uses the caching device before operating on the backing device.

ordinary filesystem operation => bcache driver => backing device operation
                                       |
                                caching device

So even without a caching device, bcache still operates correctly.

Attach caching device to multiple bcache devices

One caching device can serve multiple bcache devices, as below

# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache1/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache2/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache3/bcache/attach
# echo 75ff0598-7624-46f6-bcac-c27a3cf1a09f > /sys/block/bcache4/bcache/attach
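The repeated attach writes can be expressed as a loop. A sketch using the device names from above; it only prints the commands, since the real writes require root and the actual sysfs files.

```shell
# Build the attach commands for one cache set and several bcache devices.
CSET=4b05ce02-19f4-4cc6-8ca0-1f765671ceda
cmds=$(for dev in bcache1 bcache2 bcache3; do
  echo "echo ${CSET} > /sys/block/${dev}/bcache/attach"
done)
printf '%s\n' "$cmds"
```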

A bcache device has only one caching device

The UUID of the caching device attached to a bcache device can be found as below.

# ls -la /sys/block/<device>/bcache/cache
lrwxrwxrwx 1 root root 0 Jun 19 18:42 /sys/block/<device>/bcache/cache -> ../../../../../../../../fs/bcache/<UUID>
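The attached cache-set UUID is just the basename of that symlink's target. A sketch using a hypothetical target string; in practice, use readlink on the real sysfs path.

```shell
# Hypothetical symlink target; in practice:
#   link_target=$(readlink /sys/block/bcache0/bcache/cache)
link_target='../../../../fs/bcache/f0e01318-f4fd-4fab-abbb-d76d870503ec'
ATTACHED_UUID=${link_target##*/}   # strip everything up to the last '/'
echo "$ATTACHED_UUID"
```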


To detach a caching device, write 1 or the cache-set UUID to the bcache device or the backing device.

Safely remove the caching device from bcache device

echo cache-set-uuid > /sys/block/bcache0/bcache/detach
# or
echo cache-set-uuid > /sys/block/sdb/sdb1/bcache/detach

Detach the caching device from bcache device

echo 1 > /sys/block/bcache0/bcache/detach
# or
echo 1 > /sys/block/sdb/sdb1/bcache/detach


bcache/backing device

Stopping a bcache device is the same as stopping the backing device.

echo 1 > /sys/block/bcache0/bcache/stop
# or
echo 1 > /sys/block/sdb/sdb1/bcache/stop

After the bcache/backing device is stopped,

  • The /sys/block/sdb/sdb1/bcache folder disappears
  • The /sys/block/bcache0 virtual device disappears
  • The caching device is unaffected and remains registered in /sys/fs/bcache/<uuid>.

caching device

Stopping the caching device affects all attached caching, bcache, and backing devices.

echo 1 > /sys/fs/bcache/cache-set-uuid/stop

After the caching device is stopped, all bcache setup related to that caching device disappears

  • The /sys/block/sdb/sdb1/bcache folder disappears if /dev/sdb1 is an attached backing device
  • The bcache device /sys/block/bcache0 disappears if /dev/bcache0 is the attached bcache device
  • The caching device /sys/fs/bcache/<uuid> disappears

To resume

The first way to resume the whole setup is to run partprobe.

The second way is to use register to resume the devices one by one.


In fact, registering is needed on every boot, but attaching only has to be done once.

Registering is required if the caching or backing device is missing during system startup, or if it has been stopped manually.

Register the backing device as below

echo /dev/sdb1 > /sys/fs/bcache/register     # backing device

After registering, the system will

  • Create the /sys/block/sdb/sdb1/bcache folder
  • Create the /sys/block/bcache0 virtual device, if no caching device was attached before the stop or if the caching device has already been registered

If /sys/block/bcache0 is not created because the caching device is missing,

  • The /sys/block/bcache0 device will be created after the missing caching device is registered
  • Or use the following command to force /sys/block/bcache0 to be created and start running
echo 1 > /sys/block/sdb/sdb1/bcache/running

Warning: If you force start, all write cache on the caching device is lost, which can cause filesystem corruption

Register caching device

To register the caching device, use the following command

echo /dev/sdc1 > /sys/fs/bcache/register     # caching device

This creates the /sys/fs/bcache/<uuid> folder.

If the attached backing device has already been registered, /sys/block/bcache0 will be created and running.
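Re-registering both halves after a manual stop can be scripted. This sketch only prints the register commands (with the placeholder device names used above); running them needs root and the real devices.

```shell
# Build the register commands: backing device first, then caching device.
REGISTER=/sys/fs/bcache/register
cmds=$(for dev in /dev/sdb1 /dev/sdc1; do
  echo "echo ${dev} > ${REGISTER}"
done)
printf '%s\n' "$cmds"
```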

Ordinary filesystem operation

All ordinary filesystem operations are performed on the bcache device (/dev/bcache0), for example

mkfs.btrfs /dev/bcache0
mount /dev/bcache0 /mnt

Caching state

The caching state can be viewed with the following command

cat /sys/block/bcache0/bcache/state


  • no cache: no caching device is attached to the backing device
  • clean: everything is OK; the cache is clean
  • dirty: everything is set up fine, writeback is enabled, and the cache is dirty
  • inconsistent: you are in trouble; the backing device is not in sync with the caching device

Caching mode

There are 4 caching modes: writethrough, writeback, writearound, and none.

echo writeback > /sys/block/bcache0/bcache/cache_mode

Show caching device info

bcache-super-show /dev/sdXY

Writeback Percent

echo 100 > /sys/block/bcache0/bcache/writeback_percent

Dirty data

Shows how much data in the cache has not been written to the backing device.

cat /sys/block/sda/sda3/bcache/dirty_data

Flush cache to backing device

This might be required when filesystem maintenance is needed.

Run the following command to disable writeback mode

echo writethrough > /sys/block/bcache0/bcache/cache_mode

Wait until the state reports "clean"

watch cat /sys/block/bcache0/bcache/state
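The wait can be scripted instead of watched. A sketch, demonstrated here against a temporary file standing in for the real state file (/sys/block/bcache0/bcache/state):

```shell
# Poll a bcache state file until it reports "clean".
wait_clean() {
  f=$1
  until [ "$(cat "$f")" = "clean" ]; do
    sleep 1
  done
}

# Demonstration with a temp file; point it at the real sysfs path in practice.
state=$(mktemp)
echo clean > "$state"
wait_clean "$state" && RESULT="cache is clean"
echo "$RESULT"
rm -f "$state"
```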

Force a flush of the cache to the backing device

echo 0 > /sys/block/bcache0/bcache/writeback_percent


The /sys/fs/bcache/ folder does not exist

The bcache module was not loaded.

sh: echo: write error: Invalid argument

If dmesg shows

bcache: bch_cached_dev_attach() Couldn't attach sdc: block size less than set's block size

Then the --block 4k parameter was not set on one of the devices, and the default block sizes can mismatch.

Otherwise, the device might already be attached.

sh: echo: write error: No such file or directory

The UUID is not a valid cache set.

Other considerations

Boot from bcache device

Grub2 does not offer support for bcache, but it is fully supported by UEFI. Check the following link for details


A block layer cache (bcache)

Troubleshooting ping drop packet with same interval

Troubleshooting ping drop packet with same interval

The issue appears between a 10G QNAP switch and a TP-Link router. The TP-Link has a 2.5G Ethernet port, which connects to a 10G Ethernet port on the QNAP switch. Sometimes ping drops a packet, and the drops come at almost the same interval!

% ping 
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.464 ms
Request timeout for icmp_seq 1
64 bytes from icmp_seq=2 ttl=64 time=0.431 ms
64 bytes from icmp_seq=3 ttl=64 time=0.399 ms
64 bytes from icmp_seq=4 ttl=64 time=0.302 ms
64 bytes from icmp_seq=5 ttl=64 time=0.356 ms
64 bytes from icmp_seq=6 ttl=64 time=0.461 ms
64 bytes from icmp_seq=7 ttl=64 time=0.495 ms
64 bytes from icmp_seq=8 ttl=64 time=0.450 ms
64 bytes from icmp_seq=9 ttl=64 time=0.573 ms
64 bytes from icmp_seq=10 ttl=64 time=0.282 ms
64 bytes from icmp_seq=11 ttl=64 time=0.374 ms
64 bytes from icmp_seq=12 ttl=64 time=0.604 ms
64 bytes from icmp_seq=13 ttl=64 time=0.438 ms
64 bytes from icmp_seq=14 ttl=64 time=0.418 ms
64 bytes from icmp_seq=15 ttl=64 time=0.446 ms
64 bytes from icmp_seq=16 ttl=64 time=0.570 ms
64 bytes from icmp_seq=17 ttl=64 time=0.753 ms
64 bytes from icmp_seq=18 ttl=64 time=0.456 ms
64 bytes from icmp_seq=19 ttl=64 time=0.530 ms
64 bytes from icmp_seq=20 ttl=64 time=0.531 ms
64 bytes from icmp_seq=21 ttl=64 time=0.480 ms
64 bytes from icmp_seq=22 ttl=64 time=0.498 ms
64 bytes from icmp_seq=23 ttl=64 time=0.498 ms
64 bytes from icmp_seq=24 ttl=64 time=0.465 ms
Request timeout for icmp_seq 25
64 bytes from icmp_seq=26 ttl=64 time=0.493 ms
64 bytes from icmp_seq=27 ttl=64 time=0.520 ms
64 bytes from icmp_seq=28 ttl=64 time=0.462 ms
64 bytes from icmp_seq=29 ttl=64 time=0.459 ms
64 bytes from icmp_seq=30 ttl=64 time=0.535 ms
64 bytes from icmp_seq=31 ttl=64 time=0.468 ms
64 bytes from icmp_seq=32 ttl=64 time=0.505 ms
64 bytes from icmp_seq=33 ttl=64 time=0.539 ms
64 bytes from icmp_seq=34 ttl=64 time=0.515 ms
64 bytes from icmp_seq=35 ttl=64 time=0.504 ms
64 bytes from icmp_seq=36 ttl=64 time=0.519 ms
64 bytes from icmp_seq=37 ttl=64 time=0.415 ms
64 bytes from icmp_seq=38 ttl=64 time=0.415 ms
64 bytes from icmp_seq=39 ttl=64 time=0.384 ms
64 bytes from icmp_seq=40 ttl=64 time=0.443 ms
64 bytes from icmp_seq=41 ttl=64 time=0.456 ms
64 bytes from icmp_seq=42 ttl=64 time=0.349 ms
64 bytes from icmp_seq=43 ttl=64 time=0.345 ms
64 bytes from icmp_seq=44 ttl=64 time=0.272 ms
64 bytes from icmp_seq=45 ttl=64 time=0.456 ms
64 bytes from icmp_seq=46 ttl=64 time=0.523 ms
64 bytes from icmp_seq=47 ttl=64 time=0.553 ms
64 bytes from icmp_seq=48 ttl=64 time=0.389 ms
Request timeout for icmp_seq 49
64 bytes from icmp_seq=50 ttl=64 time=0.417 ms
64 bytes from icmp_seq=51 ttl=64 time=0.433 ms
64 bytes from icmp_seq=52 ttl=64 time=0.467 ms
64 bytes from icmp_seq=53 ttl=64 time=0.417 ms
--- ping statistics ---
54 packets transmitted, 51 packets received, 5.6% packet loss
round-trip min/avg/max/stddev = 0.272/0.461/0.753/0.083 ms

Possible issue

After a month, I found in the QNAP web console that flow control on the switch port kept flickering, sometimes enabled, sometimes disabled. Because of this behavior, I suspect the connection between them kept being re-established again and again.

I then disabled flow control on the switch side, because I could not find the port settings on the TP-Link router.

Flow control

Enabling flow control reduces packet drops, but auto-negotiation can cause issues. Most of the time both ends of an Ethernet link can be left to auto-negotiate, but it is preferable to set one side manually if possible, especially when the two sides have different maximum speeds.


Flow Control

TODO: Move dataset to another zpool in TrueNAS

Move dataset to another zpool in TrueNAS

In Synology, moving a shared folder to another volume is quite easy and can be done via the UI. In TrueNAS, I could not find any such task to select.

Duplicate dataset from snapshot

The workable solution is to use the zfs command over SSH to duplicate the dataset, then export the old pool and import the new one.

First make a snapshot poolX/dataset@initial, then use the following command to duplicate the dataset snapshot to the new zpool.

zfs send poolX/dataset@initial | zfs recv -F poolY/dataset

Update new dataset

Then make another snapshot poolX/dataset@incremental, and use the following command to send the incremental update to the new zpool.

zfs send -i initial poolX/dataset@incremental | zfs recv poolY/dataset

Activate new dataset

To make the new dataset usable, roll the new dataset back to the latest received snapshot.
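The migration steps above (snapshot, initial send, incremental update, rollback) can be collected into one reviewable sketch. The pool and dataset names are the placeholders used above; the commands are printed rather than executed so the plan can be checked before running it as root.

```shell
# Print the full migration command sequence for review.
SRC=poolX/dataset
DST=poolY/dataset
PLAN=$(cat <<EOF
zfs snapshot ${SRC}@initial
zfs send ${SRC}@initial | zfs recv -F ${DST}
zfs snapshot ${SRC}@incremental
zfs send -i initial ${SRC}@incremental | zfs recv ${DST}
zfs rollback ${DST}@incremental
EOF
)
printf '%s\n' "$PLAN"
```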

Update share

Change shared point to use new pool.

Update client

This is only required if the client depends on the server's filesystem structure, such as with NFS.


Migrate to smaller disk
*Note: the pv (Pipe Viewer) command is not installed in TrueNAS by default.

TODO: Network boot for MacBook Pro

Network boot for MacBook Pro


I tried iPXE, but it failed after booting into the kernel file.

Successfully load boot files

I was able to load the boot files by filename using a method similar to the one below in the iPXE configuration file tftp/boot.ipxe.

initrd ubuntu/12.10-desktop-${cpu_name}/casper/initrd.lz
chain ubuntu/12.10-desktop-${cpu_name}/casper/vmlinuz root=/dev/nfs boot=casper netboot=nfs nfsroot=${cpu_name} quiet splash

The error showed some sort of invalid-function issue. Internet users mentioned that it is caused by iPXE converting an EFI boot to an MBR boot when the firmware doesn't support it.

Able to boot into EFI disk

It looks like the MacBook Pro supports EFI disk boot only


Secure boot

Secure boot verifies whether the signature of the boot software is trusted by the firmware. This issue was fixed by copying a working boot partition (including /boot and /boot/efi) from another bootable image, such as an Ubuntu, Fedora, or Windows boot image.

Read kernel

I got stuck at this stage: the kernel was read, but execution failed with an invalid-function error, and I didn't have time to troubleshoot.


Grub boot

For network root partition boot, with /boot stored locally and the root partition / on an iSCSI disk, grub should be configured to survive upgrades, which includes the following requirements.

  • The kernel image should be a standard image, to avoid a manual kernel rebuild process
  • The kernel image should include the iSCSI driver
  • The kernel image should be able to configure a fixed IP address, to avoid unstable iSCSI connections and unauthorized access
  • The kernel image should be able to configure a bridge interface or macvlan interface, to support virtualization
  • The kernel image should depend as little as possible on network interface names, in case interface names change

Network boot

For iPXE boot, the iPXE firmware can be loaded by the PXE boot process or from a local disk; the following requirements should be considered.

  • The kernel specification and detection are not part of the iPXE configuration.
  • iPXE only detects the iSCSI disk; grub treats it as a local disk, then boots from this local disk (the iSCSI disk)
  • The iPXE iSCSI disk should be recognizable by grub as a local disk
  • Grub should not reset the network interface or renew the IP address
  • The MAC address should be the same in iPXE and Grub
  • The OS should lock down the network interface and not allow any services (NetworkManager, etc.) to manage it.
  • The OS should lock down the iSCSI disk
  • The requirements in the Grub boot section above also apply


Fix: System Found Unauthorized Changes on the Firmware, Operating System or UEFI Drivers

TODO: Using foobar2000 to verify music file integrity

Using foobar2000 to verify music file integrity

I couldn't find the install-component feature in the macOS version.

Download File Integrity Verifier

Download the component from the following location

File Integrity Verifier

Install component

Install the component using the following steps

  • Open the foobar2000 preferences dialog (click "File | Preferences" or use the CTRL+P keyboard shortcut).
  • Select the Components page.
  • Either click the Install... button and locate the component archive, or simply drag it on to the list.
  • Click OK ...
  • Restart.


Tested on following formats

  • flac


  • Failed to test on dff format.



TODO: Cannot set LC_CTYPE/LC_ALL to default locale: No such file or directory

Cannot set LC_CTYPE/LC_ALL to default locale: No such file or directory

Error description

The error below repeatedly appears when running apt upgrade.

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_CTYPE = "UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
Scanning processes...
Scanning candidates...
Scanning linux images...
/usr/bin/locale: Cannot set LC_CTYPE to default locale: No such file or directory
/usr/bin/locale: Cannot set LC_ALL to default locale: No such file or directory

Check the /etc/default/locale file,

#  File generated by update-locale

it doesn't contain the following lines


Tried but failed

I tried running the following commands, but the errors are still there.

locale-gen "en_US.UTF-8"
dpkg-reconfigure locales

I also added the following lines in /etc/environment and /etc/default/locale, but it still failed


TODO: Change docker container mapping port

Change docker container mapping port

To change a running container's mapped port, with or without recreating the container.

By recreating container

Stop and commit the running container, then run a new container from the new image.

This requires changing the image name and knowing the original docker run command parameters.

docker stop test01
docker commit test01 test02
docker run -p 8080:8080 -td test02

Modify configuration file

Stop the container and the docker service, then change the container configuration file hostconfig.json. After that, start the docker service and the container.

This requires updating the documented docker run command.

  1. Stop docker.
docker stop test01
systemctl stop docker
  2. Edit the hostconfig.json file
vi /var/lib/docker/containers/[hash_of_the_container]/hostconfig.json

or the following file when using snap

  3. Start docker
systemctl start docker
docker start test01
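For reference, the port mapping lives under the PortBindings key of hostconfig.json. Below is a hypothetical fragment (the exact layout varies by Docker version) remapping container port 8080/tcp to host port 9090; the sketch just prints it for inspection.

```shell
# Show the hostconfig.json fragment to edit (hypothetical values).
FRAGMENT=$(cat <<'EOF'
"PortBindings": {
  "8080/tcp": [
    { "HostIp": "", "HostPort": "9090" }
  ]
}
EOF
)
printf '%s\n' "$FRAGMENT"
```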

TODO: Synology SSD Cache Issues

Synology SSD Cache Issues

Synology SSD Cache has two issues, as below

  • Unable to use one disk/array to support multiple volumes.

    • No answer from the Internet; some people mentioned that it is a new feature request.
    • A possible solution is to create a partition/volume on the SSD storage pool, then use the volume as the cache device.
    • Synology uses LVM cache; I haven't checked whether native Linux can do this or not.
  • Utilization of the cache is very low, about 5GB on a frequently used volume, such as volume1.

    • Improved in DSM 7, which supports the Pin all Btrfs metadata option. But I haven't validated the utilization.