Bcache Basic

Update

After finishing the first round of writing, I think this needs to be described in a clearer way.

During the first setup, I was very confused by the document I was following and did not fully understand it. After observing how the devices behave, I plan to review and revise this document later.

Install

apt install bcache-tools

Concept

Bcache adds one more layer between the actual filesystem and the block device (partition, RAID, etc.) that the filesystem lives on. It does this by placing a bcache header in front of the ordinary filesystem in the block device, which shifts the ordinary filesystem header and data 8KiB further into the device.

Note: A partition is used as the example

Ordinary filesystem partition = (Ordinary filesystem header + Ordinary filesystem data)
bcache partition = bcache header (8KiB) + (Ordinary filesystem header + Ordinary filesystem data)
                                          -------------------------------------------------------
                                                                  bcache data

In this case, the device (partition) is presented as a bcache partition, and the bcache driver creates a new device called /dev/bcacheX that excludes the bcache header, so the OS detects /dev/bcacheX as an ordinary filesystem.

This method is widely used in disk encryption as well, where the encryption driver presents the decrypted data to the OS as a newly created device.

Mount a filesystem bypassing the bcache header

With the following method, the ordinary filesystem inside a bcache partition can be mounted by the ordinary filesystem driver without the bcache driver.

  • Create Loopback Device
losetup -o 8192 /dev/loop0 /dev/[BCACHE DEVICE]
  • Mount Device
mount /dev/loop0 /mnt/[LOCATION]
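To undo this afterwards, unmount the filesystem and detach the loop device (a minimal sketch using the same placeholders as above):

umount /mnt/[LOCATION]
losetup -d /dev/loop0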

Devices

Backing Device

The backing device is the actual device holding the data, for example /dev/sdb1. It can be a 10TB hard disk, a disk RAID, a software RAID, etc. It can be created with the following command

make-bcache -B /dev/sdb1

This also creates a bcache device, such as /dev/bcache0
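To confirm, the new bcache device should show up under the backing partition (a quick check; device names follow the examples in this article):

lsblk /dev/sdb1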

Bcache device

The bcache device is the device created together with the backing device (/dev/sdb1), such as /dev/bcache0. All ordinary filesystem operations, such as mkfs, are performed on the bcache device (/dev/bcache0).

Caching Device

The caching device is the device used as a cache, for example /dev/sdc1; it can be a 128GB SSD. It can be created using the following command

make-bcache -C /dev/sdc1

It has a cset.uuid, as shown below

# bcache-super-show /dev/sdc1 | grep cset
cset.uuid       f0e01318-f4fd-4fab-abbb-d76d870503ec

Before recreating a caching device, use the following command to clean up the old device header.

wipefs -a /dev/<device>

Attach

Attaching makes the caching device start working for the backing device.

# echo <caching_device_uuid> > /sys/block/<bcache_device>/bcache/attach
echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# or to the backing device
echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/sdb/sdb1/bcache/attach

Note: If the following error occurs and the bcache-status command shows no output, run the partprobe command to rescan the partition tables

# bcache-status
#
# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
-bash: echo: write error: No such file or directory
# partprobe

Note: bcache-status is a free bcache tool that can be downloaded from GitHub.

Operation on backing device and bcache device

The backing device and the bcache device have a 1-to-1 relationship, because they are created at the same time by one make-bcache -B command.

In fact, the bcache folders under the bcache device and the backing device are the same.

# ls -Hdi /sys/block/sdb/sdb1/bcache
64954 /sys/block/sdb/sdb1/bcache
# ls -Hdi /sys/block/bcache0/bcache
64954 /sys/block/bcache0/bcache

I think the correct way to think about it is attaching to the bcache device, but the bcache folder is created under the backing device.

# readlink -f /sys/block/bcache0/bcache
/sys/devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb/sdb1/bcache

I think this is because the bcache device is a virtual device created during backing device (real device) creation, so the actual device structure is assigned to the real device (the backing device).

With or without caching device

Without a caching device, the bcache driver directly translates ordinary filesystem reads/writes into backing device reads/writes.

With a caching device, the bcache driver uses the caching device before operating on the backing device.

ordinary filesystem operation => bcache driver => bcache filesystem operation => backing device
                                       ||
                                       ||
                                 caching device

So even without a caching device, bcache still operates correctly.
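For example, a filesystem can be created and mounted on the bcache device before any caching device is attached (a sketch, reusing the commands and device names from this article):

mkfs.btrfs /dev/bcache0                  # works with no caching device attached
mount /dev/bcache0 /mnt
cat /sys/block/bcache0/bcache/state      # reports "no cache"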

Attach caching device to multiple bcache devices

One caching device can support multiple bcache devices, as shown below

# echo f0e01318-f4fd-4fab-abbb-d76d870503ec > /sys/block/bcache0/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache1/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache2/bcache/attach
# echo 4b05ce02-19f4-4cc6-8ca0-1f765671ceda > /sys/block/bcache3/bcache/attach
# echo 75ff0598-7624-46f6-bcac-c27a3cf1a09f > /sys/block/bcache4/bcache/attach

A bcache device has only one caching device

The UUID of the caching device attached to a bcache device can be found as below.

# ls -la /sys/block/<device>/bcache/cache
lrwxrwxrwx 1 root root 0 Jun 19 18:42 /sys/block/<device>/bcache/cache -> ../../../../../../../../fs/bcache/<UUID>

Detach

To detach a caching device, write 1 or the cache-set UUID to the detach file of the bcache device or the backing device

Safely remove the caching device from the bcache device

echo cache-set-uuid > /sys/block/bcache0/bcache/detach
# or
echo cache-set-uuid > /sys/block/sdb/sdb1/bcache/detach

Detach the caching device from the bcache device

echo 1 > /sys/block/bcache0/bcache/detach
# or
echo 1 > /sys/block/sdb/sdb1/bcache/detach

Stop

bcache/backing device

Stopping a bcache device is the same as stopping the backing device.

echo 1 > /sys/block/bcache0/bcache/stop
# or
echo 1 > /sys/block/sdb/sdb1/bcache/stop

After stopping the bcache/backing device,

  • The /sys/block/sdb/sdb1/bcache folder disappears
  • The /sys/block/bcache0 virtual device disappears
  • There is no impact on the caching device; it is still registered in /sys/fs/bcache/<uuid>

caching device

Stopping the caching device will impact all related caching, bcache and backing devices

echo 1 > /sys/fs/bcache/cache-set-uuid/stop

After stopping the caching device, all bcache setup related to that caching device disappears

  • The /sys/block/sdb/sdb1/bcache folder disappears if /dev/sdb1 is an attached backing device
  • The bcache device /sys/block/bcache0 disappears if /dev/bcache0 is an attached bcache device
  • The caching device /sys/fs/bcache/<uuid> disappears

To resume

The first way to resume the whole setup is to run partprobe.

The second way is to use register to resume the devices one by one.
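For example (a sketch, reusing the example devices from this article):

partprobe                                    # way 1: rescan partition tables
echo /dev/sdb1 > /sys/fs/bcache/register     # way 2: register the backing device
echo /dev/sdc1 > /sys/fs/bcache/register     # way 2: register the caching device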

Register

In fact, registering is needed on every bootup, but attaching only has to be done once.

Registering is required if the caching or backing device was missing during system startup, or if the devices were stopped manually.

Register the backing device as below

echo /dev/sdb1 > /sys/fs/bcache/register     # backing device

After registering, the system will

  • Create the /sys/block/sdb/sdb1/bcache folder
  • Create the /sys/block/bcache0 virtual device, if no caching device was attached before the stop, or if the caching device has already been registered

If /sys/block/bcache0 is not created due to a missing caching device,

  • The /sys/block/bcache0 device will be created after the missing caching device is registered
  • Or use the following command to force /sys/block/bcache0 to be created and start running
echo 1 > /sys/block/sdb/sdb1/bcache/running

Warning: If you force-start, all write cache in the caching device will be lost, which can cause filesystem corruption

Register caching device

To register the caching device, the following command can be used

echo /dev/sdc1 > /sys/fs/bcache/register     # caching device

It will create the /sys/fs/bcache/<uuid> folder.

If the attached backing device has already been registered, /sys/block/bcache0 will be created and start running.

Ordinary filesystem operation

All ordinary filesystem operations are performed on the bcache device (/dev/bcache0), for example

mkfs.btrfs /dev/bcache0
mount /dev/bcache0 /mnt
...

Caching state

The caching state can be viewed using the following command

cat /sys/block/bcache0/bcache/state

Output:

  • no cache: no caching device has been attached to this backing device
  • clean: everything is ok, and the cache is clean
  • dirty: everything is set up fine, writeback is enabled, and the cache is dirty
  • inconsistent: you are in trouble, because the backing device is not in sync with the caching device

Caching mode

There are four caching modes: writethrough, writeback, writearound, and none. For example, to switch to writeback mode:

echo writeback > /sys/block/bcache0/bcache/cache_mode
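The currently selected mode can be checked by reading the same file (a sketch; the active mode is shown in square brackets):

cat /sys/block/bcache0/bcache/cache_mode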

Show caching device info

bcache-super-show /dev/sdXY

Writeback Percent

echo 100 > /sys/block/bcache0/bcache/writeback_percent

Dirty data

Shows how much data in the cache has not yet been written to the backing device.

cat /sys/block/sda/sda3/bcache/dirty_data

Flush cache to backing device

This might be required if filesystem maintenance is needed.

Run the following command to disable writeback mode

echo writethrough > /sys/block/bcache0/bcache/cache_mode

Wait until the state reports "clean"

watch cat /sys/block/bcache0/bcache/state

Force flush of cache to backing device

echo 0 > /sys/block/bcache0/bcache/writeback_percent

Errors

The /sys/fs/bcache/ folder does not exist

The bcache module was not loaded.

sh: echo: write error: Invalid argument

If dmesg shows

bcache: bch_cached_dev_attach() Couldn't attach sdc: block size less than set's block size

Then the --block 4k parameter was not set on both devices, and the default block sizes do not match.

Otherwise, the device might already be attached.

sh: echo: write error: No such file or directory

The UUID is not a valid cache.

Other considerations

Boot from bcache device

Grub2 does not offer support for bcache, but it is fully supported by UEFI. Check the following link for details

https://wiki.archlinux.org/title/Bcache

References

Bcache
A block layer cache (bcache)
bcache-status

QEMU agent `qemu-ga` on Ubuntu with high CPU utilization

The qemu-ga process consistently shows 99% CPU utilization.

Fix by restart

Restart the qemu-ga agent, as shown below.
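A minimal sketch, assuming the agent runs as the systemd service qemu-guest-agent (the usual service name on Ubuntu):

systemctl restart qemu-guest-agent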

Fix by uninstall

In Proxmox VE, the qemu-guest-agent is mainly used for two things:

  • To properly shut down the guest, instead of relying on ACPI commands or Windows policies
  • To freeze the guest file system when making a backup (on Windows, this uses the Volume Shadow Copy Service, VSS)

If uninstalling is the permanent solution, make sure to untick Use QEMU Guest Agent under the VM options.
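A minimal sketch of the uninstall itself, assuming the Ubuntu package name qemu-guest-agent:

apt remove qemu-guest-agent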

References

qemu-agent 100% CPU usage
Qemu-guest-agent

Exclude files when running `rmlint`

To exclude files, use the find command to filter them out, then pass - to rmlint in place of a folder name so that rmlint reads the file list from stdin

$ find /target/dir -type f ! -name '*.nib' ! -name '*.icon' ! -name '*.plist' | rmlint [options] -

To search only a specific type of file, the following command can be used:

find /mm -iname "*.DFF" -type f | rmlint -T df --config=sh:handler=hardlink -

References

How do I exclude/ignore specific file types/extensions with rmlint?

Update NextCloudPi

If NextCloudPi, or the NextCloud instance run by NextCloudPi in Docker, is updated through the NextCloudPi WebUI, errors can occur that lead to a "user not found" error. The correct way is to recreate the container using a new image.

Note: /data in the container must be mapped or backed up.

Update NextCloudPi Image

docker image pull ownyourbits/nextcloudpi-x86

Remove existing container

docker stop nextcloudpi
docker rm nextcloudpi

Recreate container

Use the previous docker run parameters and make sure /data is mapped.

docker run -d -p 4443:4443 -p 443:443 -p 80:80 -v /app/nc/data:/data --name nextcloudpi ownyourbits/nextcloudpi-x86 $IP

Update NextCloudPi

Log in to NextCloudPi; it will update itself. Wait for the pigz process to complete.
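To check whether the update is still running, the container processes and logs can be inspected (a sketch; the container name nextcloudpi comes from the run command above):

docker top nextcloudpi | grep pigz
docker logs -f nextcloudpi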

Restore data from backup (If required)

Go to Backup => nc-backup-auto in the NextCloudPi WebUI to find the backup path, and restore the latest backup if required.

Manual Update NextCloud

Go to Updates => nc-update-nextcloud in the NextCloudPi WebUI to update NextCloud manually.

Install Ansible for Ubuntu

Install pip

Check whether pip is installed

python3 -m pip -V

Install pip as root

apt install python3-pip

Install ansible

Install ansible as a normal user

python3 -m pip install --user ansible

Note: Ignore the warning message about .local/bin; the folder will be created and added to the PATH after re-login
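To verify the installation after re-login (a simple check, assuming ~/.local/bin is on the PATH by then):

ansible --version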

Upgrade ansible

python3 -m pip install --upgrade --user ansible

References

Installing Ansible

Convert Ubuntu VM to Proxmox

This describes how to convert an Ubuntu VM from VMware to Proxmox.

VM creation

The following hardware options can be considered

  • BIOS: SeaBIOS (should be able to see the Grub menu)
  • Machine: Default (i440fx)
  • SCSI Controller: VirtIO SCSI (it might not be used, since sata0 is used for the disk)
  • Hard Disk (sata0): disk_image_file
  • Network Device (net0): vmxnet3=<mac_address> (this is the default for VMware; other types can be used too)

Convert the VMware disk to a Proxmox disk and attach the disk to the new VM

qm importdisk 121 ubuntu.vmdk pool240ssd --format qcow2

Attach the disk as sata0.
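A minimal sketch of attaching the imported disk via the CLI (the volume name vm-121-disk-0 is an assumption; use the name reported by qm importdisk):

qm set 121 --sata0 pool240ssd:vm-121-disk-0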

Boot

After boot, the system shows a GUI error screen; press Ctrl + Alt + F3 to switch to console mode.

Note: Press Shift to activate the Grub menu if required

Network

Find out the new network interface UUID

nmcli conn

Rename the NetworkManager connection file

cd /etc/NetworkManager/system-connections
mv Wired\ connection\ 1-<old_uuid>.nmconnection Wired\ connection\ 1-<new_uuid>.nmconnection

Update nmconnection file

[connection]
id=<new_interface_name>
uuid=<new_uuid>
type=ethernet
autoconnect-priority=-999
interface-name=<new_interface_name>
permissions=
timestamp=1628151710

[ethernet]
mac-address-blacklist=

[ipv4]
address1=192.168.1.232/24,192.168.1.254
dns=192.168.1.250;8.8.8.8;
dns-search=
method=manual

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=disabled

[proxy]
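After editing, reload the connection files so NetworkManager picks up the change (a sketch, assuming the interface is managed by NetworkManager):

nmcli connection reload
systemctl restart NetworkManager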

Errors

No login GUI

After boot, only a white screen with an error message appears; this was fixed by running apt update and upgrade.

First, update the /etc/apt/sources.list file and replace all repository URLs with old-releases.ubuntu.com, for example as sketched below.
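A sketch using sed (an assumption: the existing entries point at *.archive.ubuntu.com and security.ubuntu.com; adjust the patterns to the mirrors actually in the file):

sed -i -e 's|[a-z.]*archive.ubuntu.com|old-releases.ubuntu.com|g' -e 's|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list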

Then run the following commands

apt update
apt upgrade -y

Update Password Life Time in Oracle Database

For a testing database, disabling the password life time avoids the following error

ORA-28002: the password will expire within 7 days

Login as sys

Use the sys password (created during database instance creation) to log in as sysdba

sqlplus sys@xepdb1 as sysdba

Find out the profile for the user

select username, profile from dba_users where username like '<user_id>';

Check profile settings

select RESOURCE_NAME,resource_type,LIMIT from dba_profiles where PROFILE='<profile_name>';

Update profile

ALTER PROFILE <profile_name> LIMIT PASSWORD_LIFE_TIME UNLIMITED;
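To confirm the change, re-run the profile query from above; PASSWORD_LIFE_TIME should now report UNLIMITED:

select RESOURCE_NAME, LIMIT from dba_profiles where PROFILE='<profile_name>' and RESOURCE_NAME='PASSWORD_LIFE_TIME';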

Check the password status

SELECT ACCOUNT_STATUS FROM DBA_USERS WHERE USERNAME='<user_id>';

Change password

# sqlplus <user_id>/<pwd>@xepdb1
...

SQL> password
Changing password for <user_id>
Old password:
New password:
Retype new password:
Password changed
SQL> quit

References

How to resolve ORA-28002: the password will expire

Convert Oracle Linux 7.9 to Proxmox

This describes how to convert Oracle Linux 7.9 from VMware to Proxmox.

VM creation

The following hardware options can be considered

  • BIOS: SeaBIOS (should be able to see the Grub menu)
  • Machine: Default (i440fx)
  • SCSI Controller: VirtIO SCSI (it might not be used, since sata0 is used for the disk)
  • Hard Disk (sata0): disk_image_file
  • Network Device (net0): vmxnet3=<mac_address> (this is the default for VMware; other types can be used too)

Convert the VMware disk to a Proxmox disk and attach the disk to the new VM

qm importdisk 121 oracle18c.vmdk pool240ssd --format qcow2

Attach the disk as sata0.

Boot

Select the item Oracle Linux Server (0-rescue-ed95572bd80641d79f83cd91e03c0283 with Linux) 7.9 from the Grub menu to boot into rescue mode.

Note: Other options were tried; all produced errors and failed to boot

Kernel

Find Kernel Package

Log in as a valid user, then find the kernel to be used

rpm -q -a | grep kernel | sort

The following list was returned

kernel-3.10.0-1160.25.1.el7.x86_64
kernel-3.10.0-1160.36.2.el7.x86_64
kernel-3.10.0-1160.el7.x86_64
kernel-tools-3.10.0-1160.36.2.el7.x86_64
kernel-tools-libs-3.10.0-1160.36.2.el7.x86_64
kernel-uek-5.4.17-2102.203.6.el7uek.x86_64
kernel-uek-5.4.17-2102.204.4.2.el7uek.x86_64
kernel-uek-5.4.17-2102.204.4.4.el7uek.x86_64

Choose the latest one, which is also the Unbreakable Enterprise Kernel (UEK)

Recreate Grub kernel files

Find the scripts in the kernel package

rpm -q kernel-uek-5.4.17-2102.204.4.4.el7uek.x86_64 --scripts

The following posttrans scriptlet is shown

...
posttrans scriptlet (using /bin/sh):
/usr/sbin/new-kernel-pkg --package kernel --mkinitrd --dracut --depmod --update 5.4.17-2102.204.4.4.el7uek.x86_64 || exit $?
/usr/sbin/new-kernel-pkg --package kernel --rpmposttrans 5.4.17-2102.204.4.4.el7uek.x86_64 || exit $?
...

Run the above new-kernel-pkg commands to rebuild the Grub files, then reboot the system; the Grub menu now includes the recreated kernel entry.

Note: The error Unable to open file: /etc/keys/x509_ima.der (-2) can be ignored

Network

You can configure the network interface the same as in VMware; this avoids reconfiguring the network settings inside the guest.

Interface Type

Check the VMware vmx file to find the network interface type, then set the same type in Proxmox

ethernet0.virtualDev = "vmxnet3"

Mac Address

You can set the MAC address using the value from the VMware configuration

ethernet0.generatedAddress = "00:11:22:33:44:55"

Interface Name

Find the interface configuration file in /etc/sysconfig/network-scripts, as below.

/etc/sysconfig/network-scripts/ifcfg-ens192

The interface name is ens192

Create the file /etc/udev/rules.d/70-custom-ifnames.rules with the following contents:

SUBSYSTEM=="net",ACTION=="add",ATTR{address}=="00:11:22:33:44:55",ATTR{type}=="1",NAME="ens192"

Then reboot the server and check the interface and IP address using the ip a command.

References

Consistent network interface device naming

Remove orphan disks in Proxmox

If a disk move is cancelled in Proxmox, an orphaned disk is created. It is not shown in the VM hardware configuration, and it cannot be removed from the storage section. If you try to remove it, you get an error saying the disk is attached to a VM.

To remove it, a disk rescan is required.

Rescan

The following command reattaches the orphan disk to the VM, after which it can be removed.

qm rescan --vmid <vm_id>

Note: The rescan should be done on the Proxmox node that holds the VM configuration; otherwise, a "could not find the VM configuration file" error will appear.

References

https://forum.proxmox.com/threads/cancelled-disk-move-orphaned-disk.96650/