Category: Computer

Computer is miraculous!

Enable zRAM as swap in Linux

The problem with swap on an SD-booted OS, such as on the Raspberry Pi 4, is that it is slow and increases SD write counts; an SD card is both slower than a hard disk and more expensive. The Raspberry Pi 4 has 8 GB of RAM, enough for normal operation, but without swap turned on there is no visibility into whether current memory usage would cause swapping.

Traditional swap space

A fixed swap partition is required when using traditional swap space. Some facts are listed below, with a setup sketch after the list.

  • A fixed swap partition is required
  • It is hard to resize or move
  • It wastes storage space if not in use most of the time
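
For reference, a minimal sketch of the traditional setup; the partition name is an example.

# /dev/sda2 is an example partition reserved for swap
sudo mkswap /dev/sda2
sudo swapon /dev/sda2
# /etc/fstab entry to enable it at boot
/dev/sda2 none swap sw 0 0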

Loopback device as swap

To have a dynamic swap device without a fixed partition, create a regular file and turn it into a loopback block device for swap. The steps are as below, with a command sketch after the list.

  • Create a file with a fixed size using dd or another command
  • Create a loopback device on the newly created file
  • Initialize swap on the loopback device using the mkswap command
  • Change /etc/fstab to point to the new device
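
A minimal sketch of these steps; the backing file path, size, and loop device are examples.

# Create a 1 GiB backing file
sudo dd if=/dev/zero of=/swap.img bs=1M count=1024
# Attach it as a loopback block device
sudo losetup /dev/loop0 /swap.img
# Initialize and enable swap on the loop device
sudo mkswap /dev/loop0
sudo swapon /dev/loop0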

The issue:

  • The loopback device needs to be initialized every time after reboot

File as swap

In fact, swap can be created directly on a file, as below; a sketch follows the list.

  • Create a file with a fixed size using dd or another command
  • Initialize swap on the file using the mkswap command
  • Change /etc/fstab to point to that file
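
A minimal sketch; the file path and size are examples.

sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# /etc/fstab entry
/swapfile none swap sw 0 0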

The issues:

  • Still wastes space if the swap is not in use
  • The size is hard to adjust
  • Manual tasks are involved

dphys-swapfile

The dphys-swapfile package can be installed to automate the tasks described above. It is not an entry in /etc/fstab, but a service; a command sketch follows the list.

  • Install the dphys-swapfile package
  • Adjust the config in /etc/dphys-swapfile
  • Enable the dphys-swapfile service
  • Run the dphys-swapfile <swapon|swapoff> command as needed
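
A minimal sketch of these steps on a Debian-based system; the swap size value is an example.

sudo apt install dphys-swapfile
# In /etc/dphys-swapfile, set for example CONF_SWAPSIZE=1024 (in MB)
sudo systemctl enable dphys-swapfile
sudo dphys-swapfile swapon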

The issue:

  • Still wastes space if the swap is not in use

zRAM

The zRAM module is installed by default, and the service is managed using systemd.

  • Check that the zram module is available
modprobe zram
lsmod | grep zram
  • Add module and set module options
echo zram > /etc/modules-load.d/zram.conf
echo "options zram num_devices=1" > /etc/modprobe.d/zram.conf
  • Create the zram0 device at boot by adding the following line to /etc/udev/rules.d/99-zram.rules
KERNEL=="zram0", ATTR{disksize}="512M", TAG+="systemd"
  • Create systemd service file /etc/systemd/system/zram.service
[Unit]
Description=Swap with zram
After=multi-user.target

[Service]
Type=oneshot 
RemainAfterExit=true
ExecStartPre=/sbin/mkswap /dev/zram0
ExecStart=/sbin/swapon /dev/zram0
ExecStop=/sbin/swapoff /dev/zram0

[Install]
WantedBy=multi-user.target
  • Enable the service, then reboot
sudo systemctl enable zram
  • Check swaps
cat /proc/swaps
swapon -s

Issues with zram

  • Swap space is used when memory is not enough, but zRAM swap itself uses RAM
  • It is effectively the same solution as compressing RAM

References

How to enable the zRAM module for faster swapping on Linux

Systemd services for user in Linux

The traditional way of starting a program after user login is the user profile. systemd provides a new way for such tasks.

Usage

Regular systemd services run with root privileges, unless a User value is set in the Service section. They are triggered as background jobs whether or not any user logs in. systemd user services run for a user, under that user's id, and they are triggered after that user logs in.

Definition

Services that run as a normal user can be defined in the user's home directory, in the ~/.config/systemd/user folder; they will be picked up by systemd as user services.
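
For example, a minimal sketch of a unit file saved as ~/.config/systemd/user/myuser.service (a hypothetical name, matching the commands below):

[Unit]
Description=Example user service

[Service]
ExecStart=/usr/bin/sleep infinity

[Install]
WantedBy=default.target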

Managing

To manage these services, the following commands can be used.

Check all systemd services for user

systemctl --user status

Enable and start up

systemctl --user enable myuser.service
systemctl --user start myuser.service

Reload all systemd configuration. This is required after service definition files are modified.

systemctl --user daemon-reload

For all users

The /etc/systemd/user/ folder defines services for all users. The default available user service definition files are in the /usr/lib/systemd/user/ folder; they can be used to enable a systemd user service. For example,

# ls /usr/lib/systemd/user/syncthing.service
syncthing.service
# systemctl --user status syncthing
Unit syncthing.service could not be found.
# systemctl status syncthing
* syncthing.service - Syncthing - Open Source Continuous File Synchronization
...

Other systemd user definition file locations can be defined by the administrator

$XDG_RUNTIME_DIR/systemd/user/
~/.local/share/systemd/user/

Common usage

The most common usage of systemd user services is X window related processes, which need to run after user login as background services for that user, such as a reminder or window manager, rather than as background services for the system.

References

systemd user services and systemctl --user
What does "systemctl daemon-reload" do?

Add bluetooth device from ubuntu console

I used the following steps to add a Bluetooth keyboard.

Steps

  • Run bluetoothctl, which gives the following prompt
[bluetooth]# 
  • Run the following commands to initialize Bluetooth
power on
agent on
default-agent
scan on
  • Find the Bluetooth device MAC address in the scan output

  • Run the following commands to trust, pair, and connect to it

trust XX:XX:XX:XX:XX:XX
pair XX:XX:XX:XX:XX:XX
connect XX:XX:XX:XX:XX:XX
  • Then disable scan and quit
scan off
exit

References

How to connect bluetooth headset via command line on ubuntu 12.04

Verify package using debsums


Verify every installed package

debsums

Verify every installed package (including configuration files).

debsums -a

Verify installed packages and report errors only

debsums -s

Verify every installed package and report changed files only

debsums -c

Verify every installed package (including configuration files) and report changed files only.

debsums -ca

Verify every installed package and report changed configuration files only.

sudo debsums -ce

Verify specific package

debsums -a bash

Create a list of packages with mismatched files

dpkg-query -S $(sudo debsums -c 2>&1 | sed -e "s/.*file \(.*\) (.*/\1/g") | cut -d: -f1 | sort -u

To reinstall them

apt-get install --reinstall <package name>

References

How to verify installed packages

Hot swappable Keychron keyboard issues

I just got a Keychron keyboard with hot swap, which can switch between two Mac machines easily. Some issues troubled me for a while.

No eject button

The major difference between a normal Mac keyboard and the Keychron keyboard is the eject button, so another key combination has to be used for sleep instead, one which uses the power button. But my old iMac's power button has an issue as well.

Then, when I put the iMac to sleep and try to switch to the Mac Mini, the keyboard wakes the iMac up. To overcome this, I tried to use the mouse instead, but I cannot move the mouse either; it also wakes the iMac up.

After searching the internet, I found a suggested solution: use the mouse to put the machine to sleep, then lift it up and turn it upside down. I did the same thing, except that I switched the mouse off instead, because my mouse has a light.

Switch between MacOS and Windows or Linux

Switching between macOS and Windows or Linux is done via a physical button, so it isn't that easy, and the manual says not to do it often, otherwise it can cause issues.

References

Shortcut key to make my macbook sleep?

The most insane issue with TrueNAS

This morning, I saw the login screen of my TrueNAS, so I decided to have a look. After login, the TrueNAS rebooted...

This is really a design issue: both shutdown and reboot are not well designed, and their URLs can be reused without any warning prompt.

In fact, I knew about this issue, but was only careful enough just after a reboot or shutdown had been performed. After yesterday's reboot, I hadn't tried to log in using the UI URL.

Although I am careful enough, this issue leads me to avoid using the browser's Back button, because the URL can be in the history.

The fix could be very easy: change the GET method to a POST method on both the reboot and shutdown pages, with an additional variable. But when will they make such a change, given it has already been a mature product for years?

ZFS useful commands

Create pool

Storage providers

Storage providers are spinning disks or SSDs.

ls -al /dev/ada?

Vdevs

Vdevs are groupings of storage providers into various RAID configurations.

RAID 0 or Stripes

Create a striped pool

zpool create OurFirstZpool ada1 ada2 ada3

RAID 1 or Mirror

Create a mirror vdev and add it to a pool

zpool create tank mirror ada1 ada2 ada3

Create another mirror vdev and add it to the existing pool

zpool add tank mirror ada4 ada5 ada6

Detach a disk from vdev

zpool detach tank ada4

RAID-Z1, RAID-Z2 and RAID-Z3

Create a RAID-Z1 vdev and add it to a pool

zpool create tank raidz1 ada1 ada2 ada3

Create another RAID-Z1 vdev and add it to the existing pool

zpool add tank raidz1 ada4 ada5 ada6

Zpools

Zpools are aggregations of vdevs into a single storage pool

Create pool

zpool create OurFirstZpool ada1 ada2 ada3

Verify pool

zpool status

Add a new disk (vdev) to increase space

zpool add OurFirstZpool ada4

Z-Filesystems

Z-Filesystems are datasets with cool features like compression and reservation.

Create dataset

zfs create OurFirstZpool/dataset1

List dataset

zfs list

Zvols
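
Zvols are datasets exposed as block devices (see the ZFS Concept post below). A minimal creation sketch, reusing the pool name from above; the size is an example.

zfs create -V 10G OurFirstZpool/zvol1
ls /dev/zvol/OurFirstZpool/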

Change max arc size on TrueNAS SCALE

After upgrading the memory to 64 GB, memory usage is less than 32 GB even when running two VMs together. To utilize all the memory, increasing the ZFS cache size is one solution.

c_max

The max ARC size is defined as a module parameter, which can be viewed with the following commands

truenas# grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    62277025792
truenas# cat /sys/module/zfs/parameters/zfs_arc_max
62277025792
truenas#

To adjust this value, the following command can be used, but it is not persistent.

echo 60129542144 > /sys/module/zfs/parameters/zfs_arc_max

Suggestion from others

Many suggestions can be found, and some of them may be workable, for example

Create module option file

echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf

But they may not be suitable for a NAS OS, because the file cannot be backed up using the configuration backup provided by the NAS OS.

  • An OS upgrade can simply overwrite or delete the file
  • The file can be lost during an OS rebuilding process

Update sysctl (not workable)

The suggestion is to update vfs.zfs.arc_max using sysctl, along with disabling autotune, but sysctl is only workable for kernel parameters; no ZFS parameters could be found, since ZFS is loaded as a module.

Implementation

The parameter needs to be modified using the TrueNAS web interface, to ensure that it is saved in the configuration export via System Settings => General => Manage Configuration => Download File.

So, the following command is added to System Settings => Advanced => Init/Shutdown Scripts with When set to Post Init

echo 60129542144 > /sys/module/zfs/parameters/zfs_arc_max

Verification

Verify the setting as below.

arc_summary | grep size

Note: The number is in bytes

Reduce the number

To reduce the number without a reboot, the following command needs to be executed to shrink the cache immediately

echo 3 > /proc/sys/vm/drop_caches

References

Why I cannot modify "vfs.zfs.arc_max" in WebUI?
QEMU / KVM: Using the Copy-On-Write mode

ZFS Concept

Pool

A ZFS pool (Zpool) is a collection of one or more virtual devices (vdevs); a vdev is a group of physical disks. They have the following facts.

  • The redundancy level for a vdev can be a single drive, mirror, RAID-Z1, RAID-Z2, or RAID-Z3.
  • After creating a Zpool, it may not be possible to add additional disks to a vdev, except for mirrors.
  • Adding additional vdevs to expand the Zpool is possible.
  • The storage space allocated to the Zpool cannot be decreased.
  • The drives in vdevs that are part of the Zpool can be exchanged.

If there is a need to change the layout of the Zpool, the data should be backed up and the Zpool destroyed.

Datasets

A dataset is space emulating a regular file system.

Datasets can be nested, and each can possess different settings for snapshots, compression, deduplication and so on.

Volumes

Volumes (zvols) are space emulating block devices.

Data Integrity

No overwriting

The copy-on-write mechanism keeps the old data on the disk.

Checksum

Checksum information is written when data is written to disk, and verified when data is read back. When a checksum mismatch is detected, redundant data is used for correction.

Different checksum algorithms are used; an example of setting the algorithm follows the list.

  • Fletcher-based checksum
  • SHA-256 hash
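
A minimal sketch of setting the algorithm per dataset, reusing the names from the ZFS useful commands post above:

zfs set checksum=sha256 OurFirstZpool/dataset1
zfs get checksum OurFirstZpool/dataset1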

ZFS RAID

  • Single – the Zpool has vdevs consisting of single disks, similar to RAID0.
  • Mirror – similar to RAID1.
  • RAIDZ1 – similar to RAID5 but without the write hole issue.
  • RAIDZ2 – similar to RAID6, with 2-disk redundancy.
  • RAIDZ3 – similar to RAID6, with 3-disk redundancy.

The RAID write hole in RAID5/RAID1 occurs when one of the member disks doesn't match the others; by the nature of single-redundancy RAID5/RAID1, it is impossible to tell which of the disks is bad.

Errors

Checksum mismatch

ZFS is a self-healing system. If a mismatched checksum is detected, ZFS tries to retrieve the data from other disks. If that data is correct, the system will amend the incorrect data and checksum.

Disk failure

If a disk in a Zpool fails, the pool is set to the degraded state, and the data on the failed device is calculated and written to the spare disk that replaces it. This is called resilvering. Once the restoration operation is complete, the status of the Zpool changes back to online. When multiple disks have failed and there are not enough redundant devices, the Zpool changes its state to unavailable.

Migrate to a different system

On the old system, export the Zpool, which unmounts the Zpool's datasets and zvols.

On the new system, import the Zpool, which mounts the Zpool's datasets and zvols.
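
A minimal sketch, assuming a pool named tank:

# On the old system
zpool export tank
# On the new system, list importable pools, then import
zpool import
zpool import tank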

Maintenance

Scrubbing

Scrubbing is a consistency check operation that also tries to repair corrupted data.
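
A minimal sketch, assuming a pool named tank; progress can be checked in the status output:

zpool scrub tank
zpool status tank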

No defragmentation

There is no online defragmentation in ZFS, so try to keep zpools below 70% utilization instead.

Copy-on-write

On ZFS, data changes are stored at a different location from the original location on disk, and then the metadata is updated to point to that new place. This mechanism guarantees that the old data is safely preserved in case of a power loss or system crash that would otherwise result in loss of data.

Snapshots

The snapshot contains information about the original version of the file system to be retained. Snapshots initially require no additional disk space within the pool. Once the data referenced by a snapshot is modified, the snapshot starts to take up disk space, since it now points to the old data.

Clones

A clone is a writeable version of a snapshot. Overwriting blocks in the cloned volume or file system decrements the reference count on the previous blocks. The original snapshot that the clone depends on cannot be deleted.

Rollback

The rollback command goes back to a previous version of a dataset or a volume. Note that the rollback command cannot revert changes to any snapshot other than the most recent one; to roll back further, all intermediate snapshots will be automatically destroyed.

Promote

The promote command replaces an existing volume with its clone.
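
A minimal sketch of the snapshot, rollback, clone, and promote workflow, reusing the dataset names from the ZFS useful commands post above:

# Take a snapshot, and roll back to it later
zfs snapshot OurFirstZpool/dataset1@snap1
zfs rollback OurFirstZpool/dataset1@snap1
# Clone the snapshot, then promote the clone
zfs clone OurFirstZpool/dataset1@snap1 OurFirstZpool/clone1
zfs promote OurFirstZpool/clone1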

References

ZFS Essentials – What is pooled storage?
ZFS Essentials – Copy-on-write & snapshots
ZFS Essentials – Data integrity & RAIDZ
RAID Recovery Guide