Category: Computer

Rsync Basic

Rsync a directory to a new directory with a different name

A trailing slash on the source avoids creating an additional directory level at the destination.

rsync -a src/ dest

You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name".
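A quick check of the trailing-slash behaviour (the directory names here are made up for illustration):

```shell
# Set up a source directory with one file
mkdir -p src && echo hello > src/file.txt

# Trailing slash: copies the *contents* of src into dest
rsync -a src/ dest      # result: dest/file.txt

# No trailing slash: copies src itself into dest2
rsync -a src dest2      # result: dest2/src/file.txt
```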

Show progress

rsync -a -P src dest
rsync -a --progress src dest

The -P option is shorthand for --partial --progress.

Location of core files in Linux

Some core files are written to the working directory of the running executable; others go to a system directory, depending on the system configuration.

Filename

By default, the core file is simply named core, but different OSs change its name.

Software

Abrt

Abrt stores core files in /var/cache/abrt.

Apport

Apport stores core files in /var/crash.

Systemd

Systemd updates /proc/sys/kernel/core_pattern as below.

$ cat /proc/sys/kernel/core_pattern
|/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e
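With this pattern in place, cores are normally retrieved with coredumpctl rather than read from disk directly; a minimal sketch (output depends on what has crashed on the host):

```shell
# List captured core dumps
coredumpctl list

# Show details of the most recent dump
coredumpctl info

# Extract the most recent core to a file for offline debugging
coredumpctl dump -o core.out

# Or open the most recent dump directly in gdb
coredumpctl debug
```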

OS

TrueNAS Scale

On TrueNAS Scale, core files can be found in /var/db/system/cores/; they can be removed if no debugging is required.

Fedora

Fedora stores core files in /var/spool/abrt/ instead.

Arch Linux

Arch Linux stores core files in /var/lib/systemd/coredump/.

Migrate Storage from FreeNAS/TrueNAS Core to TrueNAS Scale

Consideration

FreeNAS/TrueNAS Core runs on FreeBSD, which doesn't support Docker or KVM; it uses bhyve as its hypervisor. Using Docker requires installing a VM, such as a RancherOS VM, which adds overhead to the system.

TrueNAS Scale is developed on Debian and is still in beta. To get more virtualization features, TrueNAS Scale is worth considering.

Reinstall TrueNAS Scale

Installation of TrueNAS Scale is slower than TrueNAS Core, and the network configuration is different too.

Network

When configuring network aggregation for failover, the active and standby interfaces could not be selected. Need to find out more about this configuration.

Import TrueNAS Core storage

The ZFS pool can be imported easily.

Rename zpool name and dataset

To change the pool name, use the shell to import and export the pool before importing it through the GUI. The following commands can be used (note that zfs rename takes full dataset paths):

zpool import pool_old pool_new
zpool export pool_new
zpool import pool_new
zfs rename pool_new/old_name pool_new/new_name
zpool export pool_new

The BSD hypervisor (bhyve) Basic

The BSD hypervisor, bhyve, pronounced "beehive", is a hypervisor/virtual machine manager available on FreeBSD, macOS, and Illumos.

FreeNAS® VMs use the bhyve(8) virtual machine software. This type of virtualization requires an Intel processor with Extended Page Tables (EPT) or an AMD processor with Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT).

To verify that an Intel processor has the required features, use Shell to run grep VT-x /var/run/dmesg.boot. If the EPT and UG features are shown, this processor can be used with bhyve.

To verify that an AMD processor has the required features, use Shell to run grep POPCNT /var/run/dmesg.boot. If the output shows the POPCNT feature, this processor can be used with bhyve.
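The two checks above can be run together from the Shell (FreeBSD paths; which lines appear depends on the CPU):

```shell
# Intel: the processor is usable with bhyve if EPT and UG are listed
grep VT-x /var/run/dmesg.boot

# AMD: the processor is usable with bhyve if POPCNT is listed
grep POPCNT /var/run/dmesg.boot
```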

References

The BSD hypervisor
FreeNAS VM

Fusion Pools Basic

Fusion Pools are also known as ZFS Allocation Classes, ZFS Special vdevs, and Metadata vdevs.

vdev

A special vdev can store meta data such as file locations and allocation tables. The allocations in the special class are dedicated to specific block types. By default, this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks. This is a great use case for high performance but smaller sized solid-state storage. Using a special vdev drastically speeds up random I/O and cuts the average spinning-disk I/Os needed to find and access a file by up to half.

Creating a Fusion Pool

Go to Storage > Pools, click ADD, and select Create new pool.

A pool must always have one normal (non-dedup/special) vdev before other devices can be assigned to the special class. Configure the Data VDevs, then click ADD VDEV and select Metadata.

Add SSDs to the new Metadata VDev and select the same layout as the Data VDevs.

Using a Mirror layout is possible, but it is strongly recommended to keep the layout identical to the other vdevs. If the special vdev fails and there is no redundancy, the pool becomes corrupted and prevents access to stored data.

When more than one metadata vdev is created, then allocations are load-balanced between all these devices. If the special class becomes full, then allocations spill back into the normal class.

After the fusion pool is created, the Status shows a Special section with the metadata SSDs.

Auto TRIM allows TrueNAS to periodically check the pool disks for storage blocks that can be reclaimed. This can have a performance impact on the pool, so the option is disabled by default. For more details about TRIM in ZFS, see the autotrim property description in zpool.8.
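The GUI steps above can also be sketched from the CLI; the device names below are hypothetical, and the GUI remains the supported way to build pools on TrueNAS:

```shell
# Create a pool with a RAIDZ2 data vdev and a mirrored special (metadata) vdev
zpool create pool01 \
  raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
  special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally send small file blocks (here <= 64K) to the special vdev
zfs set special_small_blocks=64K pool01

# Enable automatic TRIM on the pool (disabled by default)
zpool set autotrim=on pool01
```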

References

Fusion Pools

SSD Cache Basic

Consideration

An SSD cache helps when the same set of files is accessed frequently. But if the system just holds media files, they most likely won't be read again, so a cache doesn't help.

RAM is required for an SSD cache, but adding RAM improves performance more directly, because there is no duplication between the hard disks and the SSD.

Synology

The required amount of RAM is calculated before the cache is created.

A single SSD can only be configured for one volume, in read-only mode.

Two or more SSDs can be configured as one RAID, for one volume, in read-write mode.

Currently, one SSD or RAID cannot be partitioned for use by different volumes.

FreeNAS/TrueNAS (Untested)

Others suggest having 64GB or more of RAM before adding a cache; adding a cache with only 16GB of RAM will slow the system down.

A fusion pool could be another choice, because the SSD can be used as storage as well, with no wasted space.

Creating FreeNAS VM on ESXi

ZFS advantages

Synology uses btrfs as its filesystem, which lacks bad-sector support since btrfs does not support it natively. ZFS could be the choice because of the advantages below.

Bad sector support

Although Synology can also handle bad sectors, btrfs itself doesn't. Not sure how Synology handles it, but bad sectors can crash Synology volumes; the volume then goes into read-only mode, and reconfiguration is required, taking time to move data out of the impacted volumes.

Block level dedup

This is an interesting feature of ZFS. btrfs dedup can be done by running scripts, but ZFS can do it natively.

Decision

Software

FreeNAS/TrueNAS has many features and provides the full set of NAS functionality.

Hardware

FreeNAS/TrueNAS doesn't support ARM CPUs, and cheap ARM boards, such as the Raspberry Pi 4, don't support SATA, while a normal NAS should take 4 SATA drives. So there is no intention to use an ARM board for the NAS.

Conclusion

Decided to try FreeNAS/TrueNAS on an old PC running ESXi, which has 32GB RAM, an 8-thread CPU, and 10Gb Ethernet.

  1. Assign 8GB RAM and 2 threads to the FreeNAS VM, and enable memory and CPU hot plug in order to increase them dynamically.

    Note: An error occurred when adding memory to the VM. See post Error on adding hot memory to TrueNAS VM.

  2. Create an RDM disk to access the hard disk directly and improve disk performance.

  3. Create a VM network interface which supports 10Gb.

  4. Create an iSCSI disk to hold the VM image, because the RDM disk's vmdk file cannot be created on NFS.

Create iSCSI storage

Although the ESXi host is managed by vCenter, I could not find where to configure the iSCSI device there. So log in to the ESXi web interface and configure iSCSI in Storage -> Adapters.

Note: The Target is the IQN, not the name.

Configure network on ESXi

During creation, ESXi showed an error that two network interfaces were detected on the network used by iSCSI, so I removed the second (standby) interface during iSCSI adapter creation, then put it back after creation completed.

Create RDM disk

Follow the instructions given in Raw Device Mapping for local storage (1017530) to create the RDM disk.

  1. Open an SSH session to the ESXi host.

  2. Run this command to list the disks that are attached to the ESXi host:

    ls -l /vmfs/devices/disks
  3. From the list, identify the local device you want to configure as an RDM and copy the device name.

Note: The device name is likely prefixed with t10. and looks similar to:

    t10.F405E46494C4540046F455B64787D285941707D203F45765
  4. To configure the device as an RDM and output the RDM pointer file to your chosen destination, run this command:

    vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk

    For example:

    vmkfstools -z /vmfs/devices/disks/t10.F405E46494C4540046F455B64787D285941707D203F45765 /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk

Note: The size of the newly created RDM pointer file appears to be the same as the Raw Device it is mapped to; this is a dummy file and is not consuming any storage space.

  5. When you have created the RDM pointer file, attach the RDM to a virtual machine using the vSphere Client:

    • Right click the virtual machine you want to add an RDM disk to.
    • Click Edit Settings.
    • Click Add.
    • Select Hard Disk.
    • Select Use an existing virtual disk.
    • Browse to the directory you saved the RDM pointer to in step 4, select the RDM pointer file, and click Next.
    • Select the virtual SCSI controller you want to attach the disk to and click Next.
    • Click Finish.

You should now see your new hard disk in the virtual machine inventory as Mapped Raw LUN.

Create VM on iSCSI storage

Create the VM using the following parameters:

  • VM type should be FreeBSD 12 (64 bit)
  • Recommended memory is 8GB (configured 4GB at the beginning, no issue found)
  • SCSI adapter should be LSI Logic Parallel; otherwise, the hard disk cannot be detected
  • Network adapter should be VMXNET3 to support 10Gb
  • Add the RDM disk into the VM (I did this after the server was created)

Configure FreeNAS

Configure the network in the FreeNAS console, then configure the pool, dataset, user, sharing, and ACL on the FreeNAS website.

Configuration in FreeNAS console

Configure FreeNAS network

Configure IP address

Configuration in FreeNAS website

Use a browser to access the IP configured in the previous step.

DNS

Configure DNS

Default route

The default route is added as a static route from 0.0.0.0 to the gateway.

NTP

Configure timezone

Pool

Configure the pool by giving it the name pool01.

Dataset

Under the pool, add a dataset, such as download.
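For reference, the dataset can also be created and checked from the Shell (the GUI is the supported path; pool01 and download are the names used above):

```shell
# Create the dataset under the pool
zfs create pool01/download

# Verify the pool and its datasets
zfs list -r pool01
zpool status pool01
```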

User

Create user to be used later in ACL.

Sharing

Create an SMB share.

Configure owner in ACL

Assign the owner and group to the user created above, and select the mode as Restrict.

Delete Pool (if needed)

Select Export/Disconnect in the pool's settings.

Result

ESXi hangs

The biggest issue encountered is that ESXi hangs, complaining that one PCPU is freezing. Will try installing directly to a USB drive to see whether the problem only happens on ESXi.

Network speed

As fast as expected.

Compare with Synology DS2419+

They are not directly comparable, because the DS2419+ has more disks in the volume.

  • Slower.
  • Stalled when flushing the disks

Compare with Synology DS1812+

They are not directly comparable, because the DS1812+ has three slow disks with RAID in the volume.