Day: October 3, 2021

Migrate Storage from FreeNAS/TrueNAS Core to TrueNAS Scale

Consideration

FreeNAS/TrueNAS Core runs on FreeBSD, which does not support Docker or KVM; it uses bhyve as its hypervisor. Using Docker therefore requires installing a VM first, such as a RancherOS VM, which adds overhead to the system.

TrueNAS Scale is built on Debian and is still in beta. To gain more virtualization features, migrating to TrueNAS Scale is being considered.

Reinstall TrueNAS Scale

Installation of TrueNAS Scale is slower than TrueNAS Core, and the network configuration is different as well.

Network

When configuring link aggregation in failover mode, the active and standby interfaces could not be selected. This configuration needs further investigation.

Import TrueNAS Core storage

The existing ZFS pool can be imported easily.

Rename zpool and dataset

To change the pool name, use the shell to import and export the pool before importing it through the GUI. The following commands can be used:

# Import the pool under a new name
zpool import pool_old pool_new
# Export and re-import to confirm the new pool name
zpool export pool_new
zpool import pool_new
# Rename a dataset inside the pool (full dataset paths)
zfs rename pool_new/old_dataset pool_new/new_dataset
# Export again so the pool can be imported from the GUI
zpool export pool_new

The BSD hypervisor (bhyve) Basic

The BSD hypervisor, bhyve, pronounced "beehive", is a hypervisor/virtual machine manager available on FreeBSD, macOS, and Illumos.

FreeNAS® VMs use the bhyve(8) virtual machine software. This type of virtualization requires an Intel processor with Extended Page Tables (EPT) or an AMD processor with Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT).

To verify that an Intel processor has the required features, use Shell to run grep VT-x /var/run/dmesg.boot. If the EPT and UG features are shown, this processor can be used with bhyve.

To verify that an AMD processor has the required features, use Shell to run grep POPCNT /var/run/dmesg.boot. If the output shows the POPCNT feature, this processor can be used with bhyve.
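
As a small convenience sketch (run from the Shell; assumes /var/run/dmesg.boot is present, as on FreeBSD/TrueNAS Core), both checks can be issued together and the output inspected manually:

# Intel: the VT-x line should list the EPT and UG features
grep VT-x /var/run/dmesg.boot
# AMD: the output should include the POPCNT feature
grep POPCNT /var/run/dmesg.boot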

References

The BSD hypervisor
FreeNAS VM

SSD Cache Basic

Consideration

An SSD cache helps when the same set of files is accessed frequently. But if the system mostly holds media files that are unlikely to be read again, a cache does not help.

RAM is required to manage an SSD cache, but adding RAM has a more direct impact on performance, because the data is not duplicated between the hard disks and an SSD.

Synology

The required amount of RAM is calculated before the cache is created.

A single SSD can only be configured as a read-only cache for one volume.

Two or more SSDs can be configured as one RAID, serving one volume in read-write mode.

Currently, an SSD or SSD RAID cannot be partitioned to serve different volumes.

FreeNAS/TrueNAS (Untested)

Others suggest having 64GB or more of RAM before adding a cache; adding a cache on a system with only 16GB of RAM can slow it down.

A fusion pool could be another choice, because the SSD is used as storage as well, so no space is wasted.
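
For reference, and untested here like the rest of this section, a ZFS read cache (L2ARC) is just a cache vdev attached to the pool; with a hypothetical pool tank and SSD ada4, the shell commands would look like this:

# Attach the SSD as an L2ARC read cache (hypothetical names)
zpool add tank cache ada4
# Detach it again if the cache does not help
zpool remove tank ada4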

Fusion Pools Basic

Fusion Pools are also known as ZFS Allocation Classes, ZFS Special vdevs, and Metadata vdevs.

vdev

A special vdev can store metadata such as file locations and allocation tables. The allocations in the special class are dedicated to specific block types. By default, this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks. This is a great use case for high performance but smaller sized solid-state storage. Using a special vdev drastically speeds up random I/O and cuts the average spinning-disk I/Os needed to find and access a file by up to half.
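
As an illustration only (hypothetical disk names; the GUI workflow is described in the next section), a pool with a mirrored special vdev could be created from the shell, and the special class can optionally be told to also hold small file blocks:

# Data vdev: raidz2 over four hard disks; special (metadata) vdev: mirrored SSDs
zpool create tank raidz2 da0 da1 da2 da3 special mirror ada0 ada1
# Let the special vdev also store file blocks up to 64K (optional)
zfs set special_small_blocks=64K tank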

Creating a Fusion Pool

Go to Storage > Pools, click ADD, and select Create new pool.

A pool must always have one normal (non-dedup/special) vdev before other devices can be assigned to the special class. Configure the Data VDevs, then click ADD VDEV and select Metadata.

Add SSDs to the new Metadata VDev and select the same layout as the Data VDevs.

Using a Mirror layout is possible, but it is strongly recommended to keep the layout identical to the other vdevs. If the special vdev fails and there is no redundancy, the pool becomes corrupted and prevents access to stored data.

When more than one metadata vdev is created, allocations are load-balanced between all of them. If the special class becomes full, allocations spill back into the normal class.

After the fusion pool is created, the Status shows a Special section with the metadata SSDs.
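
As a shell-level sketch of the load-balancing point above (hypothetical device names; the docs describe the GUI flow), a second mirrored metadata vdev could be added to the existing pool:

# Add another mirrored special (metadata) vdev; allocations are balanced across both
zpool add tank special mirror ada2 ada3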

Auto TRIM allows TrueNAS to periodically check the pool disks for storage blocks that can be reclaimed. This can have a performance impact on the pool, so the option is disabled by default. For more details about TRIM in ZFS, see the autotrim property description in zpool.8.
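
For example, assuming a pool named tank, the property can be toggled and checked from the shell:

# Enable periodic TRIM on the pool (disabled by default)
zpool set autotrim=on tank
# Verify the current setting
zpool get autotrim tank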

References

Fusion Pools