Author: Bian Xi

Create timemachine share in TrueNAS

Note: After some time, an issue suddenly appeared, and I spent hours fixing it. In the end, I switched back to the Synology NAS for Time Machine. See the Update section.

I spent many hours setting up Time Machine backup on TrueNAS Scale and ran into many issues. To save time next time, the important steps and parameters are recorded here.

Create dataset

Create a dataset called timemachine under zpool pool1. Set the ACL Type of the dataset to POSIX.
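
For reference, a rough shell equivalent of these GUI steps could look like the following (a sketch assuming OpenZFS 2.x, where acltype=posix enables POSIX ACLs; the TrueNAS GUI remains the supported way):

# Create the dataset and switch it to POSIX ACLs
zfs create pool1/timemachine
zfs set acltype=posix pool1/timemachine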

Create user

Creating a new user/group is not ideal if the NAS is also used by another user from the same machine at the same time, because TrueNAS will log in with both users, which is confusing and hard to troubleshoot. Without a new user, however, the ownership/permissions of the shared dataset are hard to decide. To use a dedicated user for Time Machine, the following steps can be used.

Create a new user with user id tm and group tm. This creates a sub-dataset in ZFS under the timemachine dataset. Change the home directory of tm to the dataset created in the Create dataset section.

The Auxiliary Groups list includes a group named builtin_users. Take note of this group; it will appear again later in this post.

Note: If the backup folder was copied from another system, change the owner/group to tm:builtin_users and the permissions to 775 to avoid permission issues.
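
A possible shell sketch of this fix, assuming the dataset is mounted under /mnt/pool1/timemachine/tm (adjust the path to the actual pool):

# Hand ownership to tm and the builtin_users group, then relax permissions
chown -R tm:builtin_users /mnt/pool1/timemachine/tm
chmod -R 775 /mnt/pool1/timemachine/tm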

Set permission of dataset

Strip ACL

This is the easiest way and causes the least headache. Strip the ACL if it is wrong, or just use the traditional UNIX permission system.
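
Stripping can also be done from the shell on TrueNAS Scale; a sketch using the Linux setfacl tool, assuming the mount path from above:

# Remove all ACL entries recursively, keeping the base UNIX permissions
setfacl -R -b /mnt/pool1/timemachine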

Use ACL

Change both the owner and the group to tm.

Set the dataset permissions preset to POSIX_HOME, which gives both the owner and the group read/write/execute permission, while others get read-only access.

Create share

In TrueNAS, Time Machine support is implemented by an option in the SMB share, so select Multi-user time machine as the Purpose; otherwise, the Time Machine option will not be set, and it cannot be changed afterwards.

Note that the Time Machine option is selected and Path Suffix is set to %U; both are unchangeable. The %U means a sub-dataset is created under the shared dataset for each user, so users do not share the same view.

Each host being backed up creates a folder named <hostname>.sparsebundle, which is shown as a disk in macOS.

File/Folder permission

Remove ACL

If strange permission issues occur, uncheck ACL in the SMB share settings, then restart the SMB service.

Note: The error can be verified by accessing the shared drive from Windows.
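
The share can also be checked from a Linux or macOS client with smbclient; a sketch, assuming the server resolves as truenas:

# List the share contents as user tm; a permission error here confirms the issue
smbclient //truenas/timemachine -U tm -c 'ls'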

Workable permission

For an unknown reason, the actual files created in the sub-dataset are under the builtin_users group with permission 770, not tm. Sample output is shown below.

truenas# ls -la /mnt/pool2/timemachine/tm
total 70
drwxrwx---+ 3 tm tm               4 Oct  3 22:32 .
drwxrwxr-x+ 6 tm tm               6 Oct  3 21:23 ..
-rwxrwx---+ 1 tm builtin_users 6148 Oct  3 21:32 .DS_Store
drwxrwx---+ 4 tm builtin_users   10 Oct  3 23:02 shark.sparsebundle
truenas#

Update

Suddenly, Time Machine on TrueNAS stopped working; the error message showed that macOS could not find the server.

I tried to mount the SMB share manually, but that failed too. I could not access the timemachine shared folder, although the rest of the shared folders were accessible.

mDNS

During troubleshooting, I found that macOS could not see TrueNAS in Finder; the issue could be related to missing mDNS advertisements.
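
On macOS, the mDNS advertisements can be inspected with the built-in dns-sd tool; if TrueNAS shows up in neither listing, the mDNS suspicion is confirmed:

# Browse SMB services advertised over mDNS
dns-sd -B _smb._tcp
# Time Machine destinations are advertised separately as _adisk._tcp
dns-sd -B _adisk._tcp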

Final settings

The following settings made Time Machine work again.

  • Purpose: Multi-user time machine
  • Enable ACL
  • Browsable to Network Clients
  • Time Machine (cannot be changed)
  • Legacy AFP compatibility
  • Enable Shadow Copies
  • Enable Alternate Data Streams
  • Enable SMB2/3 Durable Handles

However, the shared folder still needs to be opened manually before macOS Time Machine can see it, so the mDNS issue remains.

Migrate Storage from FreeNAS/TrueNAS Core to TrueNAS Scale

Consideration

FreeNAS/TrueNAS Core runs on FreeBSD, which does not support Docker or KVM; it uses bhyve as its hypervisor. Using Docker requires installing a VM, such as RancherOS, which adds overhead to the system.

TrueNAS Scale is developed on Debian and is still in beta. To get more virtualization features, TrueNAS Scale is worth considering.

Reinstall TrueNAS Scale

Installation of TrueNAS Scale is slower than TrueNAS Core, and the network configuration is different too.

Network

When configuring network aggregation for failover, the active and standby interfaces could not be selected. More investigation into this configuration is needed.

Import TrueNAS Core storage

The ZFS pool can be imported easily.

Rename zpool name and dataset

To change the pool name, use the shell to import and export the pool before importing it through the GUI. The following commands can be used:

# Import the pool under a new name
zpool import pool_old pool_new
# Export the renamed pool
zpool export pool_new
# Import it again to verify the new name
zpool import pool_new
# Optionally rename datasets inside the pool
zfs rename old_name new_name
# Export the pool so it can be imported through the GUI
zpool export pool_new

The BSD hypervisor (bhyve) Basic

The BSD hypervisor, bhyve, pronounced "beehive", is a hypervisor/virtual machine manager available on FreeBSD, macOS, and Illumos.

FreeNAS® VMs use the bhyve(8) virtual machine software. This type of virtualization requires an Intel processor with Extended Page Tables (EPT) or an AMD processor with Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT).

To verify that an Intel processor has the required features, use Shell to run grep VT-x /var/run/dmesg.boot. If the EPT and UG features are shown, this processor can be used with bhyve.

To verify that an AMD processor has the required features, use Shell to run grep POPCNT /var/run/dmesg.boot. If the output shows the POPCNT feature, this processor can be used with bhyve.
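
Both checks in one place:

# Intel: EPT and UG must appear in the VT-x line
grep VT-x /var/run/dmesg.boot
# AMD: the POPCNT feature must appear
grep POPCNT /var/run/dmesg.boot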

References

The BSD hypervisor
FreeNAS VM

SSD Cache Basic

Consideration

An SSD cache helps when the same set of files is accessed frequently. But if the system mostly holds media files, they most likely won't be accessed again, so a cache doesn't help.

RAM is required for an SSD cache, but adding RAM impacts performance more directly, because RAM caching avoids duplicating data between the hard disk and the SSD.

Synology

The required amount of RAM is calculated before the cache is created.

A single SSD can only be configured for one volume in read-only mode.

Two or more SSDs can be configured as one RAID for one volume in read-write mode.

Currently, one SSD or SSD RAID cannot be partitioned for use by different volumes.

FreeNAS/TrueNAS (Untested)

Others suggest having 64 GB or more of RAM before adding a cache; adding a cache with only 16 GB of RAM will slow the system down.

A fusion pool could be another choice, because the SSD can be used as storage as well, so no space is wasted.

Fusion Pools Basic

Fusion Pools are also known as ZFS Allocation Classes, ZFS Special vdevs, and Metadata vdevs.

vdev

A special vdev can store meta data such as file locations and allocation tables. The allocations in the special class are dedicated to specific block types. By default, this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks. This is a great use case for high performance but smaller sized solid-state storage. Using a special vdev drastically speeds up random I/O and cuts the average spinning-disk I/Os needed to find and access a file by up to half.
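
For example, small file blocks can be routed to the special vdev through the special_small_blocks dataset property; a sketch, where pool1/dataset and the 32K threshold are placeholders:

# Store blocks of 32K or smaller on the special vdev
zfs set special_small_blocks=32K pool1/dataset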

Creating a Fusion Pool

Go to Storage > Pools, click ADD, and select Create new pool.

A pool must always have one normal (non-dedup/special) vdev before other devices can be assigned to the special class. Configure the Data VDevs, then click ADD VDEV and select Metadata.

Add SSDs to the new Metadata VDev and select the same layout as the Data VDevs.

Using a Mirror layout is possible, but it is strongly recommended to keep the layout identical to the other vdevs. If the special vdev fails and there is no redundancy, the pool becomes corrupted and prevents access to stored data.
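
The equivalent layout from the command line would look roughly like this; a sketch, where the disk names are placeholders and the special vdev mirrors the data vdev layout as recommended above:

# Mirrored data vdev plus a mirrored special (metadata) vdev
zpool create pool1 mirror sda sdb special mirror nvme0n1 nvme1n1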

When more than one metadata vdev is created, allocations are load-balanced between all of these devices. If the special class becomes full, allocations spill back into the normal class.

After the fusion pool is created, the Status shows a Special section with the metadata SSDs.

Auto TRIM allows TrueNAS to periodically check the pool disks for storage blocks that can be reclaimed. This can have a performance impact on the pool, so the option is disabled by default. For more details about TRIM in ZFS, see the autotrim property description in zpool.8.
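
From the shell, the same option maps to the autotrim pool property; pool1 is a placeholder:

# Enable periodic TRIM on the pool
zpool set autotrim=on pool1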

References

Fusion Pools

Clear checksum error in FreeNAS/TrueNAS

Identify error

Errors can be found in the Storage section of the TrueNAS web UI, or by running the zpool status -x command in the web shell.

A sample error is shown below. Two pools have errors: pool0 has two hard disks, the first of which has 154 checksum errors, while pool01 has one data error.

  pool: pool0
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sat Oct  2 17:39:46 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    pool0                                           ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/bf410fcf-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0   154
        gptid/bfcc498a-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0     0

errors: No known data errors

  pool: pool01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
config:

    NAME                                          STATE     READ WRITE CKSUM
    pool01                                        ONLINE       0     0     0
      gptid/75827da1-207a-11ec-afcf-005056a390b2  ONLINE       0     0     1
errors: List of errors unavailable: permission denied

errors: 1 data errors, use '-v' for a list

For the second error, the impacted file can be found using the zpool status -v command:

root@truenas[~]# zpool status -v pool01
  pool: pool01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:23:22 with 1 errors on Sat Oct  2 21:53:02 2021
config:

        NAME                                          STATE     READ WRITE CKSUM
        pool01                                        ONLINE       0     0     0
          gptid/75827da1-207a-11ec-afcf-005056a390b2  ONLINE       0     0     1

errors: Permanent errors have been detected in the following files:

        /mnt/pool01/download/file.1
root@truenas[~]#

Clear error

Run the following command to clear the error:

zpool clear <pool_name>

For the pool with a data error, where a file is impacted, delete or overwrite the affected file first.

Then scrub the pool:

zpool scrub <pool_name>

Replace disk

To replace a disk, run the following command, where c0t0d2 is the new disk replacing c0t0d0:

zpool replace <pool_name> c0t0d0 c0t0d2

If the disk is replaced in the same location, run the following command:

zpool replace <pool_name> c0t0d0

Living a Happy Life

Be happy; there are many ways to satisfy yourself, as listed below.

Be positive

Things always have two sides, positive and negative; think positively.

Aim for small achievements

Big achievements come with too many failures; they are always built up from many small achievements.

Find your work–life balance

Working for others is called work; working for yourself is called contribution.

Be creative

Apply fewer rules.

Accept imperfection

Nothing is the best.

Do what you love to do

Build hobbies.

Spend wisely

Less to worry about later.

Live in the moment

Don't worry too much about the future.

Helping others

Just saying hi to others during exercise can help you forget tiredness.

Listening to music and watching videos

They can bring you out of depression.

Be yourself

Don't always follow others.

Hang out with happy people

If that is impossible, go out and watch others.

Spend time in nature

To forget whatever happened before.

Reminisce over happy memories

Don't try to recall sad things, even though they can't be forgotten.

Don't hope too much

Don't believe what people say; just listen.

References

20 Secrets to Living a Happier Life

Install Synology NAS managed Let's Encrypt Certificate in NGINX

Certificate Management

A Synology NAS can be used for certificate management, and the Let's Encrypt certificate can be exported as a ZIP file for use in the NGINX HTTPS configuration.

  1. Go to Control Panel -> Security -> Certificate
  2. Select certificate to be exported
  3. Select Export Certificate from right click menu
  4. Save exported file

Existing certificates can be renewed using the right click -> Renew option.

Note: All domains in the certificate must resolve to the current Synology NAS on ports 80 and 443; otherwise, certificate generation will fail.

The downloaded ZIP file contains the following files.

  • cert.pem
  • chain.pem
  • privkey.pem

NGINX configuration

  1. Concatenate cert.pem and chain.pem into a cert-with-chain.pem (or fullchain.pem) file (see the command sketch after this list)

  2. Copy cert-with-chain.pem and privkey.pem into the NGINX conf.d folder

  3. Verify NGINX configuration as below

ssl_certificate     conf.d/cert-with-chain.pem;
ssl_certificate_key conf.d/privkey.pem;
  4. Restart NGINX
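
A shell sketch of the whole procedure, assuming the ZIP was extracted to the current directory and NGINX reads its configuration from /etc/nginx:

# Build the full-chain certificate and install both files
cat cert.pem chain.pem > cert-with-chain.pem
cp cert-with-chain.pem privkey.pem /etc/nginx/conf.d/
# Test the configuration, then reload NGINX
nginx -t && nginx -s reload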

Verification

Browser

The issue date of the new certificate should be displayed in the certificate information window.

Command line

The following command can be used for verification:

openssl s_client -connect <domain_name>:<port>

If the following errors appear, concatenate chain.pem into cert.pem, because the full chain is required.

verify error:num=20:unable to get local issuer certificate
verify error:num=21:unable to verify the first certificate

References

How to install Let's Encrypt on Nginx