Category: freenas

Create timemachine share in TrueNAS


Note: After some time this setup suddenly broke, and I spent hours trying to fix it. In the end, I switched back to the Synology NAS for Time Machine. Check the Update section.

I spent many hours setting up Time Machine backup on TrueNAS SCALE and ran into many issues. To save time next time, the important steps and parameters are recorded here for the next setup.

Create dataset

Create a dataset called timemachine under zpool pool1. Set ACL Type of dataset to POSIX.
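The same dataset can also be created from the shell. This is a hedged sketch of a CLI equivalent of the UI steps above, not the documented TrueNAS procedure; the pool and dataset names come from this post, and the acltype property value is an assumption based on OpenZFS behavior.

```shell
# Create the timemachine dataset under pool1.
# (Sketch only -- the TrueNAS UI is the supported way to do this.)
zfs create pool1/timemachine

# Use POSIX ACLs on the dataset.
zfs set acltype=posix pool1/timemachine
```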

Create user

Creating a new user/group is not a good approach if the NAS is also used by another user from the same machine at the same time, because TrueNAS will log in with both users, which is confusing and hard to troubleshoot. On the other hand, without a dedicated user, the ownership/permissions of the shared dataset are hard to decide. To use a dedicated user for Time Machine, the following steps can be used.

Create a new user with the user ID tm and group tm. This will create a sub-dataset in ZFS under the timemachine dataset. Change the home directory of tm to the dataset created in the Create dataset section.

The Auxiliary Groups include a group named builtin_users. Beware of this group; it will appear again later in this post.

Note: If the backup folder was copied from another system, change the owner/group to tm:builtin_users and the permissions to 775 to avoid permission issues.
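A hedged sketch of those ownership/permission fixes from the shell; the mount path /mnt/pool1/timemachine/tm is an assumption derived from the pool, dataset, and user names used in this post.

```shell
# Recursively reset owner/group and permissions on a copied backup folder.
# The path is an example -- adjust it to your pool/dataset layout.
chown -R tm:builtin_users /mnt/pool1/timemachine/tm
chmod -R 775 /mnt/pool1/timemachine/tm
```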

Set permission of dataset

Strip ACL

This is the easiest way and causes the least headache: remove the ACL when it is wrong, or just use the traditional UNIX permission system.


Change both owner and group to tm.

Set the dataset permission preset to POSIX_HOME. This gives both the owner and the group read/write/execute permission, while others get read-only access.

Create share

In TrueNAS, the actual Time Machine implementation is an option in the SMB share called Multi-user time machine. Select Multi-user time machine as the Purpose; otherwise, the Time Machine option will not be set, and it cannot be changed afterwards.

Note that with this purpose, the Time Machine option is selected and Path Suffix is set to %U; both are unchangeable. The %U means a sub-dataset will be created under the shared dataset for each user, so users do not share the same view.

Each host being backed up creates a folder named <hostname>.sparsebundle, which shows up as a disk in macOS.

File/Folder permission

Remove ACL

If strange permission issues occur, uncheck ACL in the SMB share, then restart the SMB service.

Note: The error can be verified by accessing the shared drive from Windows.

Workable permission

The actual files created in the sub-dataset are owned by group builtin_users with permission 770, not tm, for an unknown reason. Sample output is shown below.

truenas# ls -la /mnt/pool2/timemachine/tm
total 70
drwxrwx---+ 3 tm tm               4 Oct  3 22:32 .
drwxrwxr-x+ 6 tm tm               6 Oct  3 21:23 ..
-rwxrwx---+ 1 tm builtin_users 6148 Oct  3 21:32 .DS_Store
drwxrwx---+ 4 tm builtin_users   10 Oct  3 23:02 shark.sparsebundle


Suddenly, Time Machine on TrueNAS stopped working; the error message showed that macOS could not find the server.

I tried to map the SMB drive manually, but that failed too. I could not access the Time Machine shared folder, although the rest of the shared folders were accessible.


During troubleshooting, I found that macOS could not see the TrueNAS in Finder, so the issue could be related to missing mDNS.

Final settings

The following settings make timemachine work again.

  • Purpose: Multi-user time machine
  • Enable ACL
  • Browsable to Network Clients
  • Time Machine (Can not change)
  • Legacy AFP compatibility
  • Enable Shadow Copies
  • Enable Alternate Data Streams
  • Enable SMB2/3 Durable Handles

But the shared folder still needs to be opened manually before macOS Time Machine can see it, so the mDNS issue is still there.

Clear checksum error in FreeNAS/TrueNAS


Identify error

Errors can be found in the Storage section of the TrueNAS web UI, or by running the zpool status -x command in the web shell.

A sample error is shown below. Two pools have errors: in pool0, which has two mirrored hard disks, the first disk has 154 checksum errors; in pool01, the single disk has one checksum error that caused a data error.

  pool: pool0
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sat Oct  2 17:39:46 2021

    NAME                                            STATE     READ WRITE CKSUM
    pool0                                           ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/bf410fcf-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0   154
        gptid/bfcc498a-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0     0

errors: No known data errors

  pool: pool01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.

    NAME                                          STATE     READ WRITE CKSUM
    pool01                                        ONLINE       0     0     0
      gptid/75827da1-207a-11ec-afcf-005056a390b2  ONLINE       0     0     1
errors: List of errors unavailable: permission denied

errors: 1 data errors, use '-v' for a list

For the second error, the impacted file can be found using the zpool status -v command.

root@truenas[~]# zpool status -v pool01
  pool: pool01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
  scan: scrub repaired 0B in 00:23:22 with 1 errors on Sat Oct  2 21:53:02 2021

        NAME                                          STATE     READ WRITE CKSUM
        pool01                                        ONLINE       0     0     0
          gptid/75827da1-207a-11ec-afcf-005056a390b2  ONLINE       0     0     1

errors: Permanent errors have been detected in the following files:


Clear error

Run the following command to clear the error:

zpool clear <pool_name>

For the pool with data errors, i.e. with impacted files, delete or overwrite the affected files.

Then scrub the pool

zpool scrub <pool_name>
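Putting the steps together, a hedged sketch using the pool name pool01 from the example above. The zpool wait subcommand is available in OpenZFS 2.0+; on older releases, poll zpool status instead.

```shell
# Clear the error counters, then scrub and re-check the pool.
zpool clear pool01
zpool scrub pool01
zpool wait -t scrub pool01   # block until the scrub finishes
zpool status -v pool01
```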

Replace disk

To replace a disk, run the following command, where c0t0d2 is the new disk replacing c0t0d0. Note that zpool replace takes the pool name first.

zpool replace <pool_name> c0t0d0 c0t0d2

If the new disk is installed in the same location as the old one, then run the following command

zpool replace <pool_name> c0t0d0

Fusion Pools Basic


Fusion Pools are also known as ZFS Allocation Classes, ZFS Special vdevs, and Metadata vdevs.


A special vdev can store meta data such as file locations and allocation tables. The allocations in the special class are dedicated to specific block types. By default, this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks. This is a great use case for high performance but smaller sized solid-state storage. Using a special vdev drastically speeds up random I/O and cuts the average spinning-disk I/Os needed to find and access a file by up to half.

Creating a Fusion Pool

Go to Storage > Pools, click ADD, and select Create new pool.

A pool must always have one normal (non-dedup/special) vdev before other devices can be assigned to the special class. Configure the Data VDevs, then click ADD VDEV and select Metadata.

Add SSDs to the new Metadata VDev and select the same layout as the Data VDevs.

Using a Mirror layout is possible, but it is strongly recommended to keep the layout identical to the other vdevs. If the special vdev fails and there is no redundancy, the pool becomes corrupted and prevents access to stored data.

When more than one metadata vdev is created, then allocations are load-balanced between all these devices. If the special class becomes full, then allocations spill back into the normal class.
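For reference, a hedged CLI sketch of roughly what the UI does when creating such a pool; the device names are placeholders, and special_small_blocks is the property that sends small file blocks to the special vdev (setting it is optional).

```shell
# Create a pool with a mirrored data vdev and a mirrored special (metadata) vdev.
# Device names are examples only.
zpool create pool1 mirror sda sdb special mirror nvme0n1 nvme1n1

# Optionally store file blocks up to 32K on the special vdev as well.
zfs set special_small_blocks=32K pool1
```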

After the fusion pool is created, the Status shows a Special section with the metadata SSDs.

Auto TRIM allows TrueNAS to periodically check the pool disks for storage blocks that can be reclaimed. This can have a performance impact on the pool, so the option is disabled by default. For more details about TRIM in ZFS, see the autotrim property description in zpool.8.
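A hedged sketch of toggling this from the shell; the UI checkbox corresponds to the pool's autotrim property, and the pool name is a placeholder.

```shell
# Enable periodic TRIM on the pool and verify the setting.
zpool set autotrim=on pool1
zpool get autotrim pool1
```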


Fusion Pools

SSD Cache Basic



An SSD cache helps when the same set of files is accessed frequently. But if the system just holds media files, which most likely won't be visited again, a cache doesn't help.

RAM is required for an SSD cache, but adding RAM impacts performance more directly, because there is no duplication between the hard disk and the SSD.


The required amount of RAM is calculated before the cache is created.

A single SSD disk can only be configured for one volume, in read-only mode.

Two or more SSD disks can be configured as one RAID, for one volume, in read-write mode.

Currently, an SSD disk or RAID cannot be partitioned across different volumes.

FreeNAS/TrueNAS (Untested)

Others suggest having 64GB or more of RAM before adding a cache; adding a cache with only 16GB of RAM will slow the system down.

A fusion pool could be another choice, because the SSD can then also be used as storage, so no space is wasted.
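On the ZFS side, the SSD read cache (L2ARC) is attached and detached per pool. A hedged sketch; the pool and device names are placeholders.

```shell
# Attach an SSD as an L2ARC read cache for the pool.
zpool add pool1 cache nvme0n1

# The cache device can later be removed again without data loss.
zpool remove pool1 nvme0n1
```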

Creating FreeNAS VM on ESXi


ZFS advantages

Synology uses btrfs as the filesystem, which lacks bad-sector support, as btrfs does not support it natively. ZFS could be the better choice because of the advantages below.

Bad sector support

Although Synology can also handle bad sectors, btrfs itself doesn't. I am not sure how Synology handles it, but bad sectors can crash Synology volumes; the volume then goes into read-only mode, reconfiguration is required, and it takes time to move data out of the impacted volumes.

Block level dedup

This is an interesting feature of ZFS. Dedup on btrfs has to be done by running scripts, but ZFS can do it natively.
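A hedged sketch of enabling ZFS deduplication on a dataset; the dataset name is a placeholder, and note that ZFS dedup is RAM-hungry, so it is often advised against on small systems.

```shell
# Enable block-level deduplication on a dataset.
zfs set dedup=on pool1/download

# The pool-wide dedup ratio is reported in the DEDUP column.
zpool list pool1
```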



FreeNAS/TrueNAS has many features and covers the full functionality of a NAS.


FreeNAS/TrueNAS doesn't support ARM CPUs, and cheap ARM boards, such as the Raspberry Pi 4, don't support SATA, while a normal NAS should have around 4 SATA drives. So I do not intend to use an ARM board for the NAS.


I decided to try FreeNAS/TrueNAS on an old PC running ESXi, with 32GB RAM, an 8-thread CPU, and 10Gb Ethernet.

  1. Assign 8GB RAM and 2 threads to the FreeNAS VM, and enable memory and CPU hot plug in order to increase memory and CPU dynamically.

    Note: An error occurred when adding memory to the VM. See the post Error on adding hot memory to TrueNAS VM.

  2. Create an RDM disk to access the hard disk directly and improve disk performance.

  3. Create a VM network interface that supports 10Gb.

  4. Create an iSCSI disk to hold the VM image, because the RDM disk's vmdk file cannot be created on NFS.

Create iSCSI storage

Although the ESXi host is managed by vCenter, I could not find where to configure the iSCSI device there. So I logged in to the ESXi web interface and configured iSCSI under Storage -> Adapters.

Note: The Target is the IQN, not the name.

Configure network on ESXi

During creation, ESXi showed an error that two network interfaces had been detected on the network used by iSCSI, so I removed the second standby interface during iSCSI adapter creation, then put it back after the creation completed.

Create RDM disk

Follow the instructions in Raw Device Mapping for local storage (1017530) to create the RDM disk.

  1. Open an SSH session to the ESXi host.

  2. Run this command to list the disks that are attached to the ESXi host:

    ls -l /vmfs/devices/disks
  3. From the list, identify the local device you want to configure as an RDM and copy the device name.

    Note: The device name is likely prefixed with t10. and looks similar to:

  4. To configure the device as an RDM and output the RDM pointer file to your chosen destination, run this command:

    vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk

    For example:

    vmkfstools -z /vmfs/devices/disks/t10.F405E46494C4540046F455B64787D285941707D203F45765 /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk

    Note: The size of the newly created RDM pointer file appears to be the same as the Raw Device it is mapped to; however, this is a dummy file and does not consume any storage space.

  5. When you have created the RDM pointer file, attach the RDM to a virtual machine using the vSphere Client:

    • Right click the virtual machine you want to add an RDM disk to.
    • Click Edit Settings.
    • Click Add.
    • Select Hard Disk.
    • Select Use an existing virtual disk.
    • Browse to the directory you saved the RDM pointer file to in step 4, select the RDM pointer file, and click Next.
    • Select the virtual SCSI controller you want to attach the disk to and click Next.
    • Click Finish.

You should now see your new hard disk in the virtual machine inventory as Mapped Raw LUN.

Create VM on iSCSI storage

Create the VM using the following parameters:

  • VM type should be FreeBSD 12 (64 bit)
  • Memory recommended 8GB (configured 4GB at the beginning, no issue found)
  • SCSI adapter should be LSI Logic Parallel; otherwise, the hard disk cannot be detected.
  • Network adapter should be VMXNET3 to support 10Gb
  • Add the RDM disk to the VM (I did this after the server was created)

Configure FreeNAS

Configure the network in the FreeNAS console, then configure the pool, dataset, user, sharing, and ACL on the FreeNAS website.

Configuration in FreeNAS console

Configure FreeNAS network

Configure IP address

Configuration in FreeNAS website

Use a browser to access the IP configured in the previous step.


Configure DNS

Default route

The default route is added as a static route, pointing to the gateway.


Configure timezone


Configure the pool, giving it the pool name pool01


Under the pool, add a dataset, such as download


Create a user to be used later in the ACL.


Create SMB sharing

Configure owner in ACL

Assign owner and group to user created above, and select mode as Restrict.

Delete Pool (if needed)

Select Export/Disconnect in setting of pool.


ESXi hangs

The biggest issue encountered is that ESXi hangs, complaining that one PCPU is freezing. I will try installing directly from a USB drive to see whether the problem only happens on ESXi.

Network speed

Fast as expected

Compare with Synology DS2419+

They are not directly comparable, because the DS2419+ has more disks in its volume.

  • Slower.
  • Stopped when flushing the disks

Compare with Synology DS1812+

They are not directly comparable, because the DS1812+ has three slow disks in a RAID volume.

FreeNAS USB Drive Installation


To fully utilize the system with FreeNAS, and to test whether a similar hanging issue happens when it is installed directly on a USB drive without ESXi, the installation was done with the following steps.

Create on USB drive from ISO image

Create the USB drive on a Mac using the steps mentioned below.

Preparing the Media

Using rdiskX, the raw device node (rather than the buffered device), is faster, as mentioned in the instructions.

dd if=FreeNAS-9.3-RELEASE-x64.iso of=/dev/rdisk1 bs=64k
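On macOS, the target disk usually has to be unmounted before dd can write to it. A hedged sketch of the full sequence; disk1 is a placeholder, so verify the device number with diskutil list first.

```shell
# Identify the USB drive, unmount it, then write the ISO to the raw device.
diskutil list
diskutil unmountDisk /dev/disk1
sudo dd if=FreeNAS-9.3-RELEASE-x64.iso of=/dev/rdisk1 bs=64k
```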

Boot from USB drive

Boot from the USB drive created above; another USB drive will be used as the installation target.

Select BIOS mode

When choosing BIOS instead of UEFI, the PC BIOS could not be set to auto-boot from the USB drive, although it could be chosen for manual boot. So choose UEFI mode instead.

Secure Boot in the PC BIOS also needs to be set to Other OS instead of Windows UEFI; otherwise, the following error will occur:

System found unauthorized changes on the firmware error...

Configure network

When setting up aggregation mode, the two original interfaces that had already been configured will not be displayed in the aggregation menu.

The IP address will be configured on the aggregation interface.

FreeNAS vs Synology



Devices: DS1812+ and DS2419+


Pros

  • Hardware is very stable (more than 10 years without issues)
  • Low power and low noise
  • Reasonable price
  • Mixed-size hard disks in a volume
  • Easy hard disk upgrades
  • Easy to identify a hard disk
  • A crashed volume goes into read-only mode, so data can be retrieved
  • Many apps can be downloaded
  • NAS operations are organized in a user-friendly way


Cons

  • Cannot move or copy a shared folder after a volume crash; manual copy and re-setup are required
  • A bad hard disk can cause the extension unit to disconnect from the main unit
  • Doesn't accept a hard disk that failed SMART testing; it shows up in the failing-HDD list
  • Creates many special folders named @eaDir everywhere, which can cause issues with some services and hold a huge number of small files.
    Note: This folder creation feature cannot be disabled.
  • Dedup is not supported
  • CPU is too weak for virtual machines


Just started trying it on an i7 PC with 32GB RAM.


Pros

  • Open source
  • Can be installed on a normal PC
  • Hardware upgrades are easy, and disks previously used in another NAS can be imported
  • Insensitive to bad hard disks
  • ZFS handles bad sectors natively
  • ZFS can perform dedup natively (haven't tested)


Cons

  • Not easy to understand the tasks to be performed
  • Network configuration screens are scattered everywhere and not easy to find
  • Network aggregation configuration isn't easy to understand
  • Disk, pool, and dataset concepts require ZFS knowledge
  • Shared-folder permissions and ACLs are too complex for NAS operation