Creating FreeNAS VM on ESXi
Synology uses btrfs as its filesystem, which lacks native bad sector handling. ZFS could be the choice because of the advantages below.
Bad sector support
Synology can handle bad sectors to some extent, but btrfs itself doesn't. It is not clear how Synology handles them, but bad sectors can crash Synology volumes; the volume then goes into read-only mode, reconfiguration is required, and it takes time to move data out of the impacted volumes.
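To illustrate what bad sector support means in ZFS terms, here is a sketch using standard ZFS commands (the pool name tank is a placeholder, not from my setup):

```shell
# Scrub the pool so ZFS reads every block and verifies checksums
zpool scrub tank

# Show per-device read/write/checksum error counters and any
# files that could not be repaired
zpool status -v tank

# On a redundant vdev (mirror/raidz) ZFS rewrites bad blocks from
# a good copy automatically; clear the counters once resolved
zpool clear tank
```

On checksum failure ZFS repairs the block from redundancy instead of letting the volume degrade into read-only mode.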
Block level dedup
This is an interesting feature of ZFS. Deduplication on btrfs has to be done by running scripts, but ZFS can do it natively at the block level.
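Block-level dedup in ZFS is just a dataset property; a minimal sketch (pool and dataset names are placeholders):

```shell
# Enable block-level deduplication on one dataset
zfs set dedup=on tank/backups

# Check the pool-wide dedup ratio; note the dedup table is kept
# in RAM, so this feature needs plenty of memory
zpool list -o name,size,alloc,dedup tank
```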
FreeNAS/TrueNAS is feature rich and covers the full set of NAS functions.
FreeNAS/TrueNAS doesn't support ARM CPUs, and cheap ARM boards such as the Raspberry Pi 4 don't have SATA, while a normal NAS should take at least 4 SATA drives. So I don't intend to use an ARM board for the NAS.
Decided to try FreeNAS/TrueNAS on an old PC running ESXi, which has 32GB RAM, an 8-thread CPU, and 10Gb Ethernet.
Assign 8GB RAM and 2 threads to the FreeNAS VM, and enable memory and CPU hot plug so they can be increased dynamically.
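I did this in the vSphere UI; as a sketch, the same hot plug settings can also be toggled with the govc CLI (the VM name FreeNAS here is a placeholder):

```shell
# Enable CPU and memory hot add on the powered-off VM
govc vm.change -vm FreeNAS -cpu-hot-add-enabled=true -memory-hot-add-enabled=true

# Later, grow the VM without powering it off (memory in MB)
govc vm.change -vm FreeNAS -m 16384 -c 4
```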
Note: An error occurred when adding memory to the VM. See post Error on adding hot memory to TrueNAS VM.
Create an RDM disk so the VM accesses the hard disk directly, to improve disk performance.
Create a VM network interface which supports 10Gb.
Create an iSCSI disk to hold the VM image, because the RDM disk vmdk file can not be created on NFS.
Create iSCSI storage
Although the ESXi host is managed by vCenter, I could not find where to configure the iSCSI device there. So log in to the ESXi web interface and configure iSCSI under Storage -> Adapters.
Note: Target is the IQN, not the name.
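The same adapter setup can also be done from an SSH session on the ESXi host with esxcli; a sketch, where the adapter name vmhba64 and the portal IP are placeholders for your own values:

```shell
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Find the adapter name (e.g. vmhba64)
esxcli iscsi adapter list

# Add the target portal for dynamic (SendTargets) discovery
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.50:3260

# Rescan so the new LUN shows up as a device
esxcli storage core adapter rescan -A vmhba64
```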
Configure network on ESXi
During creation, ESXi showed an error that two network interfaces were detected on the network used by iSCSI, so I removed the second standby interface while creating the iSCSI adapter, then put it back after creation completed.
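The complaint comes from iSCSI port binding, which wants a vmkernel port backed by a single active uplink. As an alternative to removing the standby interface, a compliant vmk port can be bound explicitly; a sketch with placeholder adapter and port names:

```shell
# List vmkernel ports and the uplinks behind them
esxcli network ip interface list

# Bind a vmkernel port that has exactly one active uplink
# to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba64 -n vmk1

# Verify the binding
esxcli iscsi networkportal list -A vmhba64
```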
Create RDM disk
Following the instructions in VMware KB article Raw Device Mapping for local storage (1017530) to create the RDM disk:
Open an SSH session to the ESXi host.
Run this command to list the disks that are attached to the ESXi host:
ls -l /vmfs/devices/disks
From the list, identify the local device you want to configure as an RDM and copy the device name.
Note: The device name is likely prefixed with t10. and looks similar to the example below.
To configure the device as an RDM and output the RDM pointer file to your chosen destination, run this command:
vmkfstools -z /vmfs/devices/disks/diskname /vmfs/volumes/datastorename/vmfolder/vmname.vmdk
For example:
vmkfstools -z /vmfs/devices/disks/t10.F405E46494C4540046F455B64787D285941707D203F45765 /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk
Note: The newly created RDM pointer file appears to be the same size as the raw device it is mapped to, but it is a dummy file and does not consume any storage space.
When you have created the RDM pointer file, attach the RDM to a virtual machine using the vSphere Client:
- Right click the virtual machine you want to add an RDM disk to.
- Click Edit Settings.
- Click Add.
- Select Hard Disk.
- Select Use an existing virtual disk.
- Browse to the directory where you saved the RDM pointer file, select it, and click Next.
- Select the virtual SCSI controller you want to attach the disk to and click Next.
- Click Finish.
You should now see your new hard disk in the virtual machine inventory as Mapped Raw LUN.
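Before booting the VM, the mapping can be sanity-checked from the ESXi shell. A sketch using the paths from the example above; I am assuming vmkfstools -z produced the usual descriptor plus a -rdmp.vmdk mapping file:

```shell
# ls reports the full raw-device size for the pointer file...
ls -lh /vmfs/volumes/Datastore2/localrdm1/

# ...but it consumes almost no space on the datastore
du -h /vmfs/volumes/Datastore2/localrdm1/localrdm1-rdmp.vmdk

# The descriptor confirms which raw device it points at
cat /vmfs/volumes/Datastore2/localrdm1/localrdm1.vmdk
```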
Create VM on iSCSI storage
Create the VM using the following parameters:
- VM type should be FreeBSD 12 (64 bit)
- Memory recommended 8GB (configured 4GB at the beginning, no issue found)
- SCSI adapter should be LSI Logic Parallel; otherwise the hard disk can not be detected
- Network adapter should be VMXNET3 to support 10Gb
- Add the RDM disk into the VM (I did this after the VM was created)
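I created the VM in the vSphere UI; the parameter list above could also be sketched as a single govc command (datastore, network, and VM names are placeholders, not my actual values):

```shell
# Create the VM shell on the iSCSI datastore with the settings above:
# FreeBSD 12 guest, 8GB RAM, 2 vCPUs, LSI Logic Parallel, VMXNET3
govc vm.create -ds iSCSI-DS -g freebsd12_64Guest -m 8192 -c 2 \
  -disk.controller lsilogic -net "VM Network" -net.adapter vmxnet3 \
  -on=false FreeNAS
```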
Configure the network in the FreeNAS console, then configure the pool, dataset, user, sharing, and ACL in the FreeNAS web UI.
Configuration in FreeNAS console
Configure FreeNAS network
Configure IP address
Configuration in FreeNAS website
Use a browser to access the IP configured in the previous step.
The default route is added as a static route: 0.0.0.0 to the gateway.
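In FreeBSD terms, that 0.0.0.0 static route is simply the default route. From a FreeNAS shell it would look like this (the gateway IP is a placeholder; the web UI is the supported way to make it persistent):

```shell
# Equivalent of the 0.0.0.0 static route entry
route add default 192.168.1.1

# Verify the routing table
netstat -rn | grep default
```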
Configure the Pool by giving it a pool name.
Under Pool, add Dataset, such as
Create a user to be used later in the ACL.
Create SMB sharing
Configure owner in ACL
Assign owner and group to user created above, and select mode as
Delete Pool (if needed)
Export/Disconnect in the pool's settings.
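Export/Disconnect in the UI corresponds to a zpool export underneath; as a sketch from the shell (pool name is a placeholder):

```shell
# Cleanly detach the pool so its disks can be reused or moved
zpool export tank

# Or destroy it entirely if the data should be gone (irreversible)
# zpool destroy tank
```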
The biggest issue encountered is that ESXi hangs, complaining about one PCPU freezing. Will try installing directly to a USB drive to see whether the problem only happens on ESXi.
Fast as expected
Compare with Synology DS2419+
They are not comparable, because DS2419+ has more disks in the volume.
- Stopped when flushing the disks
Compare with Synology DS1812+
They are not comparable, because DS1812+ has three slow disks with raid in the volume.