Category: truenas

Migrate TrueNAS VM to Proxmox VM

Migrate TrueNAS VM to Proxmox VM

After installing Proxmox, I also migrated two TrueNAS VMs to Proxmox: an Ubuntu VM and a Windows 10 VM.

Copy zvol to a file and transfer to Proxmox server

The zvol device is located at /dev/zvol/<zpool_name>/<zvol_name>. Create a disk image using the following command, then transfer it with scp.

dd if=/dev/zvol/pool0/server-xxxxxx of=/tmp/server.raw bs=8M
scp ...
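Before booting from the copy, it is worth checking that the image matches the source. The sketch below exercises the same dd-and-verify pattern on a throwaway file; the source path is only a stand-in for the real zvol device, and on a real transfer you would run sha256sum on both hosts.

```shell
# Minimal local sketch of the copy-and-verify pattern.
# "$SRC" stands in for the real zvol device (e.g. /dev/zvol/pool0/server-xxxxxx).
SRC=$(mktemp)
IMG=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=1M count=4 status=none
dd if="$SRC" of="$IMG" bs=1M status=none
# The two checksums must match before the image is trusted.
sha256sum "$SRC" "$IMG"
```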

Other ways to transfer

dd if=/dev/zvol/.... bs=8192 status=progress | ssh root@proxmox 'dd of=....raw bs=8192'

or

dd if=/dev/zvol/.... bs=8192 status=progress | gzip -1 - | ssh root@proxmox 'dd of=....raw.gz bs=8192'

or

dd if=/dev/zvol/.... bs=8192 status=progress | gzip -1 - | ssh root@proxmox 'gunzip - | dd of=....raw bs=8192'
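The compress-on-the-wire variant can be sanity-checked locally by replacing the ssh hop with a plain pipe; this sketch round-trips random data through the same gzip/gunzip pipeline and compares the result.

```shell
# Local stand-in for: dd | gzip -1 | ssh 'gunzip | dd'
# The ssh hop is replaced by a plain pipe so no second host is needed.
SRC=$(mktemp)
DST=$(mktemp)
dd if=/dev/urandom of="$SRC" bs=64K count=16 status=none
dd if="$SRC" bs=8192 status=none | gzip -1 | gunzip | dd of="$DST" bs=8192 status=none
cmp "$SRC" "$DST" && echo "round trip OK"
```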

Transfer raw file into Proxmox server

Create Proxmox VM

  • Create a VM with OVMF (UEFI) if the TrueNAS VM uses UEFI

  • Remove VM disk

  • Use the following command to import the disk

qm importdisk <vm_id> <raw_file> <storage_id>

For example

qm importdisk 100 vm.raw ds1812-vm_nfs1
  • Go to the VM hardware page

  • Select the unused disk and click the Add button to attach it to the VM

    • For Linux, select SCSI as the controller
    • For Windows, select SATA
  • Select Options => Boot Order and verify that the SCSI/SATA disk is enabled for boot

Boot TrueNAS VM

References

Export Virtual Machine from TrueNAS and Import VM to Proxmox
Migration of servers to Proxmox VE
Additional ways to migrate to Proxmox VE

TrueNAS k3s-server uses more than 10% CPU

TrueNAS k3s-server uses more than 10% CPU

k3s is supposed to be a lightweight Kubernetes, but it used more than 10% CPU with no containers running. The problem was reported when TrueNAS runs as a VM.

This issue also caused high disk utilization, which slowed down the whole system with huge I/O wait.

Unset Pool

There is a dataset called ix-applications created on the pool chosen for k3s. To stop the k3s process, unset the pool in Apps => Settings => Unset Pool.

Result

After Unset Pool, the CPU utilization dropped from 70% to 5%.

References

k3s-server uses 10% CPU for no reason

Migrate USB boot TrueNAS to Proxmox VM

Migrate USB boot TrueNAS to Proxmox VM

After installing Proxmox, I also migrated TrueNAS itself to Proxmox as a VM.

Create disk image

Use the dd command to copy the USB drive to a disk image file:

dd if=/dev/sdi of=/tmp/truenas.raw bs=10M

Create Proxmox VM

  • Create a VM with SeaBIOS

  • Remove VM disk

  • Use the following command to import the disk

qm importdisk <vm_id> <raw_file> <storage_id>

For example

qm importdisk 100 vm.raw ds1812-vm_nfs1
  • Go to VM hardware page

  • Select the unused disk and click the Add button to attach it to the VM

  • Select Options => Boot Order and verify that the SCSI disk is enabled for boot

  • List all disks and identify the ones to pass through

lsblk -o +MODEL,SERIAL,VENDOR
  • Find the device path by serial number
ls /dev/disk/by-id/*<SERIAL_NUM>*
  • Attach the storage disks as passthrough devices
qm set 100 -scsi2 <device>

Boot TrueNAS VM

Change network configuration

  • Interface with IP address
  • Bridge network interface

References

How to run TrueNAS on Proxmox?
Export Virtual Machine from TrueNAS and Import VM to Proxmox

Enable 2FA for TrueNAS Core

Enable 2FA for TrueNAS Core

2FA in TrueNAS Core uses the pam_oath.so module, which supports two-factor time-based (TOTP) SSH authentication.

Setup

Enable 2FA

  • Go to Credentials => 2FA

  • Click on Enable Two-Factor Authentication

  • Click on Show QR and scan it with Authy. This is the token for the root account.

  • Save

Test GUI

In another browser, log in with the user name, password, and the PIN code generated by Authy.

Make sure it works.

Enable SSH

  • Go to Credentials => 2FA

  • Select Enable Two-Factor Auth for SSH

  • Save

Enable root login

  • Go to System Settings => Services

  • Click the Configure button (the pencil icon) of the SSH service

  • Check Log in as Root with Password

  • Save

Test root login with 2FA

Use terminal

$ ssh host.example.com
Password: 
One-time password (OATH) for 'root':
Linux truenas.bx.net 5.10.70+truenas #1 SMP Wed Nov 3 18:30:34 UTC 2021 x86_64

The root login test succeeded.

Disable root login

  • Go to System Settings => Services

  • Click the Configure button (the pencil icon) of the SSH service

  • Uncheck Log in as Root with Password

  • Save

Setup for normal user

After enabling 2FA, normal users cannot log in; the following error appears in /var/log/auth.log:

error: PAM: User not known to the underlying authentication module for ...

Use these steps to enable 2FA for a normal user.

Note: If you lose the SSH connection, the root shell can still be accessed from the GUI via System Settings => Shell.

Generate a random code

# head -10 /dev/urandom | md5sum | cut -b 1-30
15ad027b56c81672214f4659ffb432

Get oath configuration file name

The usersfile name can be found using the following command:

# grep oath /etc/pam.d/sshd
auth    required    pam_oath.so    usersfile=/etc/users.oath    window=0

Update /etc/users.oath

Set up the OATH seed in /etc/users.oath:

HOTP/T30/6  user    -   15ad027b56c81672214f4659ffb432
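Each line in users.oath has four whitespace-separated fields: the mode (HOTP/T30/6 means time-based, 30-second step, 6 digits), the user name, a PIN field (- for none), and the hex seed. A small awk sketch labels the fields of the entry above:

```shell
# Split a users.oath entry into its four fields and label them.
echo "HOTP/T30/6  user    -   15ad027b56c81672214f4659ffb432" |
  awk '{ printf "mode=%s user=%s pin=%s seed=%s\n", $1, $2, $3, $4 }'
```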

Install oathtool

Use another Linux server, such as an Ubuntu server:

ubuntu# apt install oathtool

I chose another server because TrueNAS is a fully customized Debian system; it is better not to change its structure and packages.

Test pin code for SSH

Open another terminal and start an SSH login; when it prompts for the OATH code, run the second command on the Linux server.

$ ssh host.example.com
Password: 
One-time password (OATH) for 'user':

Now, quickly run the following command:

ubuntu# oathtool --totp -v 15ad027b56c81672214f4659ffb432
960776

Input OATH code in SSH login terminal. The code should be accepted.

Get Base32 secret

On the previous Ubuntu server, install the qrencode package:

ubuntu# apt install qrencode

Run the following command to collect the Base32 secret:

ubuntu# oathtool --totp -v 15ad027b56c81672214f4659ffb432
Hex secret: 15ad027b56c81672214f4659ffb432
Base32 secret: CWWQE62WZALHEIKPIZM77NBS
...
329770

Generate QR code

qrencode -t ansiutf8 "otpauth://totp/user@host.example.com?secret=CWWQE62WZALHEIKPIZM77NBS"
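The URI format matters more than the tool: the label before the ? is what the authenticator app displays, and secret carries the Base32 value. A sketch assembling it from variables (example values from the steps above):

```shell
# Build the otpauth:// URI from its parts; values are the examples above.
OTP_USER=user
OTP_HOST=host.example.com
SECRET=CWWQE62WZALHEIKPIZM77NBS
URI="otpauth://totp/${OTP_USER}@${OTP_HOST}?secret=${SECRET}"
echo "$URI"
# Then render it in the terminal, e.g.: qrencode -t ansiutf8 "$URI"
```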

Save into Authy

Use Authy to scan the QR code, then type TrueNAS in the search textbox to pick an icon, then save it.

Persistent change

As TrueNAS is a fully customized OS, its startup process regenerates the /etc/users.oath file, so only the root entry survives.

To overcome this issue, create a startup command in System Settings => Advanced => Init/Shutdown Scripts with the following settings:

Name: Append oath codes
When: POSTINIT
Command:

echo "HOTP/T30/6\t<user_name>\t-\t<user_code>" >> /etc/users.oath
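One caveat: plain echo only expands \t into a tab in shells whose echo honors backslash escapes (dash's does; bash's builtin needs -e). printf expands the escapes unconditionally, so a more portable form of the same append is sketched below; the user name and seed are example placeholders.

```shell
# printf interprets \t in its format string in every POSIX shell, so this
# reliably emits the tab-separated users.oath entry.
# "tm_user" and the seed are example placeholders, and the demo file path
# stands in for /etc/users.oath.
printf 'HOTP/T30/6\t%s\t-\t%s\n' "tm_user" "15ad027b56c81672214f4659ffb432" >> /tmp/users.oath.demo
cat /tmp/users.oath.demo
```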

Note: There are many ways to achieve this, such as backing up the users.oath file you created and restoring it. I just chose the easiest, maintenance-free way.

TrueNAS GUI

I could not find any place in the TrueNAS GUI to set this up for a user, and the user ID I created in TrueNAS cannot log in to the GUI at all. In fact, TrueNAS doesn't support normal user login to the GUI.

Possible enhancements

These are the limitations of the pam_oath.so implementation.

Only one usersfile

Only one usersfile can be specified in pam_oath.so; there are some suggestions, such as:

  • Enhance the source code so that pam_oath.so accepts %h as the usersfile parameter's value, pointing to the user's home directory.

Missing entry allowed

If a user is not in the usersfile, they cannot log in at all, which keeps the administrator very busy.

I like the example implementation below:

WARNING: I didn't test the following code, which was downloaded from Two-Factor Authentication with OTP (Admin Guide); it is just for reference.

  • Create a group called otpusers; users not in this group do not require 2FA. This is implemented in PAM:

auth [success=2 default=ignore] pam_succeed_if.so uid = 0                        # skip 2 lines for root
auth [success=1 default=ignore] pam_succeed_if.so user notingroup otpusers       # ignore users not yet in otpusers
auth requisite pam_oath.so usersfile=/var/security/auth/users.oath window=20     # accept one of 20 consecutive keys (in case clocks of user and server are out of sync)

  • Create a profile script to check whether the user is in the otpusers group; if not, create an OATH code and allow the user to save it.

WARNING: The script below was copied from the Internet and I didn't test it.

/etc/profile.d/create_secret.sh:

#!/bin/bash

# RRZK, 2015-12-10 (CO)

OATH_FILE="/var/security/auth/users.oath"
OTPGROUP="otpusers"

#ME=$(/usr/bin/whoami)
ME=${PAM_USER}

HOST=${HOSTNAME}

RET=0
/usr/bin/id -Gn ${ME} | /bin/grep ${OTPGROUP} >/dev/null 2>&1
RET=$?

if [ "${ME}" != "root" ] && [ ${RET} -ne 0 ]; then

# Disable CTRL-C
trap '' 2

/bin/echo -e "
Hello ${ME}

I will generate a TOTP (time based) OATH Secret for you...
"

# generate secret
/bin/echo "... generating secret"
SECRET=$(/usr/bin/head -10 /dev/urandom | /usr/bin/sha512sum | /bin/cut -b 19-50)

# generate base32 secret
/bin/echo "... generating base32 secret"
BASE32=$(/usr/bin/oathtool --totp -v ${SECRET} | /bin/grep 'Base32' | /bin/awk '{print $NF}')

# generate qrcode
/bin/echo "... generating qrcode"
/usr/bin/qrencode -l H -v 1 --background=FFFFFF -o ${ME}_oath.png "otpauth://totp/${ME}@${HOST}?secret=${BASE32}"

# insert secret in oath database
/bin/echo "... adding secret to oath database"
/bin/echo "... adding user to otpuser group"

TMPFILE=$(/bin/mktemp) || exit 1
/bin/echo -e "HOTP/T30/6\t${ME}\t-\t${SECRET}" > ${TMPFILE}
/usr/bin/sudo -u root /usr/local/sbin/add_secret.sh ${TMPFILE} ${OTPGROUP} ${ME}
/bin/rm -f ${TMPFILE}

/bin/echo "... finished"
echo "Secret: ${SECRET}
BASE32 Secret: ${BASE32}" > ${ME}_oath.dat

/bin/echo "
Your Secret is: ${SECRET}
Your BASE32 Secret is ${BASE32}
Your QR-Code is: ${ME}_oath.png

Enter your secret in your OTP Token (enter BASE32 without the trailing '=')
or
Display this file and scan it with your OTP Token APP. (X11Forward only)
"
/bin/echo "To display your QR-Code, press 'd'"
read INPUT
if [ "$INPUT" = "d" ]; then
    /usr/bin/display ${ME}_oath.png
fi

logout
fi

  • Then add the OATH code into the usersfile.

WARNING: The script below was copied from the Internet and I didn't test it.

/usr/local/sbin/add_secret.sh:

#!/bin/bash

# RRZK, 2015-12-10 (CO)

OATH_FILE=/var/security/auth/users.oath

TMPFILE=$1
OTPGROUP=$2
USER=$3

/bin/cat ${TMPFILE} >> ${OATH_FILE}
/usr/sbin/usermod -a -G ${OTPGROUP} ${USER}
exit 0



References

pam_oath
Two-factor time based (TOTP) SSH authentication with pam_oath and Google Authenticator
How to Create QR Codes From the Linux Command Line
How to generate a QR Code for Google Authenticator that correctly shows Issuer displayed above the OTP?
Enable user to login to webui
Two-Factor Authentication with OTP (Admin Guide)
sshd: How to enable PAM authentication for specific users under

Create timemachine share in TrueNAS

Create timemachine share in TrueNAS

Note: An issue suddenly appeared after some time, and I spent hours trying to fix it. In the end, I switched back to the Synology NAS for Time Machine. Check the Update section.

I spent many hours setting up Time Machine backup in TrueNAS Scale and encountered many issues. To save time next time, I am recording the important steps and parameters for the next setup.

Create dataset

Create a dataset called timemachine under zpool pool1. Set ACL Type of dataset to POSIX.

Create user

Creating a new user/group is not ideal if the NAS is also used by another user from the same machine at the same time, because TrueNAS will see logins from both users, which is confusing and hard to troubleshoot. On the other hand, without a dedicated user, the ownership/permissions of the shared dataset are hard to decide. To use a dedicated user for Time Machine, the following steps can be used.

Create a new user with user ID tm and group tm. This will create a sub-dataset in ZFS under the timemachine dataset. Change the home directory of tm to the dataset created in the Create dataset section.

The Auxiliary Groups include a group named builtin_users. Beware of this group; it will appear again later in this post.

Note: If the backup folder was copied from another system, change the owner/group to tm:builtin_users and the permissions to 775 to avoid permission issues.

Set permission of dataset

Strip ACL

This is the easiest way with the least headache. Strip the ACL if it is wrong, or just use the traditional UNIX permission system.

Use ACL

Change the owner and group both to tm.

Set the dataset permission preset to POSIX_HOME; this gives both owner and group read/write/execute permission, while others get read only.

Create share

In TrueNAS, the actual Time Machine support is implemented by an option in SMB sharing named Multi-user time machine, so select Multi-user time machine as the Purpose; otherwise, the Time Machine option will not be set, and it is unchangeable.

Note that the Time Machine option is selected and Path Suffix is set to %U; both are unchangeable. The %U means a sub-dataset will be created under the shared dataset for each user, so users will not share the same view.

Each host to be backed up will create a folder <hostname>.sparsebundle, which will be shown as a disk in macOS.

File/Folder permission

Remove ACL

If strange permission issues occur, uncheck ACL in the SMB share, and then restart the SMB service.

Note: The error can be verified by accessing the share from Windows.

Workable permission

The actual files created in the sub-dataset are under the builtin_users group with permission 770, not tm, for an unknown reason. Sample output is shown below.

truenas# ls -la /mnt/pool2/timemachine/tm
total 70
drwxrwx---+ 3 tm tm               4 Oct  3 22:32 .
drwxrwxr-x+ 6 tm tm               6 Oct  3 21:23 ..
-rwxrwx---+ 1 tm builtin_users 6148 Oct  3 21:32 .DS_Store
drwxrwx---+ 4 tm builtin_users   10 Oct  3 23:02 shark.sparsebundle
truenas#

Update

Suddenly, Time Machine on TrueNAS stopped working; the error message showed that macOS could not find the server.

I tried to map the SMB drive manually, but that failed too. I could not access the Time Machine shared folder, although the rest of the shared folders were accessible.

mDNS

During troubleshooting, I found that macOS could not see TrueNAS in Finder; the issue could be related to missing mDNS.

Final settings

The following settings made Time Machine work again.

  • Purpose: Multi-user time machine
  • Enable ACL
  • Browsable to Network Clients
  • Time Machine (Can not change)
  • Legacy AFP compatibility
  • Enable Shadow Copies
  • Enable Alternate Data Streams
  • Enable SMB2/3 Durable Handles

But the shared folder still needs to be opened once before macOS Time Machine can see it, so the mDNS issue is still there.

Add self-signed certificate for TrueNAS

Add self-signed certificate for TrueNAS

To use a self-signed certificate in TrueNAS, the following steps are required.

Add Certificate into TrueNAS

  • Select Credentials -> Certificates
  • In Certificates section, click on Add button
  • In Add Certificate window, give a name, and select Import Certificate
  • In the Extra Constraints section, paste the contents of the cert file and key file into the Certificate and Private Key textboxes

Configure GUI certificate

  • Select System Settings -> General
  • In GUI section, click on Settings button
  • In GUI Settings window, select the certificate to be used in GUI SSL Certificate option
  • Click on Save button

Restart

The UI web server restarts automatically.

Refresh the browser; you need to click the reload button.

The most insane issue with TrueNAS

The most insane issue with TrueNAS

This morning, I saw the login screen of my TrueNAS, so I decided to have a look. After login, the TrueNAS rebooted...

This is really a design issue: both shutdown and reboot are poorly designed, since their URLs can be reused without any warning prompt.

In fact, I knew about this issue, but I am only careful enough right after a reboot or shutdown has been performed. After yesterday's reboot, I hadn't tried to log in using the UI URL.

Even though I am careful, this issue leads me to avoid using the browser's Back button, because the URL can be in the history.

The solution could be very easy: just change the GET method to POST in both the reboot and shutdown pages, with an additional variable. But when will they make such a change, given it has been a mature product for years?

Change max arc size on TrueNAS SCALE

Change max arc size on TrueNAS SCALE

After upgrading the memory to 64GB, memory usage stays below 32GB even with two VMs running together. To utilize all the memory, increasing the ZFS cache size is one possible solution.

c_max

The max ARC size is defined as a module parameter, which can be viewed with the following commands:

truenas# grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    62277025792
truenas# cat /sys/module/zfs/parameters/zfs_arc_max
62277025792
truenas#
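The values above are raw bytes; a quick awk conversion shows what they mean in GiB:

```shell
# Convert the reported c_max from bytes to GiB.
awk 'BEGIN { printf "%.0f GiB\n", 62277025792 / (1024 * 1024 * 1024) }'
# The smaller value set later, 60129542144, works out to 56 GiB.
```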

To adjust this value, the following command can be used, but the change is not persistent.

echo 60129542144 > /sys/module/zfs/parameters/zfs_arc_max

Suggestion from others

Many suggestions can be found, and some of them may be workable, for example:

Create module option file

echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf

But they may not be suitable for a NAS OS, because such files are not covered by the configuration backup the NAS OS provides:

  • An upgrade of the OS can simply overwrite or delete the file
  • The file can be lost during an OS rebuild

Update sysctl (not workable)

One suggestion is to update vfs.zfs.arc_max using sysctl, along with disabling autotune. But sysctl only works for kernel parameters, and no ZFS parameters could be found, because ZFS is loaded as a module.

Implementation

The parameter needs to be modified using TrueNAS web interface, to ensure that it will be saved during configuration export via System Settings => General => Manage Configuration => Download File.

So, following command is added into System Settings => Advanced => Init/Shutdown Scripts with When set to Post Init

echo 60129542144 > /sys/module/zfs/parameters/zfs_arc_max

Verification

Verify the setting as below.

arc_summary | grep size

Note: The number is in bytes

Reduce the number

To reduce the number without a reboot, the following command needs to be executed to shrink the cache immediately:

echo 3 > /proc/sys/vm/drop_caches

References

Why I cannot modify "vfs.zfs.arc_max" in WebUI?
QEMU / KVM: Using the Copy-On-Write mode

Error of txg_sync blocked for more than 120 seconds

Error of txg_sync blocked for more than 120 seconds

The following error appeared on my dmesg monitoring screen.

txg_sync blocked for more than 120 seconds --> excessive load

If I'm not wrong, it could be caused by slow hard disk speed: the TrueNAS ZFS cache is about 61GB, which can take a long time to flush back to the hard disk.

Like other filesystems, ZFS has write-back caching (aka write-behind caching), which flushes data back to the hard disk at a specific interval. ZFS has synchronous and asynchronous modes, which differ somewhat from the read-only, write-through, and write-back modes.

Beyond that, ZFS behaves differently because of copy-on-write (COW):

  • It always writes to a new block due to copy-on-write
  • Big files with random writes, such as VM disk files, can become fragmented
  • It cannot reduce write operations even when the same block is written repeatedly

Therefore, copy-on-write should be disabled for VM images; but if so, the snapshot function could be lost.

Reference

Read-Through, Write-Through, Write-Behind Caching and Refresh-Ahead

TODO: Move dataset to another zpool in TrueNAS

Move dataset to another zpool in TrueNAS

In Synology, moving a shared folder to another volume is quite easy and can be done via the UI. In TrueNAS, I could not find any such task to select.

Duplicate dataset from snapshot

The workable solution is to use the zfs command in an SSH session to duplicate the dataset, then export the old pool and import the new one.

First make a snapshot poolX/dataset@initial, then use the following command to duplicate the dataset snapshot to the new zpool.

zfs send poolX/dataset@initial | zfs recv -F poolY/dataset

Update new dataset

Then make another snapshot poolX/dataset@incremental, and use the following command to send the incremental update to the new zpool.

zfs send -i poolX/dataset@initial poolX/dataset@incremental | zfs recv poolY/dataset

Activate new dataset

To make the new dataset usable, a snapshot rollback needs to be performed on the new dataset.

Update share

Change the share to point to the new pool.

Update client

This is only required if the client depends on the server's filesystem structure, such as with NFS.

References

Migrate to smaller disk
Note: The pv (Pipe Viewer) command is not installed in TrueNAS by default.