Tag: truenas

Error replacing hard disk in zpool in TrueNAS

Got the following error when trying to replace a hard disk in a zpool; a reboot was required to fix it.

middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda

Partition exists

The first issue was with partitions that still existed on the old hard disk. Used fdisk to remove all partitions, but the replacement still failed.
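
For reference, a non-interactive way to do the same wipe, assuming the old disk is /dev/sda (sgdisk --zap-all destroys both GPT and MBR structures, so double-check the device name first):

# Wipe all partition table structures on the old disk (assumed to be /dev/sda)
sgdisk --zap-all /dev/sda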

Use force option

Then ticked the Force checkbox; the replacement started but stopped at 15%. Tried many times, but it still failed. Searched Google; other people hit the same issue and reported that it suddenly worked for them.

Run partprobe

Ran partprobe; the error showed that the kernel could not re-read the new partition table and a reboot was required.
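
The command itself is simple, assuming the disk being replaced is /dev/sda as in the error above:

partprobe /dev/sda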

Check partition after reboot

After the reboot, checked the partition table and found TrueNAS had repartitioned the disk like the other disks, with one 2GB swap partition. Then force replaced the hard disk in the pool again, and it worked.
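
A quick way to verify the new layout, again assuming the disk is /dev/sda:

lsblk /dev/sda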

Conclusion

This is a TrueNAS bug: it does not close the devices in the kernel before repartitioning the hard disk, so the partitions stay open and the kernel cannot re-read the new partition table.

Solution

Reboot

References

Cant create Pool on TrueNAS Scale (it does work on TrueNAS Core under same Hardware)
Cant create Pool on TrueNas Scale

Clear checksum error in FreeNAS/TrueNAS

Identify error

Errors can be found in the Storage section of the TrueNAS web page, or by running the zpool status -x command in the web shell.

A sample error can be found in the following output. Two pools have errors: pool0 is a mirror of two hard disks, the first of which has 154 checksum errors, and pool01 has one data error.

  pool: pool0
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
    attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
    using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Sat Oct  2 17:39:46 2021
config:

    NAME                                            STATE     READ WRITE CKSUM
    pool0                                           ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/bf410fcf-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0   154
        gptid/bfcc498a-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0     0

errors: No known data errors

  pool: pool01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
config:

    NAME                                          STATE     READ WRITE CKSUM
    pool01                                        ONLINE       0     0     0
      gptid/75827da1-207a-11ec-afcf-005056a390b2  ONLINE       0     0     1
errors: List of errors unavailable: permission denied

errors: 1 data errors, use '-v' for a list

For the second error, the impacted file can be found using the zpool status -v command:

root@truenas[~]# zpool status -v pool01
  pool: pool01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:23:22 with 1 errors on Sat Oct  2 21:53:02 2021
config:

        NAME                                          STATE     READ WRITE CKSUM
        pool01                                        ONLINE       0     0     0
          gptid/75827da1-207a-11ec-afcf-005056a390b2  ONLINE       0     0     1

errors: Permanent errors have been detected in the following files:

        /mnt/pool01/download/file.1
root@truenas[~]#

Clear error

Run the following command to clear the error:

zpool clear <pool_name>
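
For example, to clear the 154 checksum errors on pool0 from the earlier output:

zpool clear pool0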

For a pool with a data error, where a file is impacted, delete or overwrite the affected file first.
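
For example, to remove the impacted file found earlier:

rm /mnt/pool01/download/file.1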

Then scrub the pool:

zpool scrub <pool_name>

Replace disk

To replace a disk, run the following command, where c0t0d2 is the new disk replacing c0t0d0:

zpool replace <pool_name> c0t0d0 c0t0d2

If the new disk is installed in the same location as the old one, run:

zpool replace <pool_name> c0t0d0
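
In either case, the resilver progress can be watched afterwards with:

zpool status <pool_name>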

Baidu Disk download speed test

I used four environments to test downloading files from Baidu Disk.

iMac

CPU: Intel i7

This machine's download speed is quite fast, but when CPU usage gets high the fan turns on and it runs quite hot.

Windows 10 on Physical Machine

CPU: Intel Core 2 Duo

This machine is the slowest one, at less than 10MB/s. I don't understand why, but it looks like CPU speed impacts the download speed.

Windows 10 as VM on TrueNAS

CPU: Intel i7

This is the fastest; it can sometimes reach 30MB/s.

Ubuntu as VM on TrueNAS

CPU: Intel i7

This isn't fast, maybe because the client software has issues; the speed is about 15MB/s.

Conclusion

Surprisingly, Windows 10 running as a VM on TrueNAS is much faster than the others.

VM setup in TrueNAS

Set up Bridge Network for Host

If the VM interface is created on a physical interface, the VM will not be able to reach the host, which means it cannot use any services provided by TrueNAS.

To fix this, a bridge network must be used on the host. To migrate the physical network to a bridge network, the following steps are required.

Note: complete all the steps before clicking Test Changes.

  • Remove the IP address from the physical interface (bond0)
  • Create a bridge interface called br0 and attach the physical interface (bond0)
  • Add the IP address on the bridge interface
  • Click Test Changes
  • Wait until the IP address is reachable again
  • Make the change permanent by clicking the same button again
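
For reference, a rough sketch of what these steps do, expressed as plain Linux commands (the bond0/br0 names match the steps above, the IP address is a made-up example; on TrueNAS itself always use the web UI so the middleware configuration stays in sync):

# Create the bridge and attach the physical interface to it
ip link add br0 type bridge
ip link set bond0 master br0
# Move the IP address from the physical interface to the bridge
ip addr del 192.168.1.10/24 dev bond0
ip addr add 192.168.1.10/24 dev br0
ip link set br0 up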

Create VM

Selecting CPU passthrough should be faster.
Select VirtIO devices for the hard disk and network.

Download drivers from Fedora

Download both the storage driver and the network driver from the following website:
https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.139-1/virtio-win-0.1.139.iso

Load driver to detect hard disk

Select the driver CD and point to the \viostor\w10\amd64 folder for the storage device driver.

Update driver after installation

For the network card driver, update it from the driver CD after installation.

References

10 Easy Steps To Install Windows 10 on Linux KVM – KVM Windows

Configure rsync in TrueNAS

Create user/group

Create a user called rsync, with group rsync.

Create dataset

Create a dataset, owned by rsync:rsync, with permission 770.

Enable rsync service

Go to System Settings -> Services, enable rsync.

Create module

Click Configure (the edit icon), select the Rsync Module tab, fill in the following info, and save the configuration.

  • Module Name
  • Access Mode: Read & Write
  • User: rsync
  • Group: rsync

Test

Run the following command from the remote server:

rsync -avR --password-file=/root/.rsync/password \
    /tmp \
    rsync@<rsync_host>::NetBackup/`hostname`
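
The password file above must contain only the password and must not be readable by other users, or rsync will refuse to use it. A minimal sketch, with a placeholder password and assuming the module is set up with a matching auth user:

mkdir -p /root/.rsync
echo 'rsync_user_password' > /root/.rsync/password
chmod 600 /root/.rsync/password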

Exploring swap space on TrueNAS

There are quite a number of swap partitions on TrueNAS. In short, swap lives on mirrored partitions on /dev/sdX1, 2GB per mirror, while the data partitions are on /dev/sdX2.

As the top output below shows, the swap has not been used, so just leave it alone until performance is impacted.

Total Space

The following top output shows 4GB of swap space:

top - 16:31:05 up  5:26,  4 users,  load average: 11.79, 11.36, 10.90
Tasks: 540 total,   1 running, 538 sleeping,   0 stopped,   1 zombie
%Cpu(s):  0.8 us,  9.4 sy,  0.0 ni, 46.1 id, 43.1 wa,  0.0 hi,  0.5 si,  0.0 st
MiB Mem :  32052.4 total,  13522.1 free,  17935.6 used,    594.7 buff/cache
MiB Swap:   4096.0 total,   4096.0 free,      0.0 used.  13739.1 avail Mem

Devices

There are two partitions used as swap:

truenas# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-0 partition   2G   0B   -2
/dev/dm-1 partition   2G   0B   -3
truenas#

Partitions

The device-mapper info shows that /dev/dm-0 and /dev/dm-1 map to md127 and md126:

truenas# dmsetup ls
md127   (253:0)
md126   (253:1)
truenas# dmsetup info /dev/dm-0
Name:              md127
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        2
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: CRYPT-PLAIN-md127

truenas# dmsetup info /dev/dm-1
Name:              md126
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        2
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: CRYPT-PLAIN-md126

truenas#

MD info

In total, four partitions are involved, mirrored into two RAID1 devices.

Reported by proc

truenas# cat /proc/mdstat      
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md126 : active raid1 sde1[1] sdd1[0]
      2097152 blocks super non-persistent [2/2] [UU]

md127 : active raid1 sdc1[1] sdb1[0]
      2097152 blocks super non-persistent [2/2] [UU]

unused devices: <none>
truenas#

Reported by mdadm

truenas# mdadm --detail /dev/md127
/dev/md127:
           Version : 
     Creation Time : Wed Oct  6 11:08:56 2021
        Raid Level : raid1
        Array Size : 2097152 (2.00 GiB 2.15 GB)
     Used Dev Size : 2097152 (2.00 GiB 2.15 GB)
      Raid Devices : 2
     Total Devices : 2

             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
truenas# mdadm --detail /dev/md126
/dev/md126:
           Version : 
     Creation Time : Wed Oct  6 11:08:57 2021
        Raid Level : raid1
        Array Size : 2097152 (2.00 GiB 2.15 GB)
     Used Dev Size : 2097152 (2.00 GiB 2.15 GB)
      Raid Devices : 2
     Total Devices : 2

             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
truenas#

Block device info

Structure of partitions

truenas# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   1.4T  0 disk  
├─sda1        8:1    0     2G  0 part  
└─sda2        8:2    0   1.4T  0 part  
sdb           8:16   0   7.3T  0 disk  
├─sdb1        8:17   0     2G  0 part  
│ └─md127     9:127  0     2G  0 raid1 
│   └─md127 253:0    0     2G  0 crypt [SWAP]
└─sdb2        8:18   0   7.3T  0 part  
sdc           8:32   0 298.1G  0 disk  
├─sdc1        8:33   0     2G  0 part  
│ └─md127     9:127  0     2G  0 raid1 
│   └─md127 253:0    0     2G  0 crypt [SWAP]
└─sdc2        8:34   0 296.1G  0 part  
sdd           8:48   0 232.9G  0 disk  
├─sdd1        8:49   0     2G  0 part  
│ └─md126     9:126  0     2G  0 raid1 
│   └─md126 253:1    0     2G  0 crypt [SWAP]
└─sdd2        8:50   0 230.9G  0 part  
sde           8:64   0 298.1G  0 disk  
├─sde1        8:65   0     2G  0 part  
│ └─md126     9:126  0     2G  0 raid1 
│   └─md126 253:1    0     2G  0 crypt [SWAP]
└─sde2        8:66   0 296.1G  0 part  
sdf           8:80   1  14.9G  0 disk  
├─sdf1        8:81   1     1M  0 part  
├─sdf2        8:82   1   512M  0 part  
└─sdf3        8:83   1  14.4G  0 part  
zd0         230:0    0    20G  0 disk  
truenas# 

zpool structure

List all zpools:

truenas# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool    14G  3.70G  10.3G        -         -     2%    26%  1.00x    ONLINE  -
pool0       296G  9.13G   287G        -         -     1%     3%  1.00x    ONLINE  /mnt
pool1      1.36T   383G  1009G        -         -    16%    27%  1.09x    ONLINE  /mnt
pool2      7.27T  1.12T  6.14T        -         -     2%    15%  1.10x    ONLINE  /mnt
pool3       230G  2.63G   227G        -         -     0%     1%  1.00x    ONLINE  /mnt
truenas#

For an individual pool, the following command can be used to find out the partition info:

truenas# zpool status pool0 -v
  pool: pool0
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
    The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: resilvered 600K in 00:00:03 with 0 errors on Mon Oct  4 04:49:30 2021
config:

    NAME                                      STATE     READ WRITE CKSUM
    pool0                                     ONLINE       0     0     0
      mirror-0                                ONLINE       0     0     0
        bf410fcf-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0     0
        bfcc498a-2209-11ec-b8aa-001132dbfc9c  ONLINE       0     0     0

errors: No known data errors
truenas#

Find partition by ID

The zpool output doesn't show partition names, only partition IDs; use the following command to find the partition each ID maps to:

truenas# ls -l /dev/disk/by-partuuid
total 0
lrwxrwxrwx 1 root root 10 Oct  6 11:05 0e8d0027-65cc-4fa5-bb68-7d91668ca1f4 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Oct  6 11:05 41ba87d7-2137-11ec-9c17-001132dbfc9c -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct  6 11:05 41cbb8fa-2137-11ec-9c17-001132dbfc9c -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct  6 11:05 5626c0ae-2137-11ec-9c17-001132dbfc9c -> ../../sdd1
lrwxrwxrwx 1 root root 10 Oct  6 11:05 563bbde1-2137-11ec-9c17-001132dbfc9c -> ../../sdd2
lrwxrwxrwx 1 root root 10 Oct  6 11:05 672278c8-92bc-4e99-8158-25e53eb085c9 -> ../../sdf2
lrwxrwxrwx 1 root root 10 Oct  6 11:05 757ce69e-207a-11ec-afcf-005056a390b2 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct  6 11:05 75827da1-207a-11ec-afcf-005056a390b2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct  6 11:05 bf3063db-2209-11ec-b8aa-001132dbfc9c -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct  6 11:05 bf410fcf-2209-11ec-b8aa-001132dbfc9c -> ../../sdc2
lrwxrwxrwx 1 root root 10 Oct  6 11:05 bfb5835e-2209-11ec-b8aa-001132dbfc9c -> ../../sde1
lrwxrwxrwx 1 root root 10 Oct  6 11:05 bfcc498a-2209-11ec-b8aa-001132dbfc9c -> ../../sde2
lrwxrwxrwx 1 root root 10 Oct  6 11:05 e384f2ee-96dd-4b1b-ac68-8fe14ea92797 -> ../../sdf3
truenas#
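
To look up a single partition ID from the zpool output, the same listing can be grepped, using pool0's first member as an example:

ls -l /dev/disk/by-partuuid | grep bf410fcf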

Migrate Storage from FreeNAS/TrueNAS Core to TrueNAS Scale

Consideration

FreeNAS/TrueNAS Core runs on FreeBSD, which doesn't support Docker or KVM; it uses bhyve as its hypervisor. Using Docker therefore requires installing a VM, such as a RancherOS VM, which adds overhead to the system.

TrueNAS Scale is developed on Debian and is still in beta. To get more features under virtualization, TrueNAS Scale is worth considering.

Reinstall TrueNAS Scale

Installation of TrueNAS Scale is slower than TrueNAS Core, and the network configuration is different too.

Network

When configuring link aggregation for failover, I could not select the active and standby interfaces. Need to find out more about this configuration.

Import TrueNAS Core storage

The ZFS pools can be imported easily.

Rename zpool and dataset

To change the pool name, use the shell to import and export the pool before doing the GUI import. The following commands can be used:

zpool import pool_old pool_new                    # import the pool under its new name
zpool export pool_new
zpool import pool_new
zfs rename pool_new/old_name pool_new/new_name    # rename a dataset (full dataset paths required)
zpool export pool_new                             # export again so the GUI import can pick it up

The BSD hypervisor (bhyve) Basics

The BSD hypervisor, bhyve (pronounced "beehive"), is a hypervisor/virtual machine manager available on FreeBSD, macOS, and Illumos.

FreeNAS® VMs use the bhyve(8) virtual machine software. This type of virtualization requires an Intel processor with Extended Page Tables (EPT) or an AMD processor with Rapid Virtualization Indexing (RVI) or Nested Page Tables (NPT).

To verify that an Intel processor has the required features, use Shell to run grep VT-x /var/run/dmesg.boot. If the EPT and UG features are shown, this processor can be used with bhyve.

To verify that an AMD processor has the required features, use Shell to run grep POPCNT /var/run/dmesg.boot. If the output shows the POPCNT feature, this processor can be used with bhyve.
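
Putting the two checks together as commands (these come straight from the documentation text above):

# Intel: the processor can be used with bhyve if EPT and UG are shown
grep VT-x /var/run/dmesg.boot
# AMD: the processor can be used with bhyve if POPCNT is shown
grep POPCNT /var/run/dmesg.boot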

References

The BSD hypervisor
FreeNAS VM