Exploring swap space on TrueNAS
There are quite a number of swap partitions on TrueNAS. In short, swap lives on mirrored /dev/sdX1 partitions, 2GB per mirror, while the data partitions are on /dev/sdX2.
As the top output below shows, the swap has not been used yet, so it can be left alone until performance is impacted.
Total Space
The following screen shows 4GB of swap space in the top command output
top - 16:31:05 up 5:26, 4 users, load average: 11.79, 11.36, 10.90
Tasks: 540 total, 1 running, 538 sleeping, 0 stopped, 1 zombie
%Cpu(s): 0.8 us, 9.4 sy, 0.0 ni, 46.1 id, 43.1 wa, 0.0 hi, 0.5 si, 0.0 st
MiB Mem : 32052.4 total, 13522.1 free, 17935.6 used, 594.7 buff/cache
MiB Swap: 4096.0 total, 4096.0 free, 0.0 used. 13739.1 avail Mem
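For scripting, the swap figures can be pulled out of a captured top header with awk. A minimal sketch using the sample line above (spacing collapsed; field positions assume top's default header format):

```shell
# Parse the "MiB Swap" line from a captured `top` header (sample copied from above)
swap_line='MiB Swap: 4096.0 total, 4096.0 free, 0.0 used. 13739.1 avail Mem'
total=$(echo "$swap_line" | awk '{print $3}')   # field 3: total MiB
used=$(echo "$swap_line" | awk '{print $7}')    # field 7: used MiB
echo "swap total=${total} MiB, used=${used} MiB"
```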
Devices
Two partitions are used as swap
truenas# swapon
NAME TYPE SIZE USED PRIO
/dev/dm-0 partition 2G 0B -2
/dev/dm-1 partition 2G 0B -3
truenas#
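On a live system, `swapon --show=NAME,SIZE --noheadings` prints the same columns in a script-friendly form. As a sketch, the total can also be summed straight from the captured listing above (assuming all sizes carry a G suffix, as here):

```shell
# Sum the SIZE column of the swapon listing above (all sizes in GiB here)
swapon_out='/dev/dm-0 partition 2G 0B -2
/dev/dm-1 partition 2G 0B -3'
echo "$swapon_out" | awk '{sub(/G$/, "", $3); sum += $3} END {print sum "G swap configured"}'
```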
Partitions
The device-mapper info shows that /dev/dm-0 and /dev/dm-1 map to md127 and md126.
truenas# dmsetup ls
md127 (253:0)
md126 (253:1)
truenas# dmsetup info /dev/dm-0
Name: md127
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 2
Event number: 0
Major, minor: 253, 0
Number of targets: 1
UUID: CRYPT-PLAIN-md127
truenas# dmsetup info /dev/dm-1
Name: md126
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 2
Event number: 0
Major, minor: 253, 1
Number of targets: 1
UUID: CRYPT-PLAIN-md126
truenas#
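The CRYPT-PLAIN-* UUIDs above indicate that each swap device is a plain dm-crypt mapping layered on top of an md array (lsblk later confirms the crypt type). The dm-number-to-md-name mapping can be derived mechanically from the `dmsetup ls` output; a sketch using the captured lines:

```shell
# Derive /dev/dm-N -> /dev/mdXXX from the `dmsetup ls` output above
dmsetup_ls='md127 (253:0)
md126 (253:1)'
# strip "(", ")", ":" so that $3 becomes the minor number (the dm-N suffix)
echo "$dmsetup_ls" | awk '{gsub(/[():]/, " "); print "/dev/dm-" $3 " -> /dev/" $1}'
```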
MD info
In total, 4 partitions are involved, mirrored into 2 raid1 devices.
Reported by proc
truenas# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sde1[1] sdd1[0]
2097152 blocks super non-persistent [2/2] [UU]
md127 : active raid1 sdc1[1] sdb1[0]
2097152 blocks super non-persistent [2/2] [UU]
unused devices: <none>
truenas#
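The `super non-persistent` flag means these arrays carry no on-disk superblock and are assembled fresh at boot. The member partitions of each array can be scraped from the `mdNNN : active ...` lines; a sketch over the captured output:

```shell
# List each md array and its member partitions, from the mdstat lines above
mdstat='md126 : active raid1 sde1[1] sdd1[0]
md127 : active raid1 sdc1[1] sdb1[0]'
echo "$mdstat" | awk '{
  printf "%s:", $1
  # members start at field 5; strip the [role] suffix from each
  for (i = 5; i <= NF; i++) { d = $i; sub(/\[[0-9]+\]$/, "", d); printf " %s", d }
  print ""
}'
```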
Reported by mdadm
truenas# mdadm --detail /dev/md127
/dev/md127:
Version :
Creation Time : Wed Oct 6 11:08:56 2021
Raid Level : raid1
Array Size : 2097152 (2.00 GiB 2.15 GB)
Used Dev Size : 2097152 (2.00 GiB 2.15 GB)
Raid Devices : 2
Total Devices : 2
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
truenas# mdadm --detail /dev/md126
/dev/md126:
Version :
Creation Time : Wed Oct 6 11:08:57 2021
Raid Level : raid1
Array Size : 2097152 (2.00 GiB 2.15 GB)
Used Dev Size : 2097152 (2.00 GiB 2.15 GB)
Raid Devices : 2
Total Devices : 2
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
truenas#
Block device info
Structure of partitions
truenas# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.4T 0 disk
├─sda1 8:1 0 2G 0 part
└─sda2 8:2 0 1.4T 0 part
sdb 8:16 0 7.3T 0 disk
├─sdb1 8:17 0 2G 0 part
│ └─md127 9:127 0 2G 0 raid1
│ └─md127 253:0 0 2G 0 crypt [SWAP]
└─sdb2 8:18 0 7.3T 0 part
sdc 8:32 0 298.1G 0 disk
├─sdc1 8:33 0 2G 0 part
│ └─md127 9:127 0 2G 0 raid1
│ └─md127 253:0 0 2G 0 crypt [SWAP]
└─sdc2 8:34 0 296.1G 0 part
sdd 8:48 0 232.9G 0 disk
├─sdd1 8:49 0 2G 0 part
│ └─md126 9:126 0 2G 0 raid1
│ └─md126 253:1 0 2G 0 crypt [SWAP]
└─sdd2 8:50 0 230.9G 0 part
sde 8:64 0 298.1G 0 disk
├─sde1 8:65 0 2G 0 part
│ └─md126 9:126 0 2G 0 raid1
│ └─md126 253:1 0 2G 0 crypt [SWAP]
└─sde2 8:66 0 296.1G 0 part
sdf 8:80 1 14.9G 0 disk
├─sdf1 8:81 1 1M 0 part
├─sdf2 8:82 1 512M 0 part
└─sdf3 8:83 1 14.4G 0 part
zd0 230:0 0 20G 0 disk
truenas#
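lsblk confirms the layout from the introduction: the data disks each carry a 2G sdX1 swap partition (sda1 exists but is currently not in any array, and sdf is the boot device), and mirroring halves the raw capacity. The arithmetic:

```shell
# 4 of the 2G swap partitions (sdb1, sdc1, sdd1, sde1) are paired into
# 2 raid1 mirrors, so usable swap is half of the raw total
partitions=4; part_gib=2
raw=$(( partitions * part_gib ))
usable=$(( raw / 2 ))   # each mirror stores one copy on two partitions
echo "raw=${raw}G usable=${usable}G"
```

The 4G result matches the 4096.0 MiB total reported by top.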
zpool structure
List all zpools
truenas# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
boot-pool 14G 3.70G 10.3G - - 2% 26% 1.00x ONLINE -
pool0 296G 9.13G 287G - - 1% 3% 1.00x ONLINE /mnt
pool1 1.36T 383G 1009G - - 16% 27% 1.09x ONLINE /mnt
pool2 7.27T 1.12T 6.14T - - 2% 15% 1.10x ONLINE /mnt
pool3 230G 2.63G 227G - - 0% 1% 1.00x ONLINE /mnt
truenas#
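For scripting, `zpool list -Hp` prints the same table without headers and with exact byte values. As a rough sketch, the human-readable ALLOC column above can also be summed directly (assuming only G and T suffixes appear, as here):

```shell
# Sum the ALLOC column of the `zpool list` output above (G/T suffixes only)
alloc='3.70G 9.13G 383G 1.12T 2.63G'
echo "$alloc" | tr ' ' '\n' | awk '
  { n = $1 + 0; if ($1 ~ /T$/) n *= 1024; sum += n }   # normalize T to GiB
  END { printf "%.1f GiB allocated in total\n", sum }'
```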
For an individual pool, the following command can be used to find out the partition info
truenas# zpool status pool0 -v
pool: pool0
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: resilvered 600K in 00:00:03 with 0 errors on Mon Oct 4 04:49:30 2021
config:
NAME STATE READ WRITE CKSUM
pool0 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
bf410fcf-2209-11ec-b8aa-001132dbfc9c ONLINE 0 0 0
bfcc498a-2209-11ec-b8aa-001132dbfc9c ONLINE 0 0 0
errors: No known data errors
truenas#
Find partition by ID
zpool status doesn't show partition names, only partition IDs (GPT partition UUIDs); use the following command to find the partition each ID maps to.
truenas# ls -l /dev/disk/by-partuuid
total 0
lrwxrwxrwx 1 root root 10 Oct 6 11:05 0e8d0027-65cc-4fa5-bb68-7d91668ca1f4 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Oct 6 11:05 41ba87d7-2137-11ec-9c17-001132dbfc9c -> ../../sdb1
lrwxrwxrwx 1 root root 10 Oct 6 11:05 41cbb8fa-2137-11ec-9c17-001132dbfc9c -> ../../sdb2
lrwxrwxrwx 1 root root 10 Oct 6 11:05 5626c0ae-2137-11ec-9c17-001132dbfc9c -> ../../sdd1
lrwxrwxrwx 1 root root 10 Oct 6 11:05 563bbde1-2137-11ec-9c17-001132dbfc9c -> ../../sdd2
lrwxrwxrwx 1 root root 10 Oct 6 11:05 672278c8-92bc-4e99-8158-25e53eb085c9 -> ../../sdf2
lrwxrwxrwx 1 root root 10 Oct 6 11:05 757ce69e-207a-11ec-afcf-005056a390b2 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 6 11:05 75827da1-207a-11ec-afcf-005056a390b2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 6 11:05 bf3063db-2209-11ec-b8aa-001132dbfc9c -> ../../sdc1
lrwxrwxrwx 1 root root 10 Oct 6 11:05 bf410fcf-2209-11ec-b8aa-001132dbfc9c -> ../../sdc2
lrwxrwxrwx 1 root root 10 Oct 6 11:05 bfb5835e-2209-11ec-b8aa-001132dbfc9c -> ../../sde1
lrwxrwxrwx 1 root root 10 Oct 6 11:05 bfcc498a-2209-11ec-b8aa-001132dbfc9c -> ../../sde2
lrwxrwxrwx 1 root root 10 Oct 6 11:05 e384f2ee-96dd-4b1b-ac68-8fe14ea92797 -> ../../sdf3
truenas#
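On a live system, `readlink -f /dev/disk/by-partuuid/<id>` resolves one ID at a time. As a sketch, the two pool0 vdev IDs can be rewritten from lines copied out of the listing above:

```shell
# Resolve the pool0 vdev IDs to partition names, using the listing above
links='bf410fcf-2209-11ec-b8aa-001132dbfc9c -> ../../sdc2
bfcc498a-2209-11ec-b8aa-001132dbfc9c -> ../../sde2'
echo "$links" | awk '{ sub(/^\.\.\/\.\.\//, "", $3); print $1 " is " $3 }'
```

This shows pool0 is the mirror of sdc2 and sde2, consistent with its 296G size and the two 298.1G disks in lsblk.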