Tag: cache

Change max arc size on TrueNAS SCALE

After upgrading the memory to 64GB, memory usage stays below 32GB even when running two VMs together. To utilize all of the memory, increasing the ZFS cache size is one possible solution.

c_max

The max ARC size is defined as a module parameter, which can be viewed with the following commands

truenas# grep c_max /proc/spl/kstat/zfs/arcstats
c_max                           4    62277025792
truenas# cat /sys/module/zfs/parameters/zfs_arc_max
62277025792
truenas#

To adjust this value, the following command can be used, but it is not persistent.

echo 60129542144 > /sys/module/zfs/parameters/zfs_arc_max

Suggestions from others

Many suggestions can be found online, and some of them may be workable, for example

Create module option file

echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf

But these approaches may not be suitable for a NAS OS, because such files are not covered by the configuration backup provided by the NAS OS.

  • An OS upgrade can simply overwrite or delete the file
  • The file can be lost during an OS rebuild

Update sysctl (not workable)

Another suggestion is to update vfs.zfs.arc_max using sysctl, along with disabling autotune. However, sysctl only works for kernel parameters, and no ZFS parameters can be found this way, because ZFS is loaded as a module.
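For illustration, the FreeBSD-style parameter name does not exist under Linux's sysctl tree on SCALE (the output below is what procps sysctl typically prints for a missing key; treat it as an example):

truenas# sysctl vfs.zfs.arc_max
sysctl: cannot stat /proc/sys/vfs/zfs/arc_max: No such file or directory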

Implementation

The parameter needs to be modified via the TrueNAS web interface, to ensure that it will be saved during configuration export via System Settings => General => Manage Configuration => Download File.

So, the following command is added in System Settings => Advanced => Init/Shutdown Scripts, with When set to Post Init

echo 60129542144 > /sys/module/zfs/parameters/zfs_arc_max
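The value is in bytes; for example, 60129542144 is 56 GiB, which can be double-checked with shell arithmetic:

truenas# echo $((56 * 1024 * 1024 * 1024))
60129542144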

Verification

Verify the setting as below.

arc_summary | grep size

Note: The number is in bytes
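Alternatively, re-read the module parameter shown earlier to confirm the new value took effect:

cat /sys/module/zfs/parameters/zfs_arc_max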

Reduce the number

To reduce the value without a reboot, the following command needs to be executed to shrink the cache immediately

echo 3 > /proc/sys/vm/drop_caches
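The shrinking can then be observed by re-checking the current ARC size (size, as opposed to c_max) in arcstats:

grep -w size /proc/spl/kstat/zfs/arcstats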

References

Why I cannot modify "vfs.zfs.arc_max" in WebUI?
QEMU / KVM: Using the Copy-On-Write mode

ZFS cache and log

There are two kinds of cache, read cache and write cache.

Read cache

Called ARC and L2ARC.

ARC (Adaptive Replacement Cache)

Held in memory, caching the information that will be required in the near future, while discarding the data that will be needed furthest ahead in time.

This can be set using a kernel/module parameter, such as zfs_arc_max.

L2ARC (Level 2 ARC)

Resides on a cache device, as an extension of ARC. It can be created using the following command

zpool add tank cache ada3

Note: tank is the pool name, ada3 is the block device used for caching
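After adding it, the cache device should show up under a cache section in the pool status (a quick check, using the same example pool name):

zpool status tank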

Write cache

Called ZIL (ZFS Intent Log).

Asynchronous

By default, ZFS caches write data in memory before writing it to disk; this is called asynchronous mode.

Synchronous

Synchronous mode makes sure data is written to disk before continuing; this can be set using the following command

zfs set sync=always mypool/dataset1
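The current setting can be confirmed with zfs get (using the same example dataset; the default value is standard):

zfs get sync mypool/dataset1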

ZFS Intent Log (ZIL)

This is the temporary space used to store data before it is written to the main disks, which can speed up write operations: a synchronous write is considered complete once the data is written into the ZIL. The ZIL can be placed on a dedicated device, called a SLOG (Separate Intent Log) device, which can be added as follows

zpool add tank log ada3

Note: tank is the pool name, ada3 is the block device used as the SLOG

If a faulty SLOG device is a concern, it can be mirrored too.

zpool add tank log mirror ada3 ada4
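If the SLOG is no longer needed, log devices can be removed from the pool again (same example names as above):

zpool remove tank ada3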

References

Configuring ZFS Cache for High Speed IO
ZFS Performance with Databases (Cached)

SSD Cache Basic

Consideration

SSD cache helps when the same set of files is accessed frequently. But if the system just holds media files, most likely they won't be visited again, so a cache doesn't help.

RAM is required for an SSD cache anyway, but adding RAM impacts performance more directly, because there is no data duplicated between the hard disk and the SSD.

Synology

The required amount of RAM is calculated before the cache is created.

One SSD disk can only be configured for one volume, in read-only mode.

Two or more SSD disks can be configured as one RAID, for one volume, in read-write mode.

Currently, one SSD disk or RAID cannot be partitioned to serve different volumes.

FreeNAS/TrueNAS (Untested)

Others suggest having 64GB or more of RAM before adding a cache; otherwise, adding a cache on a system with only 16GB of RAM will slow it down.

A Fusion pool could be another choice, because the SSD can be used as storage as well, with no waste of space.

Synology SSD Cache Issues

Synology SSD Cache has two issues, as below

  • Unable to use one disk/array to support multiple volumes.

    • No answer found on the Internet, and some people mentioned that it is a new feature request.
    • A possible solution is to create a partition/volume on an SSD storage pool, then use that volume as the cache device.
    • Synology uses LVM cache; whether native Linux can do the same has not been checked (see the sketch after this list).
  • Utilization of the cache is very low, about 5GB on a frequently used volume, such as volume1.

    • Improved in DSM 7, which supports the Pin all Btrfs metadata option. But the utilization has not been validated yet.
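For reference, plain Linux can build a similar setup with LVM's dm-cache. A minimal sketch, assuming a volume group vg0 with an existing logical volume lv_data and a spare SSD partition at /dev/sdb1 (all names are examples, and this is untested on Synology):

pvcreate /dev/sdb1
vgextend vg0 /dev/sdb1
lvcreate --type cache-pool -L 100G -n cpool vg0 /dev/sdb1
lvconvert --type cache --cachepool vg0/cpool vg0/lv_data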