
Fix Synology `Allocation Status` Crashed Error

I use JBOD for my backup volume with checksum enabled, because I don't expect to lose the data on both the source and the backup at the same time. The catch is that a problem with one disk in a JBOD volume can crash the whole volume, which then becomes read-only. Checking the status further, only one disk showed Allocation Status as Crashed, while its Health Status remained Healthy.

In the past, because the faulty volume was read-only, I had to create new shared folders with new names, copy all the data into them, rebuild the disk array, and then move the data back onto the newly created volume. That also meant reconfiguring permissions and services such as NFS, Time Machine, and rsync. It could take days to complete all these tasks.

This time, I tried to recover the volume using a few commands.

Steps

Recreate Array

  • Log in to the Synology command line as root
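A minimal sketch, assuming SSH is enabled in Control Panel and using a hypothetical administrator account named admin (the host name ds1812 appears in the mdadm output below):
$ ssh admin@ds1812
$ sudo -i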

  • Find the array

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda5[0] sdc5[2] sdb5[1]
      1943862912 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md12 : active raid5 sdjc7[5] sdjb7[6] sdjd7[3] sdja7[7] sdje7[8]
      1953467648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md9 : active raid5 sdjc6[9] sdjb6[8] sdja6[6] sdjd6[7] sdje6[5]
      703225088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md6 : active raid5 sdjc5[6] sdjd5[5] sdjb5[9] sdja5[8] sdje5[7]
      1230960384 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md4 : active linear sdg3[0] sdh3[2](E) sdf3[1]
      2915921472 blocks super 1.2 64k rounding [3/3] [UUE]

md10 : active raid5 sdja8[2] sdje8[3] sdjc8[4]
      1953485824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md7 : active raid5 sdib6[4] sdie6[5] sdic6[3] sdia6[2] sdid6[1]
      3906971648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md3 : active raid5 sdie5[5] sdia5[4] sdid5[3] sdib5[7] sdic5[6]
      7794733824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md8 : active raid5 sdie7[0] sdib7[3] sdic7[2] sdia7[1]
      2930228736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sdh2[5] sdg2[4] sdf2[3] sdc2[2] sdb2[1] sda2[0]
      2097088 blocks [8/6] [UUUUUU__]

md0 : active raid1 sdh1[3] sdg1[4] sdf1[2] sda1[0] sdb1[1] sdc1[6]
      2490176 blocks [8/6] [UUUUU_U_]

unused devices: <none>
  • Collect RAID info. In the mdstat output above, md4 is the crashed array: member sdh3 carries Synology's (E) error flag and the array state shows [UUE].
# mdadm --examine /dev/sdh3
/dev/sdh3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6783225a:318612f7:3473d58a:09a977b2
           Name : ds1812:4  (local to host ds1812)
  Creation Time : Wed Dec 28 07:04:52 2022
     Raid Level : linear
   Raid Devices : 3

 Avail Dev Size : 3897584768 (1858.51 GiB 1995.56 GB)
  Used Dev Size : 0
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=65 sectors
          State : clean
    Device UUID : 14704640:a5536257:40c4ae47:2f008c53

    Update Time : Sat Jan 21 00:36:29 2023
       Checksum : 8685d50c - correct
         Events : 5

       Rounding : 64K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
root@ds1812:~#
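Before recreating the array, confirm the Device Role of every member so the devices can be passed to mdadm --create in the original order. A small loop of my own (the member partitions are taken from the md4 line in /proc/mdstat above):
# for d in /dev/sdg3 /dev/sdf3 /dev/sdh3; do echo "== $d =="; mdadm --examine $d | grep -E 'Device Role|Array State'; done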
  • Unmount the filesystem; if that fails, use the force and kill options
# umount -f -k /volume3
  • Stop array
# mdadm --stop /dev/md4
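If the array stopped cleanly, md4 should no longer appear in /proc/mdstat (a quick check of my own, not part of the original steps):
# grep md4 /proc/mdstat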
  • Recreate the array, keeping the original device order and array UUID, and answer the confirmation prompt with y
# mdadm --create --force /dev/md4 --metadata=1.2 --raid-devices=3 --level=linear /dev/sdg3 /dev/sdf3 /dev/sdh3 -u6783225a:318612f7:3473d58a:09a977b2
mdadm: ... appears to be part of a raid array:
       ...
Continue creating array? y

Now the array has been recreated and should be in the correct state:

# cat /proc/mdstat
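The md4 entry should now show all three members active, without the (E) flag, looking something like this (reconstructed from the earlier output, not captured verbatim):

md4 : active linear sdg3[0] sdf3[1] sdh3[2]
      2915921472 blocks super 1.2 64k rounding [3/3] [UUU]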

Check the filesystem and mount it again

The filesystem type is Btrfs, so use the following command (btrfsck is an alias of btrfs check) to verify it:

# btrfsck /dev/md4
Syno caseless feature on.
Checking filesystem on /dev/md4
UUID: 7a3a3941-e0c4-4505-8981-d309fb9482a5
checking extents
checking free space tree
checking fs roots
checking csums
checking root refs
found 2037124587520 bytes used err is 0
total csum bytes: 1986978456
total tree bytes: 2458648576
total fs tree bytes: 62947328
total extent tree bytes: 50741248
btree space waste bytes: 294577149
file data blocks allocated: 6689106694144
 referenced 1995731652608
root@ds1812:/# echo $?
0

Mount the filesystem; the Synology error beep should now stop.

# mount /volume3
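To confirm the volume is mounted read-write again, a quick check of my own:

# mount | grep volume3
# touch /volume3/.rwtest && rm /volume3/.rwtest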

References

How to handle a drive that has "Allocation Status: Crashed"
[HOWTO] repair a clean volume who stays crashed volume
mdadm(8) — Linux manual page
Manualy repair filesystem command line DS214
How to recover from BTRFS errors

Renumber storage pools and volumes in Synology NAS

Story

For me, memorizing things is a big problem, especially items with no logic behind them. In an environment that defies logic, I make many mistakes, which causes huge headaches.

Numbering in Synology NAS is one of those issues for me: I ended up with volume2 in Storage Pool 1, while volume1 is in Storage Pool 2. Normally my thinking is simple: all packages are installed on volume1 and all iSCSI LUNs are created on volume1 as well, because volume1 has the SSD cache.

But this configuration confused me whenever I received a notification: I had to work out which volume had the issue, because the notification mentioned the storage pool instead.

Today, I thought about changing the storage pool number again, because I knew it is a setting held by Synology, not by the Linux OS. Then I found the answer.

Warning

Luckily, I hit this issue on my DSM 6 box, not DSM 7, because reportedly this cannot be done in DSM 7.

Renumber storage pool

Read storage pool number

# synospace --meta -e
[/dev/vg1/volume_1]
---------------------
Descriptions=[]
Reuse Space ID=[]
[/dev/vg1]
---------------------
Descriptions=[]
Reuse Space ID=[reuse_2]

The result above shows that device /dev/vg1 is numbered as Storage Pool 2.

Set number

To set the storage pool number for a specific device, use the following command:

# synospace --meta -s -i reuse_{storage_pool_number} {device_name}
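For example, to renumber /dev/vg1 as Storage Pool 1 (an illustrative command built from the output above; make sure the target number is not already used by another pool):

# synospace --meta -s -i reuse_1 /dev/vg1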

Change volume number

Note: I haven't tested this part. But if it works, I might try to shrink a volume next time.

Stop services

Stop all Docker containers, etc., then stop the remaining services and unmount the volumes using the following command:

syno_poweroff_task -d

List LVs

lvm lvscan

Rename LV

lvm lvrename {VG name} {old LV name} {new LV name}
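Since the goal is to swap two volume numbers, and LV names only need to be unique within one volume group, the swap could look like this (untested, assuming volume_1 lives in vg1 and volume_2 in vg2, following the usual one-VG-per-storage-pool layout; if both LVs are in the same VG, rename through a temporary name such as volume_tmp first):

lvm lvrename vg1 volume_1 volume_2
lvm lvrename vg2 volume_2 volume_1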

Reboot

reboot

Shared folders and iSCSI services should be updated automatically; check that all your services are running correctly.

References

Renaming/renumbering storage pools and volumes
Synology Rename Volume and Storage Pool

Synology Volume Low Capacity Notification

DSM 6

In DSM 6, the notification thresholds can only be set as a global value:

  • Control Panel => Notifications => Advanced => Internal Storage

  • Click on Low Capacity of Volume, then define the Warning and Critical space thresholds.

DSM 7

In DSM 7, notifications can be defined at the individual volume level:

  • Storage Manager
  • Click the three dots at the top right corner of the desired volume
  • Select Settings
  • Scroll down to Low Capacity Notification and set thresholds

References

Adjusting Alert Thresholds