Author: Bian Xi

Install Synology CA Certificate into Linux OS

To trust a Synology self-generated CA on a Linux OS, use the following steps.

Export Synology CA Certificates from NAS

  • Launch Control Panel => Security
  • Click the Certificate tab
  • Click the Add button
  • Select the certificate named synology
  • Select Export certificate, then Next

The downloaded ZIP file contains 4 files:

cert.pem
privkey.pem
syno-ca-cert.pem
syno-ca-privkey.pem
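
To confirm which file is the CA certificate, its subject, issuer, and validity dates can be inspected with openssl (a quick check, assuming the ZIP has been extracted into the current directory):

openssl x509 -in syno-ca-cert.pem -noout -subject -issuer -dates

For a self-signed CA, the subject and issuer should be identical.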

Copy the Synology CA certificate

Copy the file syno-ca-cert.pem to the local CA folder on the server, rename it with a .crt extension, and update the CA store:

cp syno-ca-cert.pem /usr/local/share/ca-certificates/syno-ca-cert.crt
update-ca-certificates

Note: the certificate file extension must be .crt
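
To confirm the CA is now trusted system-wide, the CA certificate itself can be verified against the default trust store (a quick check; it should print OK after update-ca-certificates has run):

openssl verify /usr/local/share/ca-certificates/syno-ca-cert.crt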

Restart service

For any service that uses a certificate signed by the Synology CA, restart the service:

systemctl restart <service>

Test CA

Use the openssl command

Run the following commands:

openssl s_client -connect server_address:443 -CAfile /usr/local/share/ca-certificates/syno-ca-cert.crt
openssl s_client -connect server_address:443 -CApath /etc/ssl/certs

Both commands should end with:

Verify return code: 0 (ok)

Use the curl command

curl --verbose <URL> --cacert /usr/local/share/ca-certificates/syno-ca-cert.crt
curl --verbose <URL>

Lost network after PVE rebooted

Error

After a reboot of PVE, the network interfaces were detected, but no link came up: the ip address command showed all physical interfaces as DOWN, and the interface LED lights went off while the OS was loading.

Running the ifup command gave a permission denied error, and running python3 /usr/sbin/ifup -a gave the error that another instance of this application is already running.

Tracing the command with strace python3 /usr/sbin/ifup -a showed that it tried to access the folder /run/network, which did not exist.

Solution

Create the folder /run/network after the reboot, then run python3 /usr/sbin/ifup -a to bring up the network manually, as sketched below.
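
A minimal sketch of the workaround (run as root):

mkdir -p /run/network          # recreate the runtime folder that ifup expects
python3 /usr/sbin/ifup -a      # bring up all interfaces defined in /etc/network/interfaces
ip address show                # the physical interfaces should now be UP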

Note: This is only a temporary solution, because /run is recreated at boot and the folder /run/network will disappear again. Further troubleshooting is needed when time permits.

Renew Self Signed Certificate Using Synology DSM with custom CA

Renew server certificate

  • Launch Control Panel => Security
  • Click the Certificate tab
  • Click the Add button
  • Select Renew certificate, then Next
  • Fill in the information for the certificate signing request (CSR), then Next
  • Click Download

The following files are in the downloaded ZIP file:

  • server.csr
  • server.key
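
Before signing, the contents of the CSR can be checked with openssl (a quick check, run in the folder where the ZIP was extracted):

openssl req -in server.csr -noout -text -verify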

Generate certificate

Follow the steps in the page below to create and import the certificates:

Use Synology DSM to create Self Signed Certificate with custom CA
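
For reference, the CSR could also be signed outside DSM with the Synology CA files exported earlier (a hedged sketch using openssl; the 365-day validity is an example value). The resulting server.crt, together with server.key, would then be imported back into DSM:

openssl x509 -req -in server.csr -CA syno-ca-cert.pem -CAkey syno-ca-privkey.pem -CAcreateserial -days 365 -out server.crt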

References

Use Synology DSM to create Self Signed Certificate with custom CA

Split Mirror in RHEL

This procedure copies an LV to another VG using the LV mirror method. It cannot be performed online, because the vgsplit step requires the affected volumes to be deactivated first.

Note: RHEL doesn't have a cplv command.

Steps

# Create the source VG and a test LV on the first disk
vgcreate vg01 /dev/vdb
lvcreate -L 1G -n test1 vg01
# Add the second disk and mirror the LV onto it
vgextend vg01 /dev/vdc
lvconvert --type raid1 --mirrors 1 /dev/vg01/test1 /dev/vdc
lvdisplay -m vg01/test1
# Split one mirror image off as a new LV named test2 (it resides on /dev/vdc)
lvconvert --splitmirrors 1 --name test2 /dev/vg01/test1
lvdisplay -m vg01/test1
lvdisplay -m vg01/test2
# Deactivate the VG, then split /dev/vdc (carrying test2) into a new VG (-t is a dry run)
vgchange -a n vg01
vgsplit -t -v /dev/vg01 /dev/vg02 /dev/vdc
vgsplit -v /dev/vg01 /dev/vg02 /dev/vdc
lvs
# Reactivate both VGs
vgchange -a y vg01
vgchange -a y vg02
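
To confirm that the split copy holds the same data as the original, the two LVs can be compared once both VGs are active (a sketch; the LVs should not be in use while comparing):

cmp /dev/vg01/test1 /dev/vg02/test2 && echo "copies are identical"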

References

How to move / copy logical volume (lv) to another volume group (vg)?

OSIM Sundown Marathon 2023

The event started at 11:30 PM on 20 May 2023 for the 42 km category. During the run I got a stomachache, could not even take a deep breath, and could only walk after 25 km. Maybe it was because this was the first time I tried energy gels; although I vomited a bit and felt slightly better, the pain remained. Or maybe I ran too fast in the first 21 km (1:56), since I wanted to check my half marathon speed. Or maybe I inhaled water a few times, which caused severe coughing. It is still unclear.

Medals

Result

OSIM Sundown Marathon 2023 - Result Website

By Chip

By start

Map

Map

Race Guide

Race Guide

References

Sundown Website

Fix Synology Allocation Status Crashed Error

I use JBOD for the backup volume with checksum turned on, because I don't expect the data on both the source and the backup to be lost at the same time. The issue is that one failing disk in a JBOD volume can cause the volume to crash and become read-only. When checking the status further, only one disk showed Allocation Status as Crashed, while its Health Status was Healthy.

In the past, because the faulty volume was read-only, I had to create new folders with new names and copy all data into them, then rebuild the disk array and move the data back onto the newly created volume, which also required reconfiguring permissions and services such as NFS, Time Machine, and rsync. It could take days to complete all these tasks.

This time, I tried to recover the volume using a few commands.

Steps

Recreate Array

  • Log in to the command line of the Synology NAS as root

  • Find the array

# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sda5[0] sdc5[2] sdb5[1]
      1943862912 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md12 : active raid5 sdjc7[5] sdjb7[6] sdjd7[3] sdja7[7] sdje7[8]
      1953467648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md9 : active raid5 sdjc6[9] sdjb6[8] sdja6[6] sdjd6[7] sdje6[5]
      703225088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md6 : active raid5 sdjc5[6] sdjd5[5] sdjb5[9] sdja5[8] sdje5[7]
      1230960384 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md4 : active linear sdg3[0] sdh3[2](E) sdf3[1]
      2915921472 blocks super 1.2 64k rounding [3/3] [UUE]

md10 : active raid5 sdja8[2] sdje8[3] sdjc8[4]
      1953485824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md7 : active raid5 sdib6[4] sdie6[5] sdic6[3] sdia6[2] sdid6[1]
      3906971648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md3 : active raid5 sdie5[5] sdia5[4] sdid5[3] sdib5[7] sdic5[6]
      7794733824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

md8 : active raid5 sdie7[0] sdib7[3] sdic7[2] sdia7[1]
      2930228736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid1 sdh2[5] sdg2[4] sdf2[3] sdc2[2] sdb2[1] sda2[0]
      2097088 blocks [8/6] [UUUUUU__]

md0 : active raid1 sdh1[3] sdg1[4] sdf1[2] sda1[0] sdb1[1] sdc1[6]
      2490176 blocks [8/6] [UUUUU_U_]

unused devices: <none>
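
The crashed array can be spotted by the (E) error flag on one of its members: in the output above, md4 shows sdh3[2](E) with array state [UUE]. A quick filter for this (a sketch):

# grep -A 1 '(E)' /proc/mdstat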
  • Collect RAID info
# mdadm --examine /dev/sdh3
/dev/sdh3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 6783225a:318612f7:3473d58a:09a977b2
           Name : ds1812:4  (local to host ds1812)
  Creation Time : Wed Dec 28 07:04:52 2022
     Raid Level : linear
   Raid Devices : 3

 Avail Dev Size : 3897584768 (1858.51 GiB 1995.56 GB)
  Used Dev Size : 0
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=65 sectors
          State : clean
    Device UUID : 14704640:a5536257:40c4ae47:2f008c53

    Update Time : Sat Jan 21 00:36:29 2023
       Checksum : 8685d50c - correct
         Events : 5

       Rounding : 64K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
root@ds1812:~#
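
Before recreating the array, it can help to cross-check the Array UUID, device roles, and event counts of all three members (a sketch, using the md4 member devices shown above):

# mdadm --examine /dev/sdg3 /dev/sdf3 /dev/sdh3 | grep -E '/dev/|UUID|Role|Events'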
  • Unmount the filesystem; if that does not succeed, use the force and kill options
# umount -f -k /volume3
  • Stop array
# mdadm --stop /dev/md4
  • Recreate the array, answering the question with y
# mdadm --create --force /dev/md4 --metadata=1.2 --raid-devices=3 --level=linear /dev/sdg3 /dev/sdf3 /dev/sdh3 -u 6783225a:318612f7:3473d58a:09a977b2
mdadm: ... appears to be part of a raid array:
       ...
Continue creating array? y

Now the array has been recreated and should be in the correct state:

# cat /proc/mdstat

Check the filesystem and mount it again

The filesystem type is Btrfs, so use the following command to verify it:

# btrfsck /dev/md4
Syno caseless feature on.
Checking filesystem on /dev/md4
UUID: 7a3a3941-e0c4-4505-8981-d309fb9482a5
checking extents
checking free space tree
checking fs roots
checking csums
checking root refs
found 2037124587520 bytes used err is 0
total csum bytes: 1986978456
total tree bytes: 2458648576
total fs tree bytes: 62947328
total extent tree bytes: 50741248
btree space waste bytes: 294577149
file data blocks allocated: 6689106694144
 referenced 1995731652608
root@ds1812:/# echo $?
0

Mount the filesystem; the Synology error beep should now stop.

mount /volume3
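
To confirm the volume came back read-write, check the mount entry (a quick check, not part of the original procedure; the options should include rw):

grep ' /volume3 ' /proc/mounts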

References

How to handle a drive that has "Allocation Status: Crashed"
[HOWTO] repair a clean volume who stays crashed volume
mdadm(8) — Linux manual page
Manualy repair filesystem command line DS214
How to recover from BTRFS errors