Tag: remove

Convert Proxmox Cluster Node to Standalone Local Mode


When adding a Proxmox node to an existing cluster, a misconfiguration caused the IP-to-DNS reverse lookup to return a different name. As a result, the new node believed it was already a member of the cluster, while the other nodes did not recognize it.

Solution

Convert the node back to local mode

Convert the node

Stop the corosync and pve-cluster services on the node:

systemctl stop pve-cluster
systemctl stop corosync

Start the cluster file system again in local mode:

pmxcfs -l

Delete the corosync configuration files:

rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

Start the file system again as a normal service:

killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster.

Remove the node from the cluster

If the node is still listed as a cluster member, delete it from any remaining node of the cluster:

pvecm delnode oldnode

If the command fails due to a loss of quorum on the remaining nodes, you can set the expected votes to 1 as a workaround:

pvecm expected 1

And then repeat the pvecm delnode command.

Clean up the cluster files

This ensures that the node can be added to another cluster again without problems.

rm /var/lib/corosync/*

Remove /etc/pve/nodes/<node_name> from other nodes.
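Reusing the placeholder node name oldnode from the pvecm delnode example above, the leftover directory can be removed on each remaining node:

```shell
# Remove the stale node directory from the cluster filesystem.
# "oldnode" is the same placeholder name used with pvecm delnode above.
rm -rf /etc/pve/nodes/oldnode
```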

Stop remote access

Remove the old node's SSH key from the /etc/pve/priv/authorized_keys file.
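One way to remove the key non-interactively is with sed. The key comment root@oldnode below is an assumption (match whatever string identifies the old node in your file), and the sketch works on a scratch copy with fake keys so it is safe to run anywhere:

```shell
# Scratch file standing in for /etc/pve/priv/authorized_keys (fake keys).
printf 'ssh-rsa AAAAexample1 root@node1\nssh-rsa AAAAexample2 root@oldnode\n' > /tmp/authorized_keys.demo

# Delete every line whose key comment is root@oldnode (assumed comment format).
# On the real cluster, point sed at /etc/pve/priv/authorized_keys instead.
sed -i '/root@oldnode$/d' /tmp/authorized_keys.demo

cat /tmp/authorized_keys.demo
```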

References

Remove a cluster node

List zfs Filesystems By Creation Date


An Ubuntu system using ZFS as the OS filesystem can accumulate many snapshots. To remove the old ones, first list them by creation date (newest first) using the following command:

zfs list -H -t snapshot -o name -S creation

To remove the old snapshots, for example the oldest 18, combine the listing with tail and xargs:

zfs list -H -t snapshot -o name -S creation | tail -n 18 | xargs -n 1 zfs destroy
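Since -S creation sorts newest first, tail picks up the oldest entries. The pipeline can be dry-run first by echoing the destroy commands; the snapshot names below are made-up stand-ins for real zfs list output:

```shell
# Dry run on synthetic data: with real snapshots, replace the printf with
#   zfs list -H -t snapshot -o name -S creation
# and drop the "echo" to actually destroy.
printf 'pool/fs@new2\npool/fs@new1\npool/fs@old2\npool/fs@old1\n' \
  | tail -n 2 \
  | xargs -n 1 echo zfs destroy
# prints:
#   zfs destroy pool/fs@old2
#   zfs destroy pool/fs@old1
```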

References

How to delete all but last [n] ZFS snapshots?

Docker folder removed after removed Docker package in Synology


I was replacing a hard disk in Synology DSM 7 by recreating the volume that held the Docker package, which required removing the Docker package. I thought the docker folder (/volumeX/docker) might survive the removal, so I backed up the container images into it. But I was wrong: the docker folder was removed along with the package.

The docker folder is created by the Docker package, and it can be moved to another volume after stopping the package. I didn't see any data in it; I don't know what the folder is used for, and it was zero in size.