Unlock Proxmox VM
A VM in Proxmox was locked as Config Locked (Suspended). To unlock it, run the following command in the shell:
qm unlock <vm_id>
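If the VM id is not known, it can be listed first (run on the node that hosts the VM):
qm list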
If two or more nodes are down in a Proxmox cluster, the user cannot log in to the Proxmox web page because the cluster loses quorum. In order to log in, the number of Expected votes needs to be changed.
This change is only temporary, and the value cannot be set smaller than the current Total votes.
Run the following command to check the status:
pvecm status
Run the following command to change Expected votes:
pvecm expected 2
Change quorum_votes in the file /etc/pve/corosync.conf to set a different quorum vote for each node.
The vote of a node can be 0 if the node is only a test node in the cluster.
The vote of a node can be more than 1 if the node has a more important role, such as running TrueNAS.
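As a rough sketch of where quorum_votes lives, the nodelist section of /etc/pve/corosync.conf looks like the example below; node names and addresses are made-up examples:
nodelist {
  node {
    name: pve-truenas
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 192.168.1.201
  }
  node {
    name: pve-test
    nodeid: 2
    quorum_votes: 0
    ring0_addr: 192.168.1.202
  }
}
When editing corosync.conf by hand, also increase config_version in the totem section so the change is picked up.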
The reason to move the Proxmox VE server to another machine is that I had an issue booting the Proxmox installation USB disk on a MacBook Pro. So I decided to boot the existing Proxmox VE server USB disk from this MacBook Pro instead.
The previous Proxmox Virtual Environment USB disk must be a UEFI disk, because the MacBook Pro is a UEFI machine.
The network configuration /etc/network/interfaces needs to be changed due to the different network interface name.
First, change the interface name, which can be found using the ip a command; the two lines that reference the interface name need to be updated.
auto lo
iface lo inet loopback
iface enp0s10 inet manual
auto vmbr0
iface vmbr0 inet static
address 192.168.1.205/24
gateway 192.168.1.254
bridge-ports enp0s10
bridge-stp off
bridge-fd 0
The WIFI interface can be disabled if it is not used.
#iface wlp1s0 inet manual
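After editing /etc/network/interfaces, the change can be applied without a reboot, assuming ifupdown2 is installed (the default on recent Proxmox VE releases):
ifreload -a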
To convert the Ubuntu server to Proxmox Virtual Environment, a migration is required.
The Ubuntu server has the following configuration: the /boot and /boot/efi filesystems are on the USB disk, and / is on an iSCSI disk.
Duplicating USB device partition to 2GB VM disk
The steps are:
Retrieve the partition table from the USB Ubuntu disk:
sfdisk -d /dev/sda
Create the partitions on the 2GB VM disk and create the filesystems:
mkfs.vfat /dev/sda1
mkfs.btrfs /dev/sda2
Duplicate the UUID of the /boot/efi partition. If the UUID for /boot/efi is not changed, the /etc/fstab file will need to be updated after reboot.
Duplicate the UUID of the /boot partition, using the following command to duplicate the UUID of the BTRFS filesystem:
btrfstune -U <uuid> /dev/sda2
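To see which UUIDs need to be duplicated, they can be read from the original USB disk first; a minimal check, assuming the USB disk appears as /dev/sdb on this machine:
blkid /dev/sdb1 /dev/sdb2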
Mount the /boot partition from the new disk and update the network configuration in /boot/grub/grub.cfg; the new interface name can be found with the ip a command.
mount /dev/sda2 /boot
Change all interface names in the grub.cfg, for example on the kernel command line:
linux /vmlinuz-5.4.0-113-generic ... ip=192.168.1.99::192.168.1.254:255.255.255.0:fish:ensXX::192.168.1.55
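As a rough reading of that line, the kernel ip= fields are, in order, client IP, server IP (empty), gateway, netmask, hostname, interface, autoconf (empty), and DNS server; ensXX is a placeholder for the real interface name reported by ip a:
ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<interface>:<autoconf>:<dns>
ip=192.168.1.99::192.168.1.254:255.255.255.0:fish:ensXX::192.168.1.55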
For the /dev/sda1 (/boot/efi) partition, generate a new UUID and set it with sgdisk:
uuidgen
sgdisk -U <uuid> /dev/sda1
Run the following command to retrieve the partition info:
sfdisk -d /dev/sda > /tmp/sda.dsk
Edit the UUID in the file /tmp/sda.dsk.
Run the following command to reimport the modified partitions:
sfdisk /dev/sda < /tmp/sda.dsk
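For orientation, an sfdisk -d dump looks roughly like the sketch below (all sizes and GUIDs are made up); the uuid= values on the partition lines are the ones to edit before reimporting:
label: gpt
label-id: 11111111-2222-3333-4444-555555555555
device: /dev/sda
unit: sectors

/dev/sda1 : start=2048, size=1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE
/dev/sda2 : start=1050624, size=3141632, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=FFFFFFFF-0000-1111-2222-333333333333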
Migration of VM between nodes failed - could not activate storage 'local-zfs'
When trying to migrate a VM from one node to another, the following error was encountered:
Failed to sync data - could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available
The reason is that the two nodes have different storage pools.
Change the source node storage pool local-zfs as below.
In the web UI, open Datacenter, then Storage.
Select local-zfs, and click Edit.
Change Nodes from All (No restrictions) to the node the storage belongs to.
Click OK to save the option.
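The same restriction can also be applied from the shell; a minimal sketch, assuming the storage id is local-zfs and the owning node is named pve1:
pvesm set local-zfs --nodes pve1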
Run the following command on the first node to create the cluster:
pvecm create <cluster_name>
Run the following command from the new node to join it:
pvecm add <target_node>
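To confirm the node joined, the membership can be checked from any cluster node:
pvecm nodes
pvecm status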
When using the web UI to add a node into the cluster, the following error occurred:
ERROR: TFA-enabled login currently works only with a TTY. at /usr/share/perl5/PVE/APIClient/LWP.pm line 100
Use the command line below to add the node via the shell instead:
pvecm add <target ip> -link0 <source ip>
If an error occurs on key validation, try the node name instead:
pvecm add <target_dns_name>
When adding a Proxmox node into an existing cluster, the IP-to-DNS reverse lookup returned a different name due to a misconfiguration. The new node then believed it was already a member of the cluster, while the other nodes did not.
To recover, convert the node back to local mode.
Stop the corosync and pve-cluster services on the node:
systemctl stop pve-cluster
systemctl stop corosync
Start the cluster file system again in local mode:
pmxcfs -l
Delete the corosync configuration files:
rm /etc/pve/corosync.conf
rm -r /etc/corosync/*
Start the file system again as a normal service:
killall pmxcfs
systemctl start pve-cluster
The node is now separated from the cluster.
Delete it from any remaining node of the cluster if it is still listed as a cluster node:
pvecm delnode oldnode
If the command fails due to a loss of quorum in the remaining node, you can set the expected votes to 1 as a workaround:
pvecm expected 1
And then repeat the pvecm delnode command.
This ensures that the node can be added to another cluster again without problems.
Remove the remaining corosync state files on the separated node:
rm /var/lib/corosync/*
Remove /etc/pve/nodes/<node_name> from the other nodes.
Remove the ssh key from the /etc/pve/priv/authorized_keys file.
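A minimal sketch of that cleanup, run on one of the remaining nodes; <node_name> is the removed node:
rm -r /etc/pve/nodes/<node_name>
Then edit /etc/pve/priv/authorized_keys and delete the line containing the removed node's key.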
If the same certificate can be used for multiple domains on the Proxmox cluster's nodes, the following steps can be used to synchronize the certificates.
Go to the target node directory /etc/pve/nodes/<target_node_name>.
Copy pveproxy-ssl.pem and pveproxy-ssl.key from the /etc/pve/nodes/<source_node_name> directory into the target node directory.
Restart the pveproxy service using the command systemctl restart pveproxy.
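A minimal sketch of the same steps as shell commands; since /etc/pve is shared across the cluster, the copy can be run from any node, while the pveproxy restart should be run on the target node. <source_node_name> and <target_node_name> are placeholders:
cp /etc/pve/nodes/<source_node_name>/pveproxy-ssl.pem /etc/pve/nodes/<target_node_name>/
cp /etc/pve/nodes/<source_node_name>/pveproxy-ssl.key /etc/pve/nodes/<target_node_name>/
systemctl restart pveproxy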