Change Expected Votes for Proxmox Cluster

If enough nodes are down that the Proxmox cluster loses quorum, users cannot log in to the Proxmox web UI. To log in again, the number of expected votes needs to be lowered.

This change is only temporary, and the value cannot be set lower than the current Total votes reported by pvecm status.

Steps

Run the following command to check the cluster status

pvecm status

Run the following command to change the expected votes, here to 2

pvecm expected 2

Change node vote

Change quorum_votes in the file /etc/pve/corosync.conf to set a different quorum vote for each node, as in the sketch below.

The vote of a node can be 0, if the node is only a test node in the cluster.
The vote of a node can be more than 1, if the node has a more important role, such as running TrueNAS.
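A hypothetical nodelist section showing per-node quorum_votes; the node names and addresses below are made up, and config_version in the totem section must be increased whenever the file is edited:

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 2
    ring0_addr: 192.168.1.201
  }
  node {
    name: pve-test
    nodeid: 2
    quorum_votes: 0
    ring0_addr: 192.168.1.202
  }
}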

References

cluster quorum

Moving Proxmox VE Server to another Machine

The reason for moving the Proxmox VE server to another machine: I ran into an issue booting the Proxmox installation USB disk on a MacBook Pro, so I decided to boot the existing Proxmox VE server USB disk from this MacBook Pro instead.

Requirement

The previous Proxmox Virtual Environment USB disk must be a UEFI disk, because the MacBook Pro is a UEFI machine.

After boot

The network configuration /etc/network/interfaces needs to be changed, because the network interface name is different on the new machine.

First, find the new interface name using the ip a command, then update the two lines that reference it (iface enp0s10 inet manual and bridge-ports enp0s10).

auto lo
iface lo inet loopback

iface enp0s10 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.205/24
        gateway 192.168.1.254
        bridge-ports enp0s10
        bridge-stp off
        bridge-fd 0

The Wi-Fi interface can be disabled by commenting it out if it is not used.

#iface wlp1s0 inet manual

Migrate USB UEFI boot with iSCSI root Ubuntu to Proxmox VM

To move this Ubuntu server into a Proxmox Virtual Environment VM, the following migration is required.

Ubuntu configuration

The Ubuntu server has the following configuration

  • Boots from a USB device holding the /boot and /boot/efi filesystems
  • Connects to the iSCSI host using GRUB2 configuration
  • The root filesystem / is on an iSCSI disk

Conversion

Create Proxmox VM

  • Create a VM with a 2GB disk
  • Set the BIOS type to UEFI (OVMF)
  • Add an EFI disk
  • Attach an Ubuntu Live CD and boot from it

Create partition

Duplicate the USB device's partition layout onto the 2GB VM disk, as sketched below.
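A minimal sketch of the duplication, assuming the USB disk appears as /dev/sda on the old machine and the 2GB disk as /dev/sda inside the VM's live environment (the dump file name is made up):

# On the USB-booted Ubuntu, dump the partition table
sfdisk -d /dev/sda > usb-parts.dump
# Copy usb-parts.dump into the VM live environment, then restore it
sfdisk /dev/sda < usb-parts.dump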

Create filesystems

mkfs.vfat /dev/sda1
mkfs.btrfs /dev/sda2

Duplicate UUID

Duplicate UUID for /boot/efi

If the UUID of /boot/efi is not duplicated, the /etc/fstab file will need to be edited after reboot; one way to duplicate it is sketched below.
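The vfat filesystem can be recreated with the original volume ID, which blkid reports on the USB disk; the value below is hypothetical:

# blkid shows e.g. UUID="1234-ABCD" for the original EFI partition
# Recreate the filesystem with the same volume ID (dash removed)
mkfs.vfat -i 1234ABCD /dev/sda1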

Duplicate UUID for /boot

Use the following steps to duplicate the UUID of the BTRFS filesystem:

  • Retrieve the partition table from the USB Ubuntu

    sfdisk -d /dev/sda
  • Create the partitions on the 2GB VM disk

  • Duplicate the UUID of partition /boot/efi

  • Duplicate the UUID of partition /boot (see the sketch after this list)

    btrfstune -U <uuid> /dev/sda2
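A sketch of the /boot step, assuming the USB disk shows up as /dev/sdb in the live environment (a hypothetical device name):

# Read the UUID of the original /boot filesystem on the USB disk
blkid /dev/sdb2
# Stamp the same UUID onto the new BTRFS filesystem while it is unmounted
btrfstune -U <uuid> /dev/sda2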

Change the network interface name in the iSCSI configuration in GRUB

  • Retrieve the network interface name

    ip a
  • Mount the boot filesystem

    mount /dev/sda2 /boot
  • Edit the file /boot/grub/grub.cfg

Change all interface names in grub.cfg to the new name; a sed one-liner for this follows the example line. The fields of the ip= kernel parameter are ip=<client-ip>:<server-ip>:<gateway>:<netmask>:<hostname>:<interface>:<autoconf>:<dns>.

linux /vmlinuz-5.4.0-113-generic ... ip=192.168.1.99::192.168.1.254:255.255.255.0:fish:ensXX::192.168.1.55
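One way to rewrite every occurrence at once, assuming the old interface name was enp0s10 and ip a reports ens18 inside the VM (both names are hypothetical):

# Replace the old interface name everywhere in the mounted grub.cfg
sed -i 's/enp0s10/ens18/g' /boot/grub/grub.cfg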

Reboot VM

References

Modifying a BTRFS filesystem UUID

Change partition UUID in Ubuntu

Generate UUID

uuidgen

Change one partition

Note that sgdisk -U sets the disk's GUID; to change the unique GUID of a single partition, use -u with the partition number (here partition 1):

sgdisk -u 1:<uuid> /dev/sda

Change multiple partitions

Run the following command to dump the partition info

sfdisk -d /dev/sda > /tmp/sda.dsk

Edit the uuid= values in the file /tmp/sda.dsk.
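Each partition line in the dump carries a uuid= field; it looks roughly like the hypothetical line below:

/dev/sda1 : start=2048, size=1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=<new-uuid>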

Run the following command to reimport the modified partition table

sfdisk /dev/sda < /tmp/sda.dsk

Proxmox VM migration failed – no local-zfs rpool

When trying to migrate a VM from one node to another, the following error was encountered

Failed to sync data - could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available

The reason is that the two nodes have different storage pools.

Solution

Restrict the source node's storage pool local-zfs as below; an equivalent shell command follows the list.

  • Select Datacenter -> Storage
  • Select the storage pool local-zfs, and click Edit
  • Change Nodes from All (No restrictions) to the node the storage belongs to
  • Click OK to save the option
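The same restriction can be applied from the shell with pvesm; the node name below is hypothetical:

# Restrict the local-zfs storage definition to the node that owns the pool
pvesm set local-zfs --nodes pve1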

References

Migration of VM between nodes failed - could not activate storage 'local-zfs', zfs error: cannot imp

Add a Proxmox Node to Cluster

When using the web UI to add a node to the cluster, the following error occurred

ERROR: TFA-enabled login currently works only with a TTY. at /usr/share/perl5/PVE/APIClient/LWP.pm line 100

Solution

Use the command line below, run on the node being added via its shell, where <target ip> is an existing cluster member and <source ip> is the new node's own address for link 0

pvecm add <target ip> -link0 <source ip>

If a key validation error occurs, try the node's DNS name instead

pvecm add <target_dns_name>

Convert Proxmox Cluster Node to Standalone Local Mode

When adding the Proxmox node to an existing cluster, the IP-to-DNS reverse lookup returned a different name (a misconfiguration). The new node then believed it was already a cluster member, while the other nodes did not recognize it.

Solution

Convert the node back to local mode

Convert the node

Stop the corosync and pve-cluster services on the node:

systemctl stop pve-cluster
systemctl stop corosync

Start the cluster file system again in local mode:

pmxcfs -l

Delete the corosync configuration files:

rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

Start the file system again as a normal service:

killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster.

Remove the node from the cluster

If the node is still registered in the cluster, delete it from any remaining cluster node

pvecm delnode oldnode

If the command fails due to a loss of quorum in the remaining node, you can set the expected votes to 1 as a workaround:

pvecm expected 1

And then repeat the pvecm delnode command.

Clean up the cluster files

This ensures that the node can be added to another cluster again without problems.

rm /var/lib/corosync/*

Remove /etc/pve/nodes/<node_name> from other nodes.

Stop remote access

Remove the node's SSH key from the /etc/pve/priv/authorized_keys file.

References

Remove a cluster node

Synchronize Proxmox Nodes UI Certificates

If the same certificate can be used for multiple domains across the cluster's nodes, the following steps synchronize the certificate to another node; a command sketch follows the list.

  • Log in to the new node (the target node)
  • Change to the node's own directory /etc/pve/nodes/<target_node_name>
  • Copy the two files pveproxy-ssl.pem and pveproxy-ssl.key from the /etc/pve/nodes/<source_node_name> directory into the target node's directory
  • Restart the pveproxy service using the command systemctl restart pveproxy
  • Refresh the UI web page
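The copy steps as shell commands, run on the target node (the node names pve-src and pve-dst are hypothetical); /etc/pve is the shared cluster filesystem, so the source node's files are visible there:

# Copy the certificate pair from the source node directory to the target
cp /etc/pve/nodes/pve-src/pveproxy-ssl.pem /etc/pve/nodes/pve-dst/
cp /etc/pve/nodes/pve-src/pveproxy-ssl.key /etc/pve/nodes/pve-dst/
# Restart the proxy on the target node so the new certificate is served
systemctl restart pveproxy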
