Tag: proxmox

Convert Proxmox Cluster Node to Standalone Local Mode

Convert Proxmox Cluster Node to Standalone PVE

When adding a Proxmox node to an existing cluster, a misconfigured IP-to-DNS reverse lookup returned a different hostname. As a result, the new node believed it was already a member of the cluster, while the other nodes did not.

Solution

Convert the node back to local mode

Convert the node

Stop the corosync and pve-cluster services on the node:

systemctl stop pve-cluster
systemctl stop corosync

Start the cluster file system again in local mode:

pmxcfs -l

Delete the corosync configuration files:

rm /etc/pve/corosync.conf
rm -r /etc/corosync/*

Start the file system again as a normal service:

killall pmxcfs
systemctl start pve-cluster

The node is now separated from the cluster.
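
To verify the separation (a hedged check; with the corosync configuration deleted, pvecm should report an error rather than cluster status):

```shell
pvecm status                 # expected to complain that corosync.conf is missing
ls /etc/pve/corosync.conf    # should report: No such file or directory
```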

Remove the node from cluster

If the node had already joined, delete it from any remaining node of the cluster:

pvecm delnode oldnode

If the command fails due to a loss of quorum in the remaining node, you can set the expected votes to 1 as a workaround:

pvecm expected 1

And then repeat the pvecm delnode command.

Cleanup the cluster files

This ensures that the node can be added to another cluster again without problems.

rm /var/lib/corosync/*

Remove /etc/pve/nodes/<node_name> from other nodes.
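
For example, on any remaining cluster node (reusing the oldnode placeholder from the pvecm delnode example):

```shell
# Run on a node that is still in the cluster; "oldnode" is the removed node's name.
rm -r /etc/pve/nodes/oldnode
```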

Stop remote access

Remove the node's SSH key from the /etc/pve/priv/authorized_keys file.
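
One way to script this (demonstrated on a scratch file, since the live file is /etc/pve/priv/authorized_keys; oldnode is a stand-in for the removed node's name, whose key line ends in root@oldnode):

```shell
# Demo on a temporary file; on a real cluster, edit /etc/pve/priv/authorized_keys.
auth=$(mktemp)
printf 'ssh-rsa AAAA... root@nodeA\nssh-rsa BBBB... root@oldnode\n' > "$auth"
sed -i '/ root@oldnode$/d' "$auth"   # delete the removed node's key line
cat "$auth"                          # only nodeA's key remains
```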

References

Remove a cluster node

Synchronize Proxmox Nodes UI Certificates

Synchronize Proxmox Nodes UI Certificates

If the same certificate is valid for multiple domains across the nodes of a Proxmox cluster, the following steps can be used to synchronize the certificates.

  • Log in to the new node (the target node)
  • Change to the target node's directory /etc/pve/nodes/<target_node_name>
  • Copy the two files pveproxy-ssl.pem and pveproxy-ssl.key from the /etc/pve/nodes/<source_node_name> directory into the target node's directory.
  • Restart the pveproxy service with systemctl restart pveproxy.
  • Refresh the UI web page
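
The steps above can be sketched as follows (using placeholder node names srcnode and dstnode; /etc/pve is shared across the cluster, so the source node's files are readable from the target node):

```shell
# Run on the target node; "srcnode" and "dstnode" are placeholders.
cd /etc/pve/nodes/dstnode
cp /etc/pve/nodes/srcnode/pveproxy-ssl.pem .
cp /etc/pve/nodes/srcnode/pveproxy-ssl.key .
systemctl restart pveproxy
```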

References

Restore Default Proxmox UI Certificate

Restore Default Proxmox UI Certificate

After installing a custom certificate, the Proxmox UI could no longer be displayed.

Solution

Remove the two files pveproxy-ssl.pem and pveproxy-ssl.key from the /etc/pve/nodes/<node_name> directory, then restart the pveproxy service.
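
A minimal sketch of the fix (keeping the <node_name> placeholder from the text; when pveproxy-ssl.pem is absent, pveproxy falls back to the default pve-ssl.pem):

```shell
rm /etc/pve/nodes/<node_name>/pveproxy-ssl.pem
rm /etc/pve/nodes/<node_name>/pveproxy-ssl.key
systemctl restart pveproxy   # serves the default pve-ssl.pem again
```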

References

Unable to access GUI after uploading my certificates
Proxmox Certificate Management

Install Vagrant on CentOS in Proxmox

Install Vagrant on CentOS

Steps

Check version

The latest version of Vagrant can be found at https://releases.hashicorp.com/vagrant.

Install

yum install https://releases.hashicorp.com/vagrant/2.2.19/vagrant_2.2.19_x86_64.rpm

Verify

vagrant --version

Init CentOS 7 with Vagrant

mkdir ~/vagrant-centos-7
cd ~/vagrant-centos-7
vagrant box add centos/7

Create Vagrantfile

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
end

Start Vagrant

vagrant up

SSH

vagrant ssh

Halt Vagrant

vagrant halt

Destroy Vagrant

vagrant destroy

Troubleshooting

If the following error appears, change the CPU type of the CentOS VM in Proxmox to host.

Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
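
The same change can be made from the Proxmox CLI (a sketch; <VM_ID> is a placeholder for the CentOS VM's numeric ID):

```shell
# Expose the physical CPU, including its virtualization extensions, to the VM.
qm set <VM_ID> --cpu host
```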

References

How to Install Vagrant on CentOS 7
centos/8 Vagrant box

Convert VMware VM to Proxmox VM

Convert VMware VM to Proxmox VM

Create a new VM in Proxmox

Linux

  • Create new VM in Proxmox, and select correct BIOS.

Windows

  • Create new VM in Proxmox, and select OVMF (UEFI).

Remove newly created disk

  • Detach disk
  • Remove detached disk

Convert disk

Run the following command to import the vmdk disk and convert it to qcow2:

qm importdisk <VM_ID> <Virtual Disk>.vmdk <storage> --format qcow2

Here, VM_ID is the VM's numeric ID. After the import completes, the newly created disk appears in the VM's hardware list as an unused disk.
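
For example, with hypothetical values (VM ID 105, a disk exported from VMware, and a directory-backed storage named local; the qcow2 format requires file-based storage):

```shell
qm importdisk 105 MyVM-disk1.vmdk local --format qcow2
```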

Add disk

Ubuntu

Double click the newly created disk, then select VirtIO Block device.

Select Write Back as cache method

RHEL and Windows

Double click the newly created disk, then select SATA device.

Select Write Back as cache method

Start VM and remove VMware Tools

Ubuntu

apt remove --auto-remove open-vm-tools
apt remove --auto-remove xserver-xorg-video-vmware
apt purge open-vm-tools
apt purge open-vm-tools-desktop

RHEL

yum remove open-vm-tools open-vm-tools-desktop

Reboot the server

Install VMware vSphere 7.0 on Proxmox

Install VMware vSphere 7.0 on Proxmox

Verify

root@proxmox:~# cat /sys/module/kvm_intel/parameters/nested
Y

Enable

Intel CPU

echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf

AMD CPU

echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf

Install Module

modprobe -r kvm_intel
modprobe kvm_intel

Note: for more information, see https://pve.proxmox.com/wiki/Nested_Virtualization

Install

ISO

Download ISO, such as VMware-VMvisor-Installer-7.0U2a-17867351.x86_64.iso

VM Configure

  • General Tab

    • Name:
  • OS Tab

    • Type: Linux
    • Version: 5.x – 2.6 Kernel
  • System Tab

    • Graphic card: Default
    • SCSI Controller: VMware PVSCSI
    • BIOS: SeaBIOS (OVMF (UEFI) should work too)
    • Machine: q35
  • Hard Disk Tab

    • Bus/Device: SATA
    • Disk size (GiB): 16
  • CPU Tab

    • Cores: 4 (at least 2; 4 is better if the physical CPU has enough cores)
    • Type: host (or Default (kvm64))
    • Enable NUMA: Check (if possible)
  • Memory Tab

    • Memory (MiB): 4096 (at least 4096; more is better)
    • Ballooning Device: Uncheck
  • Network Tab

    • Model: VMware vmxnet3

References

Nested Virtualization
How to Install/use/test VMware vSphere 7.0 (ESXi 7.0) on Proxmox VE 6.3-3 (PVE)

Proxmox VM with AVX support

Proxmox VM with AVX support

To install MongoDB in a Proxmox VM, the CPU needs AVX support; otherwise, the following error appears:

WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!

Solution

Change the CPU type to host, which bypasses CPU emulation and exposes the physical CPU's flags, including AVX, to the VM.
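
A hedged sketch of the change and a quick check (<vmid> is a placeholder; the qm command runs on the Proxmox host, the grep inside the guest):

```shell
# On the Proxmox host (placeholder <vmid>):
#   qm set <vmid> --cpu host
# Inside the guest, verify the flag is now exposed:
grep -q avx /proc/cpuinfo && echo "AVX available" || echo "no AVX"
```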

For container clusters, such as Docker Swarm or Kubernetes, MongoDB may need to be pinned to specific worker nodes whose VMs use CPU type host.
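
For Docker Swarm, one way to pin MongoDB to such a node is a placement constraint (a sketch with a placeholder worker name worker1 and an assumed label cpu=host):

```shell
# Label the worker whose VM runs with CPU type "host", then constrain the service to it.
docker node update --label-add cpu=host worker1
docker service create --name mongo \
  --constraint 'node.labels.cpu == host' \
  mongo:5.0
```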

Impact

Once the CPU type is set to host, live migration may no longer work, because the cluster's hosts may have different CPU models. However, host gives maximum performance.

References

AVX2
KVM: Which CPU for VM ('host' vs 'kvm64') to use for web load?