Month: January 2022

Proxmox VM with AVX support

To install MongoDB 5.0+ in a Proxmox VM, the VM's CPU needs AVX support. Otherwise, the following error appears:

WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!

Solution

Change the CPU type to host, which bypasses CPU emulation and exposes the physical CPU's flags, including AVX, to the guest.
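The CPU type can also be changed from the Proxmox CLI with qm set — a sketch, where the VM ID 100 is an assumption:

```shell
# Set the CPU type of VM 100 (hypothetical ID) to host,
# exposing the physical CPU's flags, including AVX, to the guest.
qm set 100 --cpu host

# Verify the change in the VM configuration.
qm config 100 | grep cpu
```

The VM must be restarted (not just rebooted from inside the guest) for the new CPU type to take effect.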

In a container cluster, such as Docker Swarm or Kubernetes, MongoDB might need to be pinned to specific worker nodes whose CPU type is set to host.
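In Kubernetes, one way to do this pinning is a node label plus a nodeSelector — a sketch; the node name kworker1 and the label avx=true are assumptions, not from the original post:

```shell
# Label the worker node whose Proxmox CPU type is host (hypothetical node name).
kubectl label node kworker1 avx=true

# Constrain the MongoDB pod to labelled nodes via a nodeSelector.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mongodb
spec:
  nodeSelector:
    avx: "true"
  containers:
  - name: mongodb
    image: mongo:5.0
EOF
```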

Impact

Once the CPU type is set to host, live migration may not work, because the physical hosts may have different CPU models. But host gives maximum performance.

References

AVX2
KVM: Which CPU for VM ('host' vs 'kvm64') to use for web load?

Reset kubernetes master or worker

Reinit

After running kubeadm init, the following error occurred:

dial tcp 127.0.0.1:10248: connect: connection refused.

This error is commonly caused by the kubelet and Docker using different cgroup drivers. Run the following commands to switch Docker to the systemd cgroup driver and reinitialize the Kubernetes master or worker:

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset

Master

sudo kubeadm init

Worker

Join cluster

kubeadm join ...
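If the original join command printed by kubeadm init is no longer available, it can be regenerated on the master:

```shell
# Print a fresh join command with a new bootstrap token (run on the master);
# tokens expire after 24 hours by default.
kubeadm token create --print-join-command
```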

Label it as worker

kubectl label node kworker1 node-role.kubernetes.io/worker=worker

Install Network Policy Provider

The following message is printed, prompting to create a pod network:

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Install Weave Net for NetworkPolicy.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

References

Kubernetes kubeadm init fails due to dial tcp 127.0.0.1:10248: connect: connection refused
kubernetes cluster master node not ready
Weave Net for NetworkPolicy

Rescan Proxmox Disks

For various reasons, dangling disk image files can exist in a Proxmox storage folder. Removing them from the Proxmox storage view might not be possible, and they might not be shown in the VM hardware items either.

Use the rescan command

This fixes the following issues:

  • Disk size changes
  • Disk images with no owner

qm rescan
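Plain qm rescan walks all storages and all VMs; it can also be limited to a single VM — a sketch, where the VM ID 100 is an assumption:

```shell
# Rescan only the volumes belonging to VM 100 (hypothetical ID);
# unreferenced images are re-attached as unused disks.
qm rescan --vmid 100
```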

Rename or move old disk

If rescan cannot fix the issue, rename the old disk file or its folder, then restart the VM to confirm the disk file is no longer needed. Then remove the disk.
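A sketch of this rename-and-verify approach for a directory-backed storage — the path, disk file name, and VM ID 100 are all assumptions:

```shell
# Move the suspected dangling disk aside instead of deleting it
# (hypothetical path and file name for a directory-backed storage).
mv /var/lib/vz/images/100/vm-100-disk-1.qcow2 \
   /var/lib/vz/images/100/vm-100-disk-1.qcow2.bak

# Restart the VM; if it boots cleanly, the file was unused
# and the .bak copy can be removed.
qm stop 100 && qm start 100
```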

Convert VMware VM to Proxmox VM

Create a new VM in Proxmox

Linux

  • Create a new VM in Proxmox, and select the correct BIOS.

Windows

  • Create a new VM in Proxmox, and select OVMF (UEFI).

Remove newly created disk

  • Detach disk
  • Remove detached disk

Convert disk

Run the following command to import the disk and convert it from vmdk to qcow2:

qm importdisk <VM_ID> <Virtual Disk>.vmdk <storage> --format qcow2

Here, VM_ID is the numeric ID of the target Proxmox VM. After the import completes, a newly created (unused) disk appears in the VM's hardware list.
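The converted image can be checked before attaching it — a sketch, where the storage path and file name are assumptions for a directory-backed storage:

```shell
# Inspect the imported image (hypothetical path);
# the "file format" line should read qcow2.
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
```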

Add disk

Ubuntu

Double-click the newly created disk, then select VirtIO Block as the device type.

Select Write back as the cache method.

RHEL and Windows

Double-click the newly created disk, then select SATA as the device type.

Select Write back as the cache method.

Start the VM and remove VMware Tools

Ubuntu

apt remove --auto-remove open-vm-tools
apt remove --auto-remove xserver-xorg-video-vmware
apt purge open-vm-tools
apt purge open-vm-tools-desktop

RHEL

yum remove open-vm-tools open-vm-tools-desktop

Reboot the server