Blog

Proxmox VM with AVX support

To install MongoDB in a Proxmox VM, the CPU needs to have AVX support. Otherwise, the following error appears:

WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!

Solution

Change the CPU type to host, which bypasses CPU emulation and exposes the host CPU's flags (including AVX) to the VM.

For a container cluster, such as Docker Swarm or Kubernetes, MongoDB might need to be pinned to a specific worker node whose VM has the CPU type set to host.
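As a sketch, the setting can also be applied from the Proxmox shell (VM ID 100 is an example) and then verified inside the guest:

```shell
# Set the CPU type of VM 100 to "host"; the same setting is available
# in the GUI under Hardware => Processors (100 is an example VM ID)
qm set 100 --cpu host

# Inside the guest, verify that AVX is now exposed
grep -o -m1 'avx' /proc/cpuinfo
```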

Impact

Once the CPU type is set to host, live migration may not work when the cluster nodes have different CPU models, but host gives maximum performance.

References

AVX2
KVM: Which CPU for VM ('host' vs 'kvm64') to use for web load?

Tasks after cloned Ubuntu VM

After cloning an Ubuntu VM, a few tasks are required.

Change IP

Change Hostname

  • Change /etc/hostname
  • Change /etc/hosts
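A minimal sketch of those two changes (OLDNAME and NEWNAME are placeholders for the old and new hostname):

```shell
# Write the new name to /etc/hostname and apply it immediately
sudo hostnamectl set-hostname NEWNAME

# Replace remaining references to the old name in /etc/hosts
sudo sed -i 's/OLDNAME/NEWNAME/g' /etc/hosts
```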

Recreate SSH keys

sudo rm /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server

References

How To: Ubuntu / Debian Linux Regenerate OpenSSH Host Keys

Reset kubernetes master or worker

Reinit

After running kubeadm init, the following error occurred:

dial tcp 127.0.0.1:10248: connect: connection refused.

Run the following commands to reinitialize the Kubernetes master or worker:

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset

Master

sudo kubeadm init

Worker

Join cluster

kubeadm join ...
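If the original join command printed by kubeadm init is no longer available, it can be regenerated on the master:

```shell
# Prints a fresh "kubeadm join ..." command with a new token
sudo kubeadm token create --print-join-command
```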

Label it as worker

kubectl label node kworker1 node-role.kubernetes.io/worker=worker

Install Network Policy Provider

The following message is printed after init; it explains how to create the pod network.

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Install Weave Net for NetworkPolicy.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

References

Kubernetes kubeadm init fails due to dial tcp 127.0.0.1:10248: connect: connection refused
kubernetes cluster master node not ready
Weave Net for NetworkPolicy

Rescan Proxmox Disks

For various reasons, dangling disk image files can exist in Proxmox storage folders. It might not be possible to remove them from Proxmox storage, and they might not be shown among the VM hardware items either.

Use rescan command

This fixes the following issues:

  • Disk size change
  • Disk images with no owner

qm rescan

Rename or move old disk

If rescan cannot fix the issue, rename the old disk file or disk folder, then restart the VM to confirm the disk file is no longer needed. Then remove the disk.
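A sketch of that check, assuming the default local storage path and VM ID 100 (both are examples; adjust to your storage layout):

```shell
# Move the suspect image aside instead of deleting it outright
mv /var/lib/vz/images/100/vm-100-disk-1.raw /var/lib/vz/images/100/vm-100-disk-1.raw.bak

# Restart the VM; if it boots normally, the image was dangling
qm start 100

# Once confirmed, delete the renamed file and rescan
rm /var/lib/vz/images/100/vm-100-disk-1.raw.bak
qm rescan
```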

Migrate TrueNAS VM to Proxmox VM

After installing Proxmox, I also migrated the TrueNAS VMs, an Ubuntu VM and a Windows 10 VM, to Proxmox.

Copy zvol to a file and transfer to Proxmox server

The zvol device is located at /dev/zvol/<zpool_name>/<zvol_name>. Create a disk image using the following command:

dd if=/dev/zvol/pool0/server-xxxxxx of=/tmp/server.raw bs=8m
scp ...

Another way to transfer

dd if=/dev/zvol/.... bs=8192 status=progress | ssh root@proxmox 'dd of=....raw bs=8192'

or

dd if=/dev/zvol/.... bs=8192 status=progress | gzip -1 - | ssh root@proxmox 'dd of=....raw.gz bs=8192'

or

dd if=/dev/zvol/.... bs=8192 status=progress | gzip -1 - | ssh root@proxmox 'gunzip - | dd of=....raw bs=8192'

Transfer raw file into Proxmox server

Create Proxmox VM

  • Create a VM with OVMF (UEFI) if the TrueNAS VM is using UEFI

  • Remove the VM disk

  • Use the following command to import the disk

qm importdisk <vm_id> <raw_file> <storage_id>

For example

qm importdisk 100 vm.raw ds1812-vm_nfs1

  • Go to the VM hardware page

  • Select the unused disk and click the Add button to add the disk into the VM

    • For Linux, select SCSI as the controller
    • For Windows, select SATA

  • Select Options => Boot Order to check the scsi/sata controller
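The boot order can also be set from the Proxmox shell (VM ID 100 and device scsi0 are examples; use sata0 for a Windows guest):

```shell
# Make the imported disk the first boot device
qm set 100 --boot order=scsi0
```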

Boot TrueNAS VM

References

Export Virtual Machine from TrueNAS and Import VM to Proxmox
Migration of servers to Proxmox VE
Additional ways to migrate to Proxmox VE