Fix the Vim auto-indent issue for YAML files
Add the following lines to ~/.vimrc
filetype plugin indent on
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab
To install MongoDB in a Proxmox VM, the CPU needs to have AVX support. Otherwise, the following error appears
WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
Change the CPU type to host, which bypasses the CPU emulation and passes the host CPU features through to the guest.
For container clusters, such as Docker Swarm or Kubernetes, MongoDB might need to run on specific worker nodes whose CPU type is set to host.
Once the CPU type is set to host, live migration may not work because the hosts may have different CPU types, but host gives maximum performance.
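To confirm what the guest currently sees, check the CPU flags from inside a Linux guest. A minimal check (this command is my addition, not part of the original note):

```shell
# Check whether the CPU exposed to the guest advertises AVX (Linux guest).
if grep -qw avx /proc/cpuinfo 2>/dev/null; then
  echo "AVX supported"
else
  echo "AVX not available"
fi
```

If the output changes to "AVX supported" after switching the CPU type to host, MongoDB 5.0+ should start normally.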
AVX2
KVM: Which CPU for VM ('host' vs 'kvm64') to use for web load?
This is required if the host key was replaced on the target server.
ssh-keygen -R HOSTNAME
ssh-keygen -R IP_ADDRESS
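The same flag also works against an alternate known_hosts file via -f, which makes it easy to try safely. A self-contained sketch (the temporary directory, the hostname stale-host, and the generated key are all illustrative):

```shell
# Build a throwaway known_hosts with one entry, then remove it with -R.
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$tmp/id"
printf 'stale-host %s\n' "$(cut -d' ' -f1-2 "$tmp/id.pub")" > "$tmp/known_hosts"
# Remove every entry for stale-host (a .old backup is written alongside)
ssh-keygen -R stale-host -f "$tmp/known_hosts"
grep -q stale-host "$tmp/known_hosts" || echo "entry removed"
```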
Print the join command for adding a node to the cluster
kubeadm token create --print-join-command
After cloning an Ubuntu VM, a few cleanup tasks are required. Update the hostname in
/etc/hostname
/etc/hosts
Then regenerate the SSH host keys so the clone does not share keys with the original
rm /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
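The hostname edits can be sketched end to end; the example below works on copies of the files in a temporary directory so it is safe to run (old-vm and new-vm are placeholder names; on a real clone, edit /etc/hostname and /etc/hosts directly):

```shell
# Rename a cloned machine from "old-vm" to "new-vm" (on temp copies of the files).
tmp=$(mktemp -d)
echo "old-vm" > "$tmp/hostname"
printf '127.0.0.1 localhost\n127.0.1.1 old-vm\n' > "$tmp/hosts"

new=new-vm
old=$(cat "$tmp/hostname")
echo "$new" > "$tmp/hostname"
sed -i "s/\b$old\b/$new/g" "$tmp/hosts"

cat "$tmp/hostname"            # new-vm
grep "$new" "$tmp/hosts"       # 127.0.1.1 new-vm
```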
After running kubeadm init, the following error occurred.
dial tcp 127.0.0.1:10248: connect: connection refused.
This usually means Docker is using the cgroupfs cgroup driver while the kubelet expects systemd. Run the following commands to switch Docker to the systemd cgroup driver, then reinitialize the Kubernetes master or worker.
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset
sudo kubeadm init
Join the cluster
kubeadm join ...
Label the node as a worker
kubectl label node kworker1 node-role.kubernetes.io/worker=worker
The following message is printed; a pod network still needs to be deployed.
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Install Weave Net for NetworkPolicy.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Kubernetes kubeadm init fails due to dial tcp 127.0.0.1:10248: connect: connection refused
kubernetes cluster master node not ready
Weave Net for NetworkPolicy
Dangling disk image files can exist in a Proxmox storage folder for many reasons. Removing them through the Proxmox storage UI might not be possible, and they might not be shown in the VM hardware list either.
Run the following command to fix this
qm rescan
If rescan cannot fix the issue, rename the old disk file or its folder, then restart the VM to confirm the disk file is not necessary. Then remove the disk.
After installing Proxmox, I also migrated the VMs hosted on TrueNAS to Proxmox, both an Ubuntu VM and a Windows 10 VM.
The zpool volume (zvol) device is located at /dev/zvol/<zpool_name>/<zvol_name>. Create a disk image using the following command.
dd if=/dev/zvol/pool0/server-xxxxxx of=/tmp/server.raw bs=8m
scp ...
Alternative ways to transfer the image
dd if=/dev/zvol/.... bs=8192 status=progress | ssh root@proxmox 'dd of=....raw bs=8192'
or
dd if=/dev/zvol/.... bs=8192 status=progress | gzip -1 - | ssh root@proxmox 'dd of=....raw.gz bs=8192'
or
dd if=/dev/zvol/.... bs=8192 status=progress | gzip -1 - | ssh root@proxmox 'gunzip - | dd of=....raw bs=8192'
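The gzip round trip can be exercised locally before pointing it at a real zvol. A minimal sketch with the ssh hop removed (/etc/hostname stands in for the zvol device and a temp file for the target image):

```shell
# Locally verify the dd | gzip | gunzip | dd pipeline round-trips the data intact.
src=/etc/hostname
dst=$(mktemp)
dd if="$src" bs=8192 status=none | gzip -1 - | gunzip - | dd of="$dst" bs=8192 status=none
cmp -s "$src" "$dst" && echo "copy verified"   # copy verified
```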
Create a VM with OVMF (UEFI) if the TrueNAS VM is using UEFI
Remove the default VM disk
Use the following command to import the disk
qm importdisk <vm_id> <raw_file> <storage_id>
For example
qm importdisk 100 vm.raw ds1812-vm_nfs1
Go to the VM hardware page
Select the unused disk and click the Add button to attach it to the VM
Select Options => Boot Order to check the SCSI/SATA controller
Export Virtual Machine from TrueNAS and Import VM to Proxmox
Migration of servers to Proxmox VE
Additional ways to migrate to Proxmox VE