Convert Proxmox Template to VM
The Proxmox web interface has no option to convert a template back to a VM. To do this, open a Shell session and delete the template: 1 line from the configuration file /etc/pve/qemu-server/<vm_id>.conf.
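For example, from the node's shell (a sketch; <vm_id> is the VM's numeric ID, and backing up the file first is only a precaution):
cp /etc/pve/qemu-server/<vm_id>.conf /root/<vm_id>.conf.bak   # optional backup
sed -i '/^template:/d' /etc/pve/qemu-server/<vm_id>.conf      # remove the "template: 1" line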
The required commands are listed below.
export VAULT_ADDR='https://vault.bx.net:8200'
export VAULT_TOKEN="<ROOT_TOKEN>"
vault token create -field token -policy=ssh-admin-policy
export VAULT_TOKEN="<SSH_ADMIN_TOKEN>"
vault token renew
export VAULT_TOKEN="<SSH_ADMIN_TOKEN>"
vault token lookup
vault write -field=signed_key ssh-client-signer/sign/my-role public_key=@$HOME/.ssh/id_rsa.pub > ~/.ssh/signed-cert.pub
ssh -i ~/.ssh/signed-cert.pub -i ~/.ssh/id_rsa <host>
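To check the signed certificate's validity period and principals, it can be inspected with ssh-keygen:
ssh-keygen -Lf ~/.ssh/signed-cert.pub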
export VAULT_ADDR='https://vault.bx.net:8200'
export VAULT_TOKEN="<ROOT_TOKEN>"
vault read -field=public_key ssh-client-signer/config/ca > /etc/ssh/trusted-user-ca-keys.pem
Add the following lines in /etc/ssh/sshd_config:
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
CASignatureAlgorithms ^ssh-rsa
Note: Comment out the last line (CASignatureAlgorithms) if SSH reports an error.
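After editing sshd_config, restart the SSH daemon so the change takes effect (the service name varies by distribution):
sudo systemctl restart sshd    # the service is named "ssh" on Debian/Ubuntu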
The SSL certificate of the Vault server needs to be trusted by the local client; otherwise, the following error occurs.
Error writing data to ssh-client-signer/sign/my-role: Put "<role_name>": x509: certificate signed by unknown authority
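One way to trust it on a Debian/Ubuntu client (a sketch; the file name vault-ca.crt is an assumption for your CA certificate):
sudo cp vault-ca.crt /usr/local/share/ca-certificates/vault-ca.crt
sudo update-ca-certificates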
When listing Docker containers with the docker ps -a command, the list is empty, but systemctl status docker shows running processes, and the containerized services are still running on the server.
The issue was fixed by removing the docker and lxc snap packages.
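A sketch of the cleanup, assuming the snap-packaged docker and lxd (the snap that provides LXC) are the culprits; check snap list first:
snap list                 # confirm which snaps are installed
sudo snap remove docker
sudo snap remove lxd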
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
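The deployments can be checked afterwards, for example:
kubectl get pods --namespace ingress-nginx
kubectl get svc --namespace ingress-nginx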
Helm can be installed in several ways.
From script
Pros: Helm is installed in /usr/local/bin, the same location as kubectl, and can be run by a normal user.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
From apt
Cons: Helm is installed in /usr/sbin; it is difficult to run as a normal user if that path is not in $PATH.
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
From snap
sudo snap install helm --classic
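Whichever method is used, the installation can be verified with:
helm version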
After an external (LoadBalancer) service is created in Kubernetes, the external IP is not assigned unless the underlying infrastructure supports the automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS.
In such a case, all the internal IPs can still be accessed using the service port, the same as with Docker Swarm.
Run the following command to assign an external IP:
minikube service <service_name>
Another option is to run minikube tunnel to assign the IP.
Manually assign an IP using the following configuration:
spec:
  type: LoadBalancer
  externalIPs:
    - 192.168.0.10
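For context, a complete Service manifest using externalIPs might look like the following sketch (the name, selector, and ports are illustrative assumptions):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 192.168.0.10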
MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation.
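A minimal sketch of configuring MetalLB in layer 2 mode, assuming MetalLB itself is already installed from its official manifests and that 192.168.0.240-192.168.0.250 is a free range on the LAN:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.240-192.168.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool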
Kubernetes service external ip pending
Using minikube tunnel
Ingress class
Load Balancer Service type for Kubernetes
Service Mesh - Kubernetes LoadBalancer Service External IP pending
MetalLB
Service Mesh - Build Kubernetes & Istio environment with kubeadm and MetalLB
The error is shown below:
The validation information class requested was invalid. (0x80070544).
To fix this issue, prefix the username with the target system name. For example:
truenas\backup_user
The error is shown below:
0x80070544: The specified network location cannot be used.
This is because there is no write access to the target folder.
Error code 0x80070544 when attempting to back up Windows 8 onto NAS over Samba
Add the following lines to ~/.vimrc:
filetype plugin indent on
autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab
To install MongoDB in a Proxmox VM, the CPU needs to have AVX support; otherwise, the following error appears:
WARNING: MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!
Change the CPU type to host, which bypasses the CPU emulation so the VM sees the host CPU's features, including AVX.
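For example, on the Proxmox node (a sketch; <vm_id> is the VM's numeric ID, and the VM must be restarted afterwards):
qm set <vm_id> --cpu host     # switch the vCPU model to host passthrough
grep -c avx /proc/cpuinfo     # run inside the VM; non-zero output means AVX is exposed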
For a container cluster such as Docker Swarm or Kubernetes, MongoDB might need to be run on a specific worker node whose CPU type is set to host.
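A minimal Kubernetes sketch for pinning MongoDB to such a node, assuming the node is labelled manually (the label cpu=host and the node name worker-2 are assumptions):
kubectl label node worker-2 cpu=host
spec:
  nodeSelector:
    cpu: host
  containers:
    - name: mongodb
      image: mongo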
Once the CPU type is set to host, live migration may not work because the hosts can have different CPU types, but host gives the maximum performance.
AVX2
KVM: Which CPU for VM ('host' vs 'kvm64') to use for web load?