Document converter - pandoc
To convert files from one markup format into another.
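For example (file names are placeholders; PDF output additionally needs a LaTeX engine installed):
pandoc -f markdown -t html notes.md -o notes.html
pandoc notes.md -o notes.pdf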
Pod
Service
permanent IP address
the lifecycles of Pod and Service are NOT connected
External services: http://my-app-service-ip:port
Internal service: http://db-service-ip:port
Ingress
Deployment
StatefulSet
Volumes
Secrets
ConfigMap
Nodes
API Server
Scheduler
Controller manager
etcd
1 Node K8s cluster
brew update
brew install hyperkit
brew install minikube
kubectl
minikube start --vm-driver=hyperkit
kubectl get nodes
minikube status
kubectl version
kubectl get services
kubectl get pod
kubectl create deployment NAME --image=image [--dry-run] [options]
kubectl create deployment nginx-depl --image=nginx
kubectl get deployment
kubectl get replicaset
kubectl get pod
For example, to change the image version:
kubectl edit deployment nginx-depl
Then change the image version in the editor. To see what happens to the pods, run the following commands:
kubectl get pod
kubectl get replicaset
The old pod has been terminated and a new one created, backed by a new ReplicaSet.
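As a hedged alternative to editing the deployment interactively, the image can also be updated and watched directly (the container name nginx comes from the image-name default of kubectl create deployment):
kubectl set image deployment/nginx-depl nginx=nginx:1.17
kubectl rollout status deployment/nginx-depl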
kubectl logs nginx-depl-66859c8f65-vfjjk
mongodb deployment
kubectl create deployment mongo-depl --image=mongo
kubectl get pod
kubectl logs mongo-depl-67f895857c-fkspm
kubectl describe pod mongo-depl-67f895857c-fkspm
Run shell in pod
kubectl exec -it mongo-depl-67f895857c-fkspm -- bin/bash
kubectl delete deployment mongo-depl
kubectl get pod
kubectl get replicaset
Create configuration file called nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 8080
Note: The first spec is for deployment, the inner spec is for pod.
Apply configuration
kubectl apply -f nginx-deployment.yaml
kubectl get pod
kubectl get deployment
Changes to the deployment can be made by editing the deployment file and applying it again.
For service
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
etcd holds the current status of every K8s component, which is compared against the desired state in the configuration.
Note: Can use YAML data validator to validate the YAML file.
In the previous example, the Pod configuration is inside the Deployment configuration under spec, and is named template.
In the deployment file:
Pod label: the label in the template section
selector matchLabels: tells the deployment which labels to match to create the connection
Deployment label: used by the service selector
In the service file:
The deployment file defines the ports of the Pods; the service file connects to those Pod ports.
For example: DB Service -> port: 80 -> nginx Service -> targetPort: 8080 -> Pod
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
kubectl get pod
kubectl get service
kubectl describe service nginx-service
The Endpoints are the Pod addresses and ports that the service forwards to; the Pod IPs can be found using the -o wide option:
kubectl get pod -o wide
To get the deployment status stored in etcd
kubectl get deployment nginx-deployment -o yaml
kubectl delete -f nginx-service.yaml
Check all components
kubectl get all
apiVersion: v1
kind: Secret
metadata:
name: mongodb-secret
type: Opaque
data:
mongo-root-username: dXNlcm5hbWU=
mongo-root-password: cGFzc3dvcmQ=
To generate the base64 string for username and password
echo -n 'username' | base64
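For example, for the sample strings used above (the --decode flag verifies the result; some systems use -D instead):
echo -n 'username' | base64        # dXNlcm5hbWU=
echo -n 'password' | base64        # cGFzc3dvcmQ=
echo 'dXNlcm5hbWU=' | base64 --decode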
kubectl apply -f mongodb-secret.yaml
kubectl get secret
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
kubectl apply -f mongo.yaml
kubectl get all
kubectl get pod
kubectl get pod --watch
kubectl describe pod mongodb-deployment-78444d94d6-zsrcl
*Note: To put multiple YAML documents into one file, separate them with --- before each new document.
Create the service YAML in the same mongo.yaml file, as the deployment and service belong together.
...
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
kubectl apply -f mongo.yaml
kubectl get service
kubectl describe service mongodb-service
kubectl get pod -o wide
Display service, deployment, replicaset and pod
kubectl get all | grep mongodb
Create a file called mongo-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mongodb-configmap
data:
database_url: mongodb-service
Note: database_url holds the service name; it is only a value, and how it is used depends on the application.
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-express
labels:
app: mongo-express
spec:
replicas: 1
selector:
matchLabels:
app: mongo-express
template:
metadata:
labels:
app: mongo-express
spec:
containers:
- name: mongo-express
image: mongo-express
ports:
- containerPort: 8081
env:
- name: ME_CONFIG_MONGODB_ADMINUSERNAME
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-username
- name: ME_CONFIG_MONGODB_ADMINPASSWORD
valueFrom:
secretKeyRef:
name: mongodb-secret
key: mongo-root-password
- name: ME_CONFIG_MONGODB_SERVER
valueFrom:
configMapKeyRef:
name: mongodb-configmap
key: database_url
kubectl apply -f mongo-configmap.yaml
kubectl apply -f mongo-express.yaml
kubectl get pod
kubectl logs mongo-express-797845bd97-p9grr
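Once the pod is running, one hedged way to confirm that the secret and configmap values were injected (the pod name is just the example from above):
kubectl exec -it mongo-express-797845bd97-p9grr -- sh -c 'env | grep ME_CONFIG'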
Append the following configuration to the mongo-express.yaml file:
...
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000
Note: Set type to LoadBalancer to define an external service, and set nodePort to a value between 30000 and 32767.
kubectl apply -f mongo-express.yaml
kubectl get service
Note: External services are shown with type LoadBalancer; internal services have type ClusterIP, which is the DEFAULT.
minikube service mongo-express-service
kubectl get namespace
kubectl cluster-info
kubectl create namespace my-namespace
kubectl get namespace
Officially: namespaces should not be used for smaller projects.
Common resources can be deployed into a separate namespace, such as the Nginx-Ingress Controller or the Elastic Stack.
Different versions of deployments can then use those common resources, such as a database, the Nginx-Ingress Controller or the Elastic Stack.
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-configmap
data:
db_url: mysql-service.database
Here, database
is the namespace.
Apply
kubectl apply -f mysql-configmap.yaml
kubectl get configmap
kubectl get configmap -n default
This configmap is created in default
namespace.
kubectl apply -f mysql-configmap.yaml --namespace=my-namespace
kubectl get configmap -n my-namespace
This configmap is created in my-namespace
namespace.
or
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-configmap
namespace: my-namespace
data:
db_url: mysql-service.database
Some resources cannot be created at the namespace level, such as Volumes and Nodes.
kubectl api-resources --namespaced=false
kubectl api-resources --namespaced=true
brew install kubectx
kubens
kubens my-namespace
This changes the active namespace from default to my-namespace.
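Without installing kubectx/kubens, plain kubectl can do the same (a hedged equivalent):
kubectl config set-context --current --namespace=my-namespace
kubectl config view --minify | grep namespace: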
Normal practice is
browser -> entrypoint -> Ingress Controller -> Internal services
apiVersion: v1
kind: Service
metadata:
  name: myapp-external-service
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 35010
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: myapp-ingress
spec:
rules:
- host: myapp.com
http:
paths:
- backend:
serviceName: myapp-internal-service
servicePort: 8080
rules is the routing rules
host is the host specified in the browser
paths is the path in the URL after the host
serviceName is the backend service name
http is the internal communication, not the external service
Example of internal service:
apiVersion: v1
kind: Service
metadata:
name: myapp-internal-service
spec:
selector:
app: myapp
ports:
- protocol: TCP
port: 8080
targetPort: 8080
For external service vs internal service:
myapp.com should be a valid domain address.
The entrypoint can be one of the nodes in the K8s cluster or an ingress server outside the K8s cluster.
The Ingress Controller Pod evaluates and processes the Ingress rules.
The following command automatically starts the K8s Nginx implementation of the Ingress Controller:
minikube addons enable ingress
kubectl get pod -n kube-system
The following pod will be running:
nginx-ingress-controller-xxxx
kubectl get ns
For example, configure external access to kubernetes-dashboard in dashboard-ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 80
This is to divert all requests to dashboard.com to backend service kubernetes-dashboard at port 80
Note: Updated version is as below
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: dashboard-ingress
namespace: kubernetes-dashboard
spec:
rules:
- host: dashboard.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kubernetes-dashboard
port:
number: 443
kubectl apply -f dashboard-ingress.yaml
kubectl get ingress -n kubernetes-dashboard
kubectl get ingress -n kubernetes-dashboard --watch
Define dashboard.com in /etc/hosts
192.168.64.5 dashboard.com
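The IP above is environment-specific; with minikube it can be looked up with:
minikube ip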
Default backend: Whenever a request comes into the cluster that is not mapped to any backend service (no rule matches it), the default backend handles the request. It returns a default response, such as a file-not-found page, or redirects to some other service.
$ kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
...
Default backend: default-http-backend:80 (<none>)
...
To configure the default backend, create an internal service with the same name, default-http-backend, on port 80, to return a custom response.
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
spec:
selector:
app: default-response-app
ports:
- protocol: TCP
port: 80
targetPort: 8080
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: simple-fanout-example
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myapp.com
http:
paths:
- path: /analytics
backend:
serviceName: analytics-service
servicePort: 3000
- path: /shopping
backend:
serviceName: shopping-service
servicePort: 8080
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: analytics.myapp.com
    http:
      paths:
      - backend:
          serviceName: analytics-service
          servicePort: 3000
  - host: shopping.myapp.com
    http:
      paths:
      - backend:
          serviceName: shopping-service
          servicePort: 8080
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-internal-service
          servicePort: 8080
apiVersion: v1
kind: Secret
metadata:
name: myapp-secret-tls
namespace: default
data:
tls.crt: base64 encoded cert
tls.key: base64 encoded key
type: kubernetes.io/tls
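Instead of hand-encoding the certificate and key, the same secret can be generated from files (the file paths here are placeholders):
kubectl create secret tls myapp-secret-tls --cert=path/to/tls.crt --key=path/to/tls.key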
Note:
Helm is the package manager for Kubernetes: it packages YAML files and distributes them in public and private repositories.
Search for charts using the following command or Helm Hub:
helm search <keyword>
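For example, with Helm 3 (the bitnami repository URL is just one common public repo, used here as an assumption):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo mongodb
helm search hub mongodb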
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.container.image }}
port: {{ .Values.container.port }}
The values are from values.yaml
name: my-app
container:
name: my-app-container
image: my-app-image
port: 9001
Here, .Values is an object created from the values defined either in a YAML file or with the --set flag.
mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
The mychart/ folder name is also the name of the chart.
Chart.yaml has the meta information about the chart, such as name, dependencies and version.
values.yaml has the values for the template files.
charts/ holds the chart dependencies.
templates/ folder holds the actual template files.
Commands
helm install <chartname>
The values from values.yaml, merged with any overrides, are saved in the .Values object.
--values option:
helm install --values=my-values.yaml <chartname>
For example, the my-values.yaml file can override the version value.
--set option:
helm install --set version=2.2.0 <chartname>
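To preview the rendered manifests before installing, a hedged check (chart path, release name and values file are placeholders):
helm template ./mychart --values=my-values.yaml
helm install my-release ./mychart --values=my-values.yaml --dry-run --debug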
Helm 2 uses a server called Tiller. When the client runs the following install command, it sends requests to Tiller, which runs inside the Kubernetes cluster.
helm install <chartname>
Whenever a deployment is created or changed, Tiller stores a copy of the configuration for release management.
When the upgrade command below is run, the changes are applied to the existing deployment instead of creating a new one.
helm upgrade <chartname>
It can also handle rollbacks:
helm rollback <chartname>
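Rollback targets a specific revision, which can be listed first (the release name is a placeholder):
helm history my-release
helm rollback my-release 1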
Downsides:
Tiller has too much power inside the K8s cluster, which is a security issue.
In Helm 3, Tiller was removed, which solves the security concern.
A Persistent Volume is a cluster resource, created via a YAML file.
What type of storage do you need?
You need to create and manage the storage yourself.
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-name
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.0
nfs:
path: /dir/path/on/nfs/server
server: nfs-server-ip-address
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 400Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4
Note: The gcePersistentDisk section holds the Google Cloud-specific parameters.
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-pv
spec:
capacity:
storage: 100Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /mnt/disks/ssd1
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node
Local volumes should not be used as PV for database persistence, because they are tied to a specific node.
The K8s Admin sets up and maintains the cluster, and makes sure it has enough resources.
The K8s User deploys applications in the cluster.
Define a PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: pvc-name
spec:
storageClassName: manual
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
Use that PVC in the Pod configuration
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc-name
The PVC must be in the same namespace as the Pod.
The advantage of having separate PV and PVC is that volume usage is abstracted: the user doesn't need to know the actual storage location and its type, which is easier for developers.
Different volumes with different types can be configured in the same Pod, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic
spec:
  selector:
    matchLabels:
      app: elastic
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
      - image: elastic:latest
        name: elastic-container
        ports:
        - containerPort: 9200
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /var/lib/data
        - name: es-secret-dir
          mountPath: /var/lib/secret
        - name: es-config-dir
          mountPath: /var/lib/config
      volumes:
      - name: es-persistent-storage
        persistentVolumeClaim:
          claimName: es-pv-claim
      - name: es-secret-dir
        secret:
          secretName: es-secret
      - name: es-config-dir
        configMap:
          name: es-config-map
A Storage Class provisions Persistent Volumes dynamically when a PersistentVolumeClaim claims storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: storage-class-name
provisioner: kubernetes.io/aws-ebs
parameters:
type: io1
iopsPerGB: "10"
fsType: ext4
In PVC config
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mypvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
storageClassName: storage-class-name
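After applying the PVC, the dynamically provisioned objects can be inspected with:
kubectl get storageclass
kubectl get pvc
kubectl get pv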
Stateful application: a database or any application that stores data, deployed using a StatefulSet.
Stateless application: deployed using a Deployment, which replicates Pods and load balances to any Pod.
Pod Identity
Scaling database applications
Pod state
Pod names are fixed and ordered: $(statefulset name)-$(ordinal), e.g. mysql-0, mysql-1, mysql-2; here mysql-0 is the master, the others are workers.
Each Pod also gets a fixed individual DNS name: ${pod name}.${governing service domain}, e.g. mysql-0.svc2, mysql-1.svc2, mysql-2.svc2.
Sticky identity
Replicating stateful apps
Note: So stateful applications are not perfect for containerized environments.
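For reference, a minimal StatefulSet sketch (the names, image, password and storage size are assumptions for illustration; it expects a headless service called mysql-headless):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql-headless       # headless service that gives each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD  # hardcoded only for this sketch; use a Secret in practice
          value: "password"
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:              # each replica gets its own PersistentVolumeClaim (data-mysql-0, data-mysql-1, ...)
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi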
apiVersion: v1
kind: Service
metadata:
name: microservice-one-service
spec:
selector:
app: microservice-one
ports:
- protocol: TCP
port: 3200
targetPort: 3000
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
name: microservice-one
...
spec:
replicas: 2
...
template:
metadata:
labels:
app: microservice-one
spec:
containers:
- name: ms-one
image: my-private-repo/ms-one
ports:
- containerPort: 3000
- name: log-collector
image: my-private-repo/log-col
ports:
- containerPort: 9000
kubectl get pod -o wide
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ms-one-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: microservice-one.com
http:
paths:
- path:
backend:
serviceName: microservice-one-service
servicePort: 3200
Which Pods to forward the request to?
Service
apiVersion: v1
kind: Service
metadata:
name: microservice-one-service
spec:
selector:
app: microservice-one
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: microservice-one
...
spec:
replicas: 3
...
template:
metadata:
labels:
app: microservice-one
For example:
In Service yaml file,
selector:
app: my-app
type: microservice
In Pods
labels:
app: my-app
type: microservice
Then the service matches all replicas of the pods in the deployment.
targetPort: which port to forward to.
The spec.ports.targetPort field in the service YAML file is used:
apiVersion: v1
kind: Service
metadata:
name: microservice-one-service
spec:
selector:
app: microservice-one
ports:
- protocol: TCP
port: 3200
targetPort: 3000
$ kubectl get endpoints
NAME              ENDPOINTS                       AGE
kubernetes        172.104.231.137:6443            15m
mongodb-service   10.2.1.4:27017,10.2.1.5:27017   5m27s
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- name: mongodb
protocol: TCP
port: 27017
targetPort: 27017
apiVersion: v1
kind: Service
metadata:
name: mongodb-service
spec:
selector:
app: mongodb
ports:
- name: mongodb
protocol: TCP
port: 27017
targetPort: 27017
- name: mongodb-exporter
protocol: TCP
port: 9216
targetPort: 9216
When a service exposes multiple ports, the ports must be named.
Headless Service: set spec.clusterIP to None.
This is one solution for clients that need to reach a specific Pod directly (the DNS lookup returns the Pod IPs instead of a single service IP).
For example,
apiVersion: v1
kind: Service
metadata:
name: mongodb-service-headless
spec:
clusterIP: None
selector:
app: mongodb
ports:
- name: mongodb
protocol: TCP
port: 27017
targetPort: 27017
In stateful application, both ClusterIP and Headless services are used together
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.128.0.1 <none> 443/TCP 20m
mongodb-service ClusterIP 10.128.204.105 <none> 27017/TCP 10m
mongodb-service-headless ClusterIP None <none> 27017/TCP 2m8s
For a ClusterIP service, the ClusterIP is only accessible within the cluster; external traffic can only reach it via Ingress.
External => Ingress => ( ClusterIP Service => POD nodes ) == Worker Node
For a NodePort service, external traffic has access to a fixed port on each Worker Node.
External => ( NodePort => ClusterIP Service => POD nodes ) == Worker Node
apiVersion: v1
kind: Service
metadata:
name: ms-service-nodeport
spec:
type: NodePort
selector:
app: microservice-one
ports:
- protocol: TCP
port: 3200
targetPort: 3000
nodePort: 30008
The service is reachable at the ip-address of a Worker Node plus the nodePort.
For example,
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.128.0.1 <none> 443/TCP 20m
mongodb-service ClusterIP 10.128.204.105 <none> 27017/TCP 10m
mongodb-service-headless ClusterIP None <none> 27017/TCP 2m8s
ms-service-nodeport NodePort 10.128.202.9 <none> 3200:30008/TCP 8s
cluster-ip:3200 (within the cluster)
node-ip:30008 (from outside, on each Worker Node)
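A quick way to test the NodePort from outside on minikube (the port comes from the example above):
curl http://$(minikube ip):30008
minikube service ms-service-nodeport --url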
A LoadBalancer service is accessible externally through the cloud provider's LoadBalancer.
The NodePort and ClusterIP services are created automatically by the LoadBalancer service type!
apiVersion: v1
kind: Service
metadata:
name: ms-service-loadbalancer
spec:
type: LoadBalancer
selector:
app: microservice-one
ports:
- protocol: TCP
port: 3200
targetPort: 3000
nodePort: 30010
$ kubectl get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)
kubernetes                 ClusterIP      10.128.0.1       <none>          443/TCP
mongodb-service            ClusterIP      10.128.204.105   <none>          27017/TCP
mongodb-service-headless   ClusterIP      None             <none>          27017/TCP
ms-service-loadbalancer    LoadBalancer   10.128.233.22    172.104.255.5   3200:30010/TCP
ms-service-nodeport        NodePort       10.128.202.9     <none>          3200:30008/TCP
Does running damage the knee joints? What is the correct running posture? 20220306 | 实验现场 CCTV Science and Education
Is Running Actually Bad For Your Knees?
yum update -y
yum install epel-release -y
yum install ansible -y
Add the following lines in /etc/ansible/hosts
[linux]
45.56.72.153
45.79.56.223
[linux:vars]
ansible_user=root
ansible_password=P@ssword123
Edit ansible.cfg
file as below
host_key_checking = false
ansible linux -m ping
ansible linux -a "cat /etc/os-release"
ansible linux -a "reboot"
Playbook => Plays => Tasks
For example, following yaml file iluvnano.yml
---
- name: iluvnano
  hosts: linux
  tasks:
    - name: ensure nano is there
      yum:
        name: nano
        state: latest
Here iluvnano is a Play, and ensure nano is there is a Task.
Run playbook
ansible-playbook iluvnano.yml
If the state is changed to absent in iluvnano.yml and the above command is rerun, nano will be removed.
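A dry run can preview what a playbook would change before applying it:
ansible-playbook iluvnano.yml --check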
In /etc/ansible/hosts
file
[routers]
ios-xe-mgmt-latest.cisco.com
ios-xe-mgmt.cisco.com

[routers:vars]
ansible_user=developer
ansible_password=C1sco12345
ansible_connection=network_cli
ansible_network_os=ios
ansible_port=8181
In /etc/ansible/ansible.cfg
host_key_checking = false
Run commands
ansible routers -m ping
ansible routers -m ios_command -a "commands='show ip int brief'"
Playbook devnet.yml
---
- name: General Config
hosts: routers
tasks:
- name: Add Banner
ios_banner:
banner: login
text: |
Nicolas Cage is the
Tiger King
state: present
- name: Add loopback
ios_interface:
name: Loopback21
state: present
Run following command
ansible-playbook devnet.yml
To remove the changes, update the state
in yaml file
state: absent
Run following command again
ansible-playbook devnet.yml
Then the routers will be modified.
show ip int brief
show run | beg banner
you need to learn Ansible RIGHT NOW!! (Linux Automation)
get started with Ansible Network Automation (FREE cisco router lab)
The cockpit service in CentOS is a web-based graphical interface for server administration.
systemctl enable --now cockpit.socket
Use the browser URL https://<server_address>:9090/.
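If the page is unreachable, the port may be blocked; assuming firewalld is in use, it can be opened with:
firewall-cmd --permanent --add-service=cockpit
firewall-cmd --reload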
After CentOS 8 boots up, the following error appears in /var/log/messages when starting systemd-modules-load.service:
Mar 1 15:48:40 centos kernel: ipmi_si: IPMI System Interface driver
Mar 1 15:48:40 centos kernel: ipmi_si: Unable to find any System Interface(s)
Mar 1 15:48:40 centos systemd-modules-load[561]: Failed to insert 'ipmi_si': No such device
The ipmi_si module is designed for physical servers with a remote control interface (IPMI), but this CentOS 8 is running in a VM.
Create the /etc/modprobe.d/blacklist-ipmi.conf file with the following lines:
blacklist ipmi_si
blacklist ipmi_devintf
blacklist ipmi_msghandler
blacklist ipmi_ssif
blacklist ipmi_watchdog
blacklist ipmi_poweroff
blacklist acpi_ipmi
blacklist ibmaem
blacklist ibmpex
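After creating the blacklist file, a hedged way to verify the fix without a full reboot:
systemctl restart systemd-modules-load.service
systemctl status systemd-modules-load.service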
There is no option in the network configuration GUI menu for a search domain. The following steps can be used to add one.
/etc/sysconfig/network-scripts/ifcfg-<interface_name>
SEARCH=<search_domain>
systemctl restart NetworkManager
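As an alternative to editing the ifcfg file directly, nmcli can set the search domain (interface and domain are placeholders):
nmcli connection modify <interface_name> ipv4.dns-search "<search_domain>"
nmcli connection up <interface_name>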
How to configure static DNS and Search domain for Redhat / CentOS and Redhat Linux
Currently CentOS 9 doesn't support the default KVM/QEMU CPU type; use host as the CPU type instead.
In the Proxmox web interface, there is no option to convert a template back to a VM. To do this, open a Shell session and delete the template: 1 line in the configuration file /etc/pve/qemu-server/<vm_id>.conf.
When listing docker containers with the docker ps -a command, the list is empty, but systemctl status docker shows running processes, and the docker container services are running on the server.
The issue was fixed by removing docker and lxc from snap.
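A sketch of the cleanup, assuming docker and lxd were installed as snaps (package names may differ on a given system):
snap list
snap remove docker
snap remove lxd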