Tag: kubernetes

Learning - Prometheus Exporter

Steps to monitor MongoDB metrics

  • Deploy MongoDB App
  • Deploy MongoDB Exporter
  • Deploy ServiceMonitor

Deployment

minikube start --cpus 4 --memory 8192 --vm-driver hyperkit
helm ls
kubectl get pod
kubectl get svc
kubectl port-forward prometheus-kube-prometheus-prometheus 9090
kubectl port-forward prometheus-grafana 80

ServiceMonitor

ServiceMonitor is a custom Kubernetes resource (a CRD provided by the Prometheus Operator) that tells Prometheus which Services to scrape

kubectl get servicemonitor
kubectl get servicemonitor prometheus-kube-prometheus-grafana -oyaml
...
metadata:
  labels:
    release: prometheus

spec:
  endpoints:
    - path: /metrics
      port: service
  selector:
    matchLabels:
      app.kubernetes.io/instance: prometheus
      app.kubernetes.io/name: grafana

CRD configuration

$ kubectl get crd
...
prometheuses.monitoring.coreos.com ...
...
$ kubectl get prometheuses.monitoring.coreos.com -oyaml
...
spec:
  serviceMonitorSelector:
    matchLabels:
      release: prometheus
...

Deploy MongoDB

mongodb-without-exporter.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
kubectl apply -f mongodb-without-exporter.yaml
kubectl get pod

Deploy MongoDB Exporter

An exporter translates an application's own metrics into a format Prometheus understands.

Target (MongoDB App) => Exporter fetches metrics => converts them to the correct format => exposes /metrics => Prometheus Server scrapes

  • Separate deployment - No need to change config files

A MongoDB exporter (mongodb-exporter) can be downloaded from the Prometheus exporter list or from Docker Hub.

Exporter Site

Exporters can be downloaded from https://prometheus.io/docs/instrumenting/exporters

Node exporter - translates the metrics of the cluster Nodes and exposes /metrics

prometheus-prometheus-node-exporter-8qvwn

Components for exporter

  • Exporter application - exposes the /metrics endpoint
  • Service - for connecting to the exporter
  • ServiceMonitor - so the exporter can be discovered by Prometheus (see the sketch below)
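
If the exporter were deployed manually instead of via the Helm chart below, the ServiceMonitor would be the glue piece. A minimal sketch (the Service labels and the port name here are illustrative assumptions):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: mongodb-exporter
  labels:
    release: prometheus            # so the Prometheus serviceMonitorSelector picks it up
spec:
  endpoints:
    - path: /metrics
      port: metrics                # must match a named port on the exporter Service
  selector:
    matchLabels:
      app: mongodb-exporter        # labels set on the exporter Service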

Helm chart for exporter

Search for mongodb-exporter helm chart

https://github.com/prometheus-community/helm-charts

Override values using chart parameters

helm show values <chart-name>

Add Helm repo

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm show values prometheus-community/prometheus-mongodb-exporter > values.yaml

Override values in values.yaml

mongodb:
  uri: "mongodb://mongodb-service:27017"

serviceMonitor:
  additionalLabels:
    release: prometheus

With this label, Prometheus automatically discovers the new ServiceMonitor in the cluster.

$ helm install mongodb-exporter prometheus-community/prometheus-mongodb-exporter -f values.yaml
...
$ helm ls
mongodb-exporter
...
$ kubectl get pod
...
mongodb-exporter-prometheus-mongodb-exporter-75...
...
$ kubectl get svc
...
mongodb-exporter-prometheus-mongodb-exporter
...
$ kubectl get servicemonitor
...
mongodb-exporter-prometheus-mongodb-exporter
...
$ kubectl get servicemonitor mongodb-exporter-prometheus-mongodb-exporter -o yaml
...
metadata:
  labels:
    release: prometheus
...

Check endpoint /metrics

$ kubectl get svc
...
mongodb-exporter-prometheus-mongodb-exporter
...
kubectl port-forward service/mongodb-exporter-prometheus-mongodb-exporter 9216

Access http://127.0.0.1:9216/metrics
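
To quickly verify the exporter is serving data while the port-forward above is running (mongodb_up is one of the metrics this exporter typically exposes; treat the exact metric name as an assumption):

curl -s http://127.0.0.1:9216/metrics | grep mongodb_up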

The mongodb-exporter is added as a target in Prometheus because the ServiceMonitor carries the label release: prometheus and is therefore auto-discovered.

MongoDB metrics data in Grafana UI

kubectl get deployment
kubectl port-forward deployment/prometheus-grafana 3000

References

Prometheus Monitoring - Steps to monitor third-party apps using Prometheus Exporter | Part 2

Learning - Setup Prometheus Monitoring on Kubernetes

Prometheus Server

  • Data Retrieval Worker - Retrieval - pull metrics data
  • Time Series Database - Storage - stores metrics data
  • Accepts PromQL queries - HTTP Server - accepts queries

Alertmanager

Prometheus Server => push alerts => Alertmanager => Email, Slack, etc.

Prometheus UI

  • Prometheus Web UI

  • Grafana, etc.

  • Visualize the scraped data in UI

Deployment

How to deploy the different parts in Kubernetes cluster?

  • Creating all configuration YAML files yourself and execute them in right order

    • inefficient
    • lot of effort
  • Using an operator

    • Manager of all Prometheus components
    • Find Prometheus operator
    • Deploy in K8s cluster
  • Using Helm chart to deploy operator

    • maintained by Helm community
    • Helm: initial setup
    • Operator: manage setup

Setup with Helm chart

  • Clean Minikube state
$ kubectl get pod
$ helm install prometheus stable/prometheus-operator
$ kubectl get pod
NAME ...
alertmanager-prometheus-prometheus-oper-alertmanager-0
prometheus-grafana-67...
prometheus-kube-state-metrics-c6...
prometheus-prometheus-node-exporter-jr...
prometheus-prometheus-oper-operator-78...
prometheus-prometheus-prometheus-oper-prometheus-0...
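
Note: the stable Helm repository used above has since been deprecated. A roughly equivalent install today (an assumption about the current chart, not what these notes originally used) would be:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack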

Prometheus Components

kubectl get all

2 StatefulSets

Prometheus Server

statefulset.apps/prometheus-prometheus-prometheus-oper-prometheus

Alertmanager

statefulset.apps/alertmanager-prometheus-prometheus-oper-alertmanager

3 Deployments

Prometheus Operator - creates the Prometheus and Alertmanager StatefulSets

deployment.apps/prometheus-prometheus-oper-operator

Grafana

deployment.apps/prometheus-grafana

Kube State Metrics

deployment.apps/prometheus-kube-state-metrics
  • own Helm chart
  • dependency of this Helm chart
  • scrapes K8s components - K8s infrastructure monitoring

3 ReplicaSets

Created by the Deployments

replicaset.apps/prometheus-prometheus-oper-operator...
replicaset.apps/prometheus-grafana...
replicaset.apps/prometheus-kube-state-metrics...

1 DaemonSet

  • Node Exporter DaemonSet
daemonset.apps/prometheus-prometheus-node-exporter

DaemonSet runs on every Worker Node

  • connects to Server
  • translates Worker Node metrics to Prometheus metrics - CPU usage, load on server

Completed tasks

  • Monitoring Stack
  • Configuration for your K8s cluster
  • Worker Nodes monitored
  • K8s components monitored

ConfigMaps

kubectl get configmap
  • configurations for different parts
  • managed by operator
  • how to connect to default metrics

Secrets

kubectl get secret
  • for Grafana

  • for Prometheus

  • for Operator

  • certificates

  • username & passwords
    ...

CRDs

kubectl get crd

extension of Kubernetes API

  • custom resource definitions

Describe components

kubectl describe = container/image information

kubectl get statefulset
kubectl describe statefulset prometheus-prometheus-prometheus-oper-prometheus > prom.yaml
kubectl describe statefulset alertmanager-prometheus-prometheus-oper-alertmanager > alert.yaml
kubectl get deployment
kubectl describe deployment prometheus-prometheus-oper-operator > oper.yaml

StatefulSet oper-prometheus

Containers:

  • prometheus
    • Image: quay.io/prometheus/prometheus:v2.18.1
    • Port: 9090/TCP
    • Mounts: where Prometheus gets its configuration data mounted into the Prometheus Pod
      • /etc/prometheus/certs
      • /etc/prometheus/config_out
      • /etc/prometheus/rules/...
      • /prometheus
    • The mounted files include
      • the configuration file: what endpoints to scrape
      • addresses of the applications that expose /metrics
      • the rules configuration file: alerting rules, etc.

The two sidecar/helper containers (*-reloader) are responsible for reloading Prometheus when its configuration files change.

  • prometheus-config-reloader

    • Image: quay.io/coreos/prometheus-config-reloader:v0.38.1
    • reloader-url: http://127.0.0.1:9090/-/reload
    • config-file: /etc/prometheus/config/prometheus.yaml.gz
  • rules-configmap-reloader

ConfigMap and Secret (States):

kubectl get configmap
kubectl get secret

In prom.yaml,

  • Args: --config-file=/etc/prometheus/config
  • Mounts:
    • /etc/prometheus/config from config
    • /etc/prometheus/config_out from config_out
  • Volumes: config, it is a secret
kubectl get secret prometheus-prometheus-prometheus-oper-prometheus -o yaml > secret.yaml
apiVersion: v1
data:
  prometheus.yaml.gz: ....
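
To inspect the generated Prometheus configuration, the gzipped value can be pulled out of the secret and decompressed (a sketch; the dots in the key must be escaped in the jsonpath expression, and base64 -d assumes GNU coreutils):

kubectl get secret prometheus-prometheus-prometheus-oper-prometheus -o jsonpath='{.data.prometheus\.yaml\.gz}' | base64 -d | gunzip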

In rules file rules-configmap-reloader

Mounts: /etc/prometheus/rules/prometheus-prometheus-prometheus-oper-prometheus-rulefiles-0 from prometheus-prometheus-prometheus-oper-prometheus-rulefiles-0

Volumes: ConfigMap prometheus-prometheus-prometheus-oper-prometheus-rulefiles-0

kubectl get configmap prometheus-prometheus-prometheus-oper-prometheus-rulefiles-0 -o yaml > config.yaml
  • config.yaml rules file
apiVersion: v1
data:
  default-prometheus-prometheus-oper-alertmanager.rules.yaml: |
    groups:
    - name: alertmanager.rules
      rules:
      - alert: AlertmanagerConfigInconsistent
...

StatefulSet alertmanager

Containers:

  • alertmanager

    • Image: quay.io/prometheus/alertmanager:v0.20.0
    • config.file: /etc/alertmanager/config/alertmanager.yaml
  • config-reloader

    • Image: docker.io/jimmidyson/configmap-reload:v0.3.0

Operator prometheus-operator

Containers:

  • prometheus-operator (orchestrator of monitoring stack)

    • Image: quay.io/coreos/prometheus-operator:v0.38.1
  • tls-proxy

Tasks

  • How to add/adjust alert rules?

  • How to adjust Prometheus configuration?

Access Grafana

$ kubectl get service
...
prometheus-grafana   ClusterIP ...

ClusterIP = Internal Services

$ kubectl get deployment
...
prometheus-grafana
...

$ kubectl get pod
...
prometheus-grafana-67....
...

$ kubectl logs prometheus-grafana-67... -c grafana
...
... user=admin
...
... address=[::]:3000 ...
...

port: 3000
default user: admin

$ kubectl port-forward deployment/prometheus-grafana 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Then Grafana can be accessed via http://localhost:3000

The default admin password is "prom-operator", which can be found in the chart documentation: https://github.com/helm/charts/tree/master/stable/prometheus-operator#...
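
It can also be read from the Grafana secret created by the chart (assuming the secret is named prometheus-grafana and the key is admin-password, as it is in this setup):

kubectl get secret prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d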

$ kubectl get pod
...
prometheus-kube-state-metrics-c6...
prometheus-prometheus-node-exporter-jr...
...

Prometheus UI

$ kubectl get pod
...
prometheus-prometheus-prometheus-oper-prometheus-0
...

$ kubectl port-forward prometheus-prometheus-prometheus-oper-prometheus-0 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090

Then the Prometheus UI can be accessed via http://localhost:9090/.

Summary

  • Deployed Prometheus stack using Helm
    • easy deployment process
  • Overview of what these different components are and do
  • Configure additional metrics endpoint

References

Setup Prometheus Monitoring on Kubernetes using Helm and Prometheus Operator | Part 1

Learning - Kubernetes Operator

Used for Stateful Applications on K8s

Stateless Applications on K8s

Control loop

Observe => Check Differences => Take Action => Observe ...

  • Recreate died pods
  • restart updated pods

Stateful Applications WITHOUT Operator

Data Persistence

  • more "hand-holding" needed

  • throughout whole lifecycle

  • all 3 replicas are different

  • own state and identity

  • order important

  • Process different for each application

  • So, no standard solution

  • manual intervention necessary

  • people, who "operate" these applications

  • cannot achieve automation, self-healing

Stateful application WITH Operator

To manage stateful application

Replaces human operator with software operator.

  • How to deploy the app?

  • How to create cluster of replicas?

  • How to recover?

  • tasks are automated and reusable

  • One standard automated process

  • more complex/more environments => more benefits

Control loop mechanism

watch for changes

Observe => Check Differences => Take Action => Observe ...

It is custom control loop

makes use of CRDs

Custom Resource Definitions

  • custom K8s component (extends K8s API)

Your own custom component

domain/app-specific knowledge

CRD's, StatefulSet, ConfigMap, Service, ...

automates entire lifecycle of the app it operates
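
As a purely illustrative sketch (all names here are hypothetical), a CRD plus a custom resource that an operator would watch could look like this:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: MyApp
    plural: myapps
    singular: myapp
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
---
# A custom resource of that kind; the operator's control loop reconciles it
apiVersion: example.com/v1
kind: MyApp
metadata:
  name: my-app
spec:
  replicas: 3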

Summary

  • Managing complete lifecycle of stateless apps
    No business logic necessary to: create, update, delete
  • K8s can't automate the process natively for stateful apps
    Operators: prometheus-operator, mysql-operator, postgres-operator, elastic-operator

For example: MySQL

  • How to create the mysql cluster
  • How to run it
  • How to synchronize the data
  • How to update

OperatorHub.io

Operator SDK to create own operator

References

Kubernetes Operator simply explained in 10 mins

Learning - Kubernetes

Components

  • Pod

    • Smallest unit of K8s
    • Abstraction over container
    • Usually 1 application per Pod
    • Each Pod gets its own IP address
    • New IP address on re-creation
  • Service

  • Ingress

  • Deployment

    • blueprint for my-app pods
    • create deployments
    • abstraction of Pods
    • for stateLess Apps
  • StatefulSet

    • For shared storage
    • for stateFUL apps or databases
  • Volumes

    • Local
    • Remote
  • Secrets

  • ConfigMap

  • Nodes

Nodes

Worker

  • Container runtime
  • Kubelet
    • interacts with both the container and node
    • starts the pod with a container inside
  • Kube Proxy - forwards the requests

Master

Functions

  • Schedule pod
  • Monitor
  • Re-schedule/re-start pod
  • Join a new Node

Processes

  • Api Server

    • cluster gateway
    • acts as a gatekeeper for authentication
  • Scheduler

    • Decides on which Node new Pod should be scheduled
  • Controller manager

    • detects cluster state changes
  • etcd

    • is the cluster brain, Key Value Store

Minikube

1 Node K8s cluster

Kubectl - CLI

Install on Mac

brew update
brew install hyperkit
brew install minikube
kubectl

Create cluster

minikube start --vm-driver=hyperkit
kubectl get nodes
minikube status
kubectl version
kubectl get services
kubectl get pod

Create deployment

kubectl create deployment NAME --image=image [--dry-run] [options]
kubectl create deployment nginx-depl --image=nginx
kubectl get deployment
kubectl get replicaset
kubectl get pod

Change deployment

For example, change version of image

kubectl edit deployment nginx-depl

Then change the image version. To see what happens to the Pods, run the following commands

kubectl get pod
kubectl get replicaset

The old Pod has been deleted and a new one has been created.

Check logs

kubectl logs nginx-depl-66859c8f65-vfjjk

Create mongodb deployment

kubectl create deployment mongo-depl --image=mongo
kubectl get pod
kubectl logs mongo-depl-67f895857c-fkspm
kubectl describe pod mongo-depl-67f895857c-fkspm

Debug

Run shell in pod

kubectl exec -it mongo-depl-67f895857c-fkspm -- bin/bash

Delete deployment

kubectl delete deployment mongo-depl
kubectl get pod
kubectl get replicaset

Configuration file

Deployment

Create configuration file called nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 8080

Note: The first spec is for deployment, the inner spec is for pod.

Apply configuration

kubectl apply -f nginx-deployment.yaml
kubectl get pod
kubectl get deployment

Changing the deployment can be done by editing the deployment file and applying it again.

For service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

3 parts of configuration

  • metadata
    • labels: key/value pairs
  • specification
    • selectors - matchLabels: defines which labels are matched
  • status
    • Kubernetes compares the desired state and the actual state, and finds out the difference
    • Stored in etcd

Note: Can use YAML data validator to validate the YAML file.

Nested configuration

In previous example, the pod configuration is in deployment configuration under spec, and named as template

Labels

In deployment file

  • Pod label: template label

  • Selector matchLabels: tell the deployment to connect or match all the labels to create the connection

  • Deployment label: Used by service selector

In service file

  • Selector: connect to labels in the deployment and the pod

Ports

In deployment file, define the ports of pods

In service file, connect to the ports of pods

For example: DB Service -> port: 80 -> nginx Service -> targetPort:8080 -> Pod

Create both deployment and services

kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
kubectl get pod
kubectl get service
kubectl describe service nginx-service

The Endpoints are the Pod addresses that the service forwards to, which can be verified against the Pod IPs shown with the -o wide option

kubectl get pod -o wide

To get the deployment status stored in etcd

kubectl get deployment nginx-deployment -o yaml

Delete deployment

kubectl delete -f nginx-service.yaml

MongoDB and Mongo Express Example

  • MongoDB - Internal Service
  • MongoExpress - External Service

Minikube

Check all components

kubectl get all

Secret configuration

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=
  mongo-root-password: cGFzc3dvcmQ=

To generate the base64 string for username and password

echo -n 'username' | base64
kubectl apply -f mongodb-secret.yaml
kubectl get secret

mongodb deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
kubectl apply -f mongo.yaml
kubectl get all
kubectl get pod
kubectl get pod --watch
kubectl describe pod mongodb-deployment-78444d94d6-zsrcl

Internal service

Note: To put multiple YAML documents into one file, separate them with ---

Create the Service YAML in the same mongo.yaml file, as they belong together


...

---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
kubectl apply -f mongo.yaml
kubectl get service
kubectl describe service mongodb-service
kubectl get pod -o wide

Display service, deployment, replicaset and pod

kubectl get all | grep mongodb

ConfigMap

Create a file called mongo-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service

Note: database_url holds just the service name as its value; how it is used depends on the application.

Mongo Express

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: database_url
kubectl apply -f mongo-configmap.yaml
kubectl apply -f mongo-express.yaml
kubectl get pod
kubectl logs mongo-express-797845bd97-p9grr

External service

Append the following configuration to the mongo-express.yaml file

...

---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000

Note: Set type as LoadBalancer to define external service, and set nodePort between 30000-32767

kubectl apply -f mongo-express.yaml
kubectl get service

Note: The external services are shown as LoadBalancer, internal services are defined as ClusterIP which is DEFAULT.

Assign Public IP address in minikube

minikube service mongo-express-service

Namespace

get

kubectl get namespace

4 default namespaces

kube-system

  • Do NOT create or modify in kube-system
  • System processes
  • Master processes

kube-public

  • publicly accessible data
  • A configmap, which contains cluster information
kubectl cluster-info

kube-node-lease

  • heartbeats of nodes
  • each node has associated lease object in namespace
  • determines the availability of a node

default

  • resources you create are located here

create

kubectl create namespace my-namespace
kubectl get namespace

Usage

  • Structure your components
  • Avoid conflicts between teams
  • Share services between different environments
  • Access and Resource Limits on Namespaces Level

Project namespace (isolation)

Officially: Should not use for smaller projects

Staging and Development (shared)

Can deploy common resources into separate namespace, such as Nginx-Ingress Controller, or Elastic Stack.

Blue and Green Deployment (shared)

Different versions of deployments use common resources, such as database, Nginx-Ingress Controller or Elastic Stack.

Namespace reference

  • Secret and ConfigMap cannot be shared.
  • Services can be accessed across namespaces, so a ConfigMap can reference a service in another namespace.
  • Some resources, such as volume and node, cannot be bound to a namespace.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
data:
  db_url: mysql-service.database

Here, database is the namespace.

Apply

kubectl apply -f mysql-configmap.yaml
kubectl get configmap
kubectl get configmap -n default

This configmap is created in default namespace.

kubectl apply -f mysql-configmap.yaml --namespace=my-namespace
kubectl get configmap -n my-namespace

This configmap is created in my-namespace namespace.

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: my-namespace
data:
  db_url: mysql-service.database

List cluster resource

Some resources cannot be created at the namespace level, such as volume and node.

kubectl api-resources --namespaced=false
kubectl api-resources --namespaced=true

Change the active namespace with kubens

brew install kubectx
kubens
kubens my-namespace

This changes the active namespace from default to my-namespace.
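
The same can be done with plain kubectl, without installing kubectx (standard kubectl config commands):

kubectl config set-context --current --namespace=my-namespace
kubectl config view --minify | grep namespace: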

Ingress

Normal practice is

browser -> entrypoint -> Ingress Controller -> Internal services

External Service

apiVersion: v1
kind: Service
metadata:
  name: myapp-external-service
spec:
  selector:
    app: myapp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 35010

Ingress

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - backend:
          serviceName: myapp-internal-service
          servicePort: 8080
  • rules is the routing rules
  • host is the host specified in browser
  • paths is the path in URL after the host
  • serviceName is the backend service name
  • http is the internal communication, not for the external service

Example of internal service:

apiVersion: v1
kind: Service
metadata:
  name: myapp-internal-service
spec:
  selector:
    app: myapp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

For external service vs internal service

  • No nodePort in internal service
  • Instead of Loadbalancer, default type: ClusterIP

Host in Ingress

  • myapp.com should be a valid domain address
  • map domain name to Node's IP address, which is the entrypoint

The entrypoint can be one of the node in k8s cluster or the ingress server outside the k8s cluster.

Ingress Controller

Can be Ingress Controller Pod, evaluates and processes Ingress rules

  • evaluates all the rules
  • manages redirections
  • entrypoint to cluster
  • many third-party implementations
  • K8s Nginx Ingress Controller

Entrypoint

  • Cloud Load Balancer
  • External Proxy Server
    • separate server
    • public IP address and open ports
    • entrypoint to cluster

Sample of Ingress Controller in Minikube

Install

Automatically starts the K8s Nginx implementation of Ingress Controller

minikube addons enable ingress
kubectl get pod -n kube-system

The following pod will be running

nginx-ingress-controller-xxxx

Create ingress rules

kubectl get ns

For example, configure to access kubernetes-dashboard from external

dashboard-ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 80

This is to divert all requests to dashboard.com to backend service kubernetes-dashboard at port 80

Note: Updated version is as below

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: kubernetes-dashboard
              port:
                number: 443
kubectl apply -f dashboard-ingress.yaml
kubectl get ingress -n kubernetes-dashboard
kubectl get ingress -n kubernetes-dashboard --watch

Define dashboard.com in /etc/hosts

192.168.64.5  dashboard.com

Default backend

Default backend: whenever a request comes into the cluster that is not mapped to any backend service (no rule matches it), this default backend handles the request. It returns the default response, such as a "not found" page, or redirects to some other service.

$ kubectl describe ingress dashboard-ingress -n kubernetes-dashboard
...
Default backend: default-http-backend:80 (<none>)
...

To configure the default backend, create an internal service with the same name (default-http-backend) and port 80 that serves the custom response.

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-response-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Multiple paths for same host

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /analytics
        backend:
          serviceName: analytics-service
          servicePort: 3000
      - path: /shopping
        backend:
          serviceName: shopping-service
          servicePort: 8080

Multiple sub-domains or domains

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: analytics.myapp.com
    http:
      paths:
      - backend:
          serviceName: analytics-service
          servicePort: 3000
  - host: shopping.myapp.com
    http:
      paths:
      - backend:
          serviceName: shopping-service
          servicePort: 8080

Configuring TLS Certificate - https

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret-tls
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-internal-service
          servicePort: 8080

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls

Note:

  • Data keys need to be "tls.crt" and "tls.key"
  • Values are file contents, NOT file paths/locations
  • Secret component must be in the same namespace as the Ingress component

Helm

Package Manager for Kubernetes: To package YAML files and distribute them in public and private repositories

For example: Elastic Stack for Logging

  • Stateful Set
  • ConfigMap
  • K8s User with permissions
  • Secret
  • Services

Helm Charts

  • Bundle of YAML Files: All above configuration YAML files are bundled into Helm Chart
  • Create your own Helm Charts with Helm
  • Push them to Helm Repository
  • Download and use existing ones

Such as

  • Database Apps
    • MongoDB
    • Elasticsearch
    • MySQL
  • Monitoring Apps
    • Prometheus

Search using following commands or Helm Hub

helm search <keyword>

Public / Private Registries

Templating Engine

  • Define a common blueprint
  • Dynamic values are replaced by placeholders
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.name }}
spec:
  containers:
  - name: {{ .Values.container.name }}
    image: {{ .Values.container.image }}
    port: {{ .Values.container.port }}

The values are from values.yaml

name: my-app
container:
  name: my-app-container
  image: my-app-image
  port: 9001

Here, .Values is an object created from the values defined in values.yaml (and any overrides).

Values defined either via yaml file or with --set flag.
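
To check the substitution locally before installing, the chart can be rendered with helm template (a sketch, assuming the chart lives in ./mychart):

helm template ./mychart --values my-values.yaml
helm template ./mychart --set container.port=9001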

Usage

  • Practical for CI/CD: in your build you can replace the values on the fly.
  • Deploy the same application across different environments, such as development/staging/production.

Structure

mychart/
  Chart.yaml
  values.yaml
  charts/
  templates/
  • mychart/ folder is the name of the chart as well
  • Chart.yaml has the meta information about the chart, such as name, dependencies, version
  • values.yaml has the values for the template files
  • charts/ holds the chart dependencies
  • templates/ folder has the actual template files
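
Helm can generate this structure as a starting point:

helm create mychart      # scaffolds Chart.yaml, values.yaml, charts/ and templates/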

Commands

helm install <chartname>

Override the default value in values.yaml

The final values will be saved in .Values object

  • Using command line --values option
helm install --values=my-values.yaml <chartname>

For example, the my-values.yaml file can override the version value.

  • Using command line --set option
helm install --set version=2.2.0

Release management

Tiller Helm Version 2

  • Install

Helm 2 ships with a server component called Tiller. When the client runs the install command below, the request is sent to Tiller, which actually runs inside the Kubernetes cluster.

helm install <chartname>

Whenever you create or change a deployment, Tiller stores a copy of the configuration for release management.

  • Upgrade

When the upgrade command below is run, the changes are applied to the existing deployment instead of creating a new one.

helm upgrade <release-name> <chartname>
  • Rollback

Also can handle rollbacks

helm rollback <release-name> [revision]
  • Downsides

    • Tiller has too much power inside of the K8s cluster
    • Security issue

In Helm 3, Tiller was removed, which solves the security concern.

Volumes

Storage requirements

  • Storage that doesn't depend on the pod lifecycle.
  • Storage must be available on all nodes.
  • Storage needs to survive even if cluster crashes.

Persistent Volume

  • a cluster resource

  • created via YAML file

    • kind: PersistentVolume
    • spec: e.g. how much storage?
  • What Type of storage do you need?

  • You need to create and manage them by yourself

Sample of NFS pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.0
  nfs:
    path: /dir/path/on/nfs/server
    server: nfs-server-ip-address

Sample of Google Cloud

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
  labels:
    failure-domain.beta.kubernetes.io/zone: us-central1-a__us-central1-b
spec:
  capacity:
    storage: 400Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-data-disk
    fsType: ext4

Note: The gcePersistentDisk is the Google Cloud parameters

Sample of local storage

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
  • PV outside of the namespaces
  • Accessible to the whole cluster

Local vs. Remote Volume Types

Local volumes violate the 2nd and 3rd storage requirements:

  • they are tied to 1 specific node
  • they do not survive cluster crashes

K8s Administrator and K8s User

  • K8s Admin sets up and maintains the cluster, and makes sure it has enough resources.

  • K8s User deploys application in cluster

Persistent Volume Claim

  • Application has to claim the Persistent Volume

Define a PVC

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-name
spec:
  storageClassName: manual
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Use that PVC in Pods configuration

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: pvc-name

The PVC must be in the same namespace as the Pod that uses it.

The advantage of having separate PV and PVC is that the claim abstracts the volume usage: the Pod doesn't need to know the actual storage location or its type, which is easier for developers.

ConfigMap and Secret

  • They are local volumes
  • They are not created via PV and PVC
  • They are managed by kubernetes itself

This can be done by

  • Create ConfigMap and/or Secret component
  • Mount that into your pod/container

Different volume types

You can configure volumes of different types in the same Pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic
spec:
  selector:
    matchLabels:
      app: elastic
  template:
    metadata:
      labels:
        app: elastic
    spec:
      containers:
      - image: elastic:latest
        name: elastic-container
        ports:
        - containerPort: 9200
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /var/lib/data
        - name: es-secret-dir
          mountPath: /var/lib/secret
        - name: es-config-dir
          mountPath: /var/lib/config
      volumes:
      - name: es-persistent-storage
        persistentVolumeClaim:
          claimName: es-pv-claim
      - name: es-secret-dir
        secret:
          secretName: es-secret
      - name: es-config-dir
        configMap:
          name: es-config-map

Storage Class

Storage Class provisions Persistent Volumes dynamically when PersistentVolumeClaim claims it.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4

StorageBackend is defined in the SC component

  • via "provisioner" attribute
  • each storage backend has own provisioner
  • internal provisioner - "kubernetes.io"
  • external provisioner
  • configure parameters for storage we want to request for PV

Another abstraction level

  • abstracts underlying storage provider
  • parameters for that storage

Storage class usage

In PVC config

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: storage-class-name

StatefulSet

Stateful application: database, application that stores data, deployed using StatefulSet

Stateless application: deployed using Deployment, replicate Pods

Differences

Replicating stateful application is more difficult

  • Replicate stateless application
    • identical and interchangeable
    • created in random order with random hashes
    • one Service that load balances to any Pod
  • Replicate stateful application
    • can't be created/deleted at the same time
    • can't be randomly addressed
    • replica Pods are not identical
    • Pod Identity

Pod Identity

  • sticky identity for each pod
  • create from same specification, but not interchangeable
  • persistent identifier across any re-scheduling: when an old pod is replaced by a new one, the identity remains

Scaling database applications

  • Reading from all pods
  • Writing from one pod only (Master)
  • Continuously synchronizing of the data from Master to Workers
  • Cluster database setup is required for synchronization
  • The new worker always clones the data from PREVIOUS pod, not from a random pod
  • Temporary (non-persistent) storage can theoretically be used by a StatefulSet
    • only replicate data without persistent storage
    • data will be lost when all Pods die
  • Persistent storage should be configured for stateful set
    • Persistent Volume lifecycle isn't tied to other component's lifecycle

Pod state

  • Pod state saves information about pod, such as whether it is master or not, etc.
  • Pod state storage must be shared for all pods.
  • StatefulSet has fixed ordered names, $(statefulset name)-$(ordinal)
    • Pods mysql-0, mysql-1, mysql-2, here mysql-0 is master, others are workers
    • Next Pod is only created if previous is up and running
    • Delete StatefulSet or scale down to 1 replica, deletion in reverse order, starting from the last one
  • DNS includes
    • the loadbalancer Service name, same as with a Deployment
    • individual service name, ${pod name}.${governing service domain}
    • mysql-0.svc2, mysql-1.svc2, mysql-2.svc2
    • predictable pod name
    • fixed individual DNS name
  • Restarts
    • IP address changes
    • name and endpoint stay same
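
A minimal sketch of how these fixed names come about (the names here are assumptions; serviceName must point at a headless governing Service):

apiVersion: v1
kind: Service
metadata:
  name: svc2                    # governing (headless) service
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: svc2             # pods get DNS names mysql-0.svc2, mysql-1.svc2, ...
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD   # for this sketch only
              value: "true"
          ports:
            - containerPort: 3306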

Sticky identity

  • retain state
  • retain role

Replicating stateful apps

  • User need to do
    • Configuring the cloning and data synchronization
    • Make remote storage available
    • Managing and backup

Note: So stateful applications are not perfect for containerized environments

Kubernetes Services

Types

  • ClusterIP Services
  • Headless Services
  • NodePort Services
  • LoadBalancer Services

What is a Service

  • Each Pod has its own IP address
    • Pods are ephemeral - they are destroyed frequently!
  • Service:
    • stable IP address
    • loadbalancing
    • loose coupling
    • within & outside cluster

ClusterIP

  • Default type
apiVersion: v1
kind: Service
metadata:
  name: microservice-one-service
spec:
  selector:
    app: microservice-one
  ports:
    - protocol: TCP
      port: 3200
      targetPort: 3000

Example:

  • microservice app deployed
  • side-car container (collects microservice logs)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-one
  ...
spec:
  replicas: 2
  ...
  template:
    metadata:
      labels:
        app: microservice-one
    spec:
      containers:
      - name: ms-one
        image: my-private-repo/ms-one
        ports:
        - containerPort: 3000
      - name: log-collector
        image: my-private-repo/log-col
        ports:
        - containerPort: 9000
  • IP address from Node's range
kubectl get pod -o wide
  • Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ms-one-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: microservice-one.com
      http:
        paths:
          - path:
            backend:
              serviceName: microservice-one-service
              servicePort: 3200

Service Communication: selector

Which Pods to forward the request to?

  • Pods are identified via selectors
  • key value pairs
  • labels of pods
  • random label names

Service

apiVersion: v1
kind: Service
metadata:
  name: microservice-one-service
spec:
  selector:
    app: microservice-one

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-one
  ...
spec:
  replicas: 3
  ...
  template:
    metadata:
      labels:
        app: microservice-one
  • Svc matches all 3 replicas
  • registers as Endpoints
  • must match ALL the selectors

For example:

In Service yaml file,

selector:
  app: my-app
  type: microservice

In Pods

labels:
  app: my-app
  type: microservice

Then service matches all replicas of pods in deployments

targetPort

Which port to forwards to

  • Pod with multiple ports

The spec:ports:targetPort field in the Service YAML decides which Pod port to forward to

apiVersion: v1
kind: Service
metadata:
  name: microservice-one-service
spec:
  selector:
    app: microservice-one
  ports:
    - protocol: TCP
      port: 3200
      targetPort: 3000

Service Endpoints

  • K8s creates Endpoint object
    • same name as Service
    • keeps track of which Pods are the members/endpoints of the Service
$ kubectl get endpoints
NAME             ENDPOINTS                      AGE
kubernetes       172.104.231.137:6443           15m
mongodb-service  10.2.1.4:27017,10.2.1.5:27017  5m27s

port vs targetPort

  • Service port is arbitrary
  • targetPort must match the port, the container is listening at

Sample of mongodb service

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
      targetPort: 27017

Multi-Port Services

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
      targetPort: 27017
    - name: mongodb-exporter
      protocol: TCP
      port: 9216
      targetPort: 9216

The ports must be named.

Headless Services

Set spec:clusterIP to None

  • Client wants to communicate with 1 specific Pod directly
  • Pods want to talk directly with specific Pod
  • So, not randomly selected
  • Use Case: Stateful applications, like databases
    • Pod replicas are not identical
    • Only Master is allowed to write to DB

One solution

  • Client needs to figure out IP addresses of each Pod
  • Option 1 - API call to K8s API Server (no good)
    • makes the app tied to the K8s API
    • inefficient
  • Option 2 - DNS Lookup
    • DNS Lookup for Service - returns single IP address (ClusterIP)
    • Set ClusterIP to "None" - returns Pod IP address instead

For example,

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service-headless
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
    - name: mongodb
      protocol: TCP
      port: 27017
      targetPort: 27017
  • No cluster IP address is assigned!
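
A DNS lookup shows the difference: resolving the headless service returns the individual Pod IPs (a sketch using a throwaway busybox pod; the image tag is an assumption):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup mongodb-service-headless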

In stateful application, both ClusterIP and Headless services are used together

  • ClusterIP service is used for reading
  • Headless service is used for writing, data synchronization
$ kubectl get svc
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
kubernetes                ClusterIP  10.128.0.1      <none>       443/TCP         20m
mongodb-service           ClusterIP  10.128.204.105  <none>       27017/TCP       10m
mongodb-service-headless  ClusterIP  None            <none>       27017/TCP       2m8s

NodePort Services

For a ClusterIP service, the ClusterIP is only accessible within the cluster; external traffic can only reach it via Ingress.

External => Ingress => ( ClusterIP Service => POD nodes ) == Worker Node

For NodePort service, external traffic has access to fixed port on each Worker Node.

External => ( NodePort => ClusterIP Service => POD nodes ) == Worker Node

apiVersion: v1
kind: Service
metadata:
  name: ms-service-nodeport
spec:
  type: NodePort
  selector:
    app: microservice-one
  ports:
    - protocol: TCP
      port: 3200
      targetPort: 3000
      nodePort: 30008
  • The nodePort range: 30000 - 32767
  • The NodePort service can be accessed via ip-address of Worker Node and nodePort
  • ClusterIP Service is automatically created.

For example,

$ kubectl get svc
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
kubernetes                ClusterIP  10.128.0.1      <none>       443/TCP         20m
mongodb-service           ClusterIP  10.128.204.105  <none>       27017/TCP       10m
mongodb-service-headless  ClusterIP  None            <none>       27017/TCP       2m8s
ms-service-nodeport       NodePort   10.128.202.9    <none>       3200:30008/TCP  8s
  • The ClusterIP service is listening at cluster-ip:3200
  • The NodePort service is listening at node-ip:30008
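
On Minikube, for example, the NodePort service above could be reached via the node IP (assuming the microservice answers plain HTTP on that port):

curl http://$(minikube ip):30008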

LoadBalancer Services

The ClusterIP service becomes accessible externally through a cloud provider's LoadBalancer.

NodePort and ClusterIP Service are created automatically!

apiVersion: v1
kind: Service
metadata:
  name: ms-service-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: microservice-one
  ports:
    - protocol: TCP
      port: 3200
      targetPort: 3000
      nodePort: 30010
  • LoadBalancer Service is an extension of NodePort Service
  • NodePort Service is an extension of ClusterIP Service
$ kubectl get svc
NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP    PORT(S)
kubernetes                ClusterIP  10.128.0.1      <none>         443/TCP
mongodb-service           ClusterIP  10.128.204.105  <none>         27017/TCP
mongodb-service-headless  ClusterIP  None            <none>         27017/TCP
ms-service-loadbalancer   LoadBalancer  10.128.233.22  172.104.255.5  3200:30010/TCP
ms-service-nodeport       NodePort   10.128.202.9    <none>         3200:30008/TCP
  • NodePort Service NOT for external connection
  • Configure Ingress or LoadBalancer for production environment

References

Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]

Helm Basic

Installation

Script

Pros

  • The helm binary is installed to /usr/local/bin, the same location as kubectl, so it can be run by a normal user

Cons

  • No auto update
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Package Manager

Pros

  • Auto update via apt

Cons

  • The binary is installed to /usr/sbin, which is difficult for a normal user to run if that path is not in $PATH
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Snap

Pros

  • No change on system configuration, such as package repo, etc.
  • Easy to remove as well
sudo snap install helm --classic

References

Helm

Kubernetes Service External IPs

After an external service is created in Kubernetes, no external IP is assigned unless the underlying infrastructure supports automatic creation of load balancers and has the respective support in Kubernetes, as is the case with Google Cloud Platform and AWS.

In that case, the service can still be reached on each node's internal IP at the service port, the same as in Docker Swarm.

Minikube

Run the following command to assign an external IP

minikube service <service_name>

Another option is to run minikube tunnel to assign the IP.

kubeadm

Manually assign an IP using the following configuration file

spec:
  type: LoadBalancer
  externalIPs:
  - 192.168.0.10

MetalLB

MetalLB hooks into your Kubernetes cluster, and provides a network load-balancer implementation.
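
A minimal sketch of a Layer 2 MetalLB setup (assumptions: MetalLB is already installed in the metallb-system namespace, the CRD-based configuration of newer MetalLB releases is used, and the address range is free on the local network):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.240-192.168.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool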

References

Kubernetes service external ip pending
Using minikube tunnel
Ingress class
Load Balancer Service type for Kubernetes
Service Mesh - Kubernetes LoadBalancer Service External IP pending
MetalLB
Service Mesh - Build Kubernetes & Istio environment with kubeadm and MetalLB

Reset Kubernetes master or worker

Reinit

After running kubeadm init, the following error occurred.

dial tcp 127.0.0.1:10248: connect: connection refused.

Run the following commands to reset and reinitialize the Kubernetes master or worker

sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset

Master

sudo kubeadm init

Worker

Join cluster

kubeadm join ...

Label it as worker

kubectl label node kworker1 node-role.kubernetes.io/worker=worker

Install Network Policy Provider

The following message is printed, prompting you to deploy a pod network.

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Install Weave Net for NetworkPolicy.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

References

Kubernetes kubeadm init fails due to dial tcp 127.0.0.1:10248: connect: connection refused
kubernetes cluster master node not ready
Weave Net for NetworkPolicy