Tag: docker

Setup dnsmasq for DNS, DHCP and TFTP

To set up DNS, DHCP and TFTP services using dnsmasq, each of them needs to be configured separately.

Environment

To ease setup and backup, consider using a docker container to run dnsmasq.

Configure macvlan

As a DHCP server requires broadcast traffic on the local network, a macvlan network can be used for this purpose.

Create a macvlan network on interface eth0 with IP address 192.168.1.250

docker network create -d macvlan -o parent=eth0 --subnet=192.168.1.0/24 --gateway=192.168.1.254 --ip-range=192.168.1.250/32 my_macvlan_250
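
As a quick sanity check, the new network can be inspected to confirm the subnet, gateway and IP range; a minimal sketch using the name chosen above:

docker network ls | grep my_macvlan_250
docker network inspect my_macvlan_250 | grep -E 'Subnet|Gateway|IPRange'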

Configure bridge macvlan

By default, the host machine cannot communicate with the macvlan container attached to its own physical interface; as a result, the DNS server running in dnsmasq will not be accessible from the host machine.

To allow the host machine to also use the DNS service running on the macvlan network, the following configuration is needed. It creates another macvlan interface on the host in bridge mode with IP address 192.168.1.249 and routes traffic for the docker macvlan container at 192.168.1.250 through it.

Add the following lines to /etc/network/interfaces

up ip link add my_macvlan_249 link eth0 type macvlan mode bridge
up ip addr add 192.168.1.249/32 dev my_macvlan_249
up ip link set my_macvlan_249 up
up ip route add 192.168.1.250/32 dev my_macvlan_249
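
Once the dnsmasq container is running (see Start container below), the host-side macvlan can be verified from the host; a minimal sketch, assuming dig (dnsutils) is installed:

ip addr show my_macvlan_249
ping -c 1 192.168.1.250
dig @192.168.1.250 www.example.com +short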

Untested setup

Other setups, such as using a normal bridge network interface on top of the physical network interface, have not been tried, but they may also work.

Start container

Start the container and map the container's /data folder to /app/dnsmasq/data on the host, which is used to save the configuration files

docker run --name dnsmasq -d -it --restart unless-stopped -v /app/dnsmasq/data:/data --network my_macvlan_250 dnsmasq

The above command runs the following command inside the container

dnsmasq -q -d --conf-file=/data/dnsmasq.conf --dhcp-broadcast
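
To confirm the configuration file parses cleanly, dnsmasq's built-in syntax check can be run inside the container; a minimal sketch, assuming the dnsmasq binary is on the container's PATH:

docker exec dnsmasq dnsmasq --test --conf-file=/data/dnsmasq.conf
# expected output: dnsmasq: syntax check OK.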

Troubleshooting dnsmasq

To debug dnsmasq, the following command can be used.

docker logs -f dnsmasq

Since DNS requests arrive constantly from everywhere, the following command can be used to debug only the DHCP service; it filters out lines starting with dnsmasq: .

docker logs -f dnsmasq --since 1m | grep -v -e "^dnsmasq: "

Alternatively, since DHCP log messages start with dnsmasq-dhcp: , they can be selected directly.

docker logs -f dnsmasq --since 1m | grep -e "^dnsmasq-dhcp: "

Note: As suggested in the configuration, commenting out log-queries should disable DNS query logging as well, but it does not seem to have any effect.

#log-queries
log-dhcp

Configure TFTP boot

Configure TFTP server

Enable TFTP server

enable-tftp
tftp-root=/data/tftp
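
To verify that files are served from the TFTP root, a boot file can be fetched from another machine; a minimal sketch, assuming curl is built with TFTP support and a file such as bios/lpxelinux.0 exists under /data/tftp:

curl -o /tmp/lpxelinux.0 tftp://192.168.1.250/bios/lpxelinux.0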

Configure DHCP boot

Sample configuration to select the boot file according to the client-arch option

dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-match=set:efi-x86_64,option:client-arch,9
dhcp-match=set:efi-x86,option:client-arch,6
dhcp-match=set:bios,option:client-arch,0
dhcp-boot=tag:efi-x86_64,efi64/syslinux.efi
dhcp-boot=tag:efi-x86,efi32/syslinux.efi
dhcp-boot=tag:bios,bios/lpxelinux.0

Actual configuration

dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-boot=tag:efi-x86_64,ipxe.efi
#dhcp-boot=tag:efi-x86_64,grubx64.efi

Set a tag for iPXE boot (iPXE sends DHCP option 175), and configure the iPXE options

# set tag IPXEBOOT when the request has option 175
dhcp-match=IPXEBOOT,175
#dhcp-match=set:ipxe,175 # iPXE sends a 175 option.

dhcp-boot=tag:!IPXEBOOT,undionly.kpxe,dnsmasq,192.168.1.250
dhcp-boot=tag:IPXEBOOT,boot.ipxe,dnsmasq,192.168.1.250

# Configure iSCSI for ipxe boot
#dhcp-option=175,8:1:1
#dhcp-option=tag:IPXEBOOT,17,"iscsi:192.168.1.17::::iqn.2012-12.net.bx:ds1812.pxe-ubuntu"
#dhcp-option-force=vendor:175, 190, user
#dhcp-option-force=vendor:175, 191, password

Configure DHCP

Global DHCP configuration: host entries are read from files in the /data/hosts folder, and dhcp-host entries from files in the /data/ethers folder.

no-hosts
hostsdir=/data/hosts
#addn-hosts=/data/banner_add_hosts
dhcp-hostsdir=/data/ethers
dhcp-leasefile=/data/dnsmasq.leases
expand-hosts
dhcp-option=44,192.168.1.250 # set netbios-over-TCP/IP nameserver(s) aka WINS server(s)
#dhcp-option=option:domain-search,bx.net,bianxi.com

DHCP domain and range

The following lines set up DHCP for hosts tagged as home

domain=bx.net,192.168.1.0/24
dhcp-range=tag:home,192.168.1.96,192.168.1.127,255.255.255.0,12h
dhcp-option=tag:home,option:router,192.168.1.254

DHCP mapping

To map a MAC address to an IP address, tag, etc., use dhcp-host. Sample mappings are shown below

dhcp-host=00:1b:77:07:08:af,set:home
dhcp-host=00:26:4a:18:82:c6,192.168.1.9,set:home
dhcp-host=win10,192.168.1.235,set:home

Note: entries in a dhcp-host file, such as /etc/ethers, should not carry the dhcp-host= prefix used in the main configuration file dnsmasq.conf.

00:1b:77:07:08:af,set:home
00:26:4a:18:82:c6,192.168.1.9,set:guest
win10,192.168.1.235,set:home

DHCP reject unknown hosts

Use the following configuration line to ignore all unknown hosts, so every host must be registered with a dhcp-host option.

dhcp-ignore=tag:!known

Guest domain

Another way to deal with unknown hosts is to set up a guest network.

The following lines define a DHCP service for hosts without the home tag

dhcp-range=tag:!home,192.168.1.128,192.168.1.143,255.255.255.0,4h
dhcp-option=tag:!home,option:router,192.168.1.254
dhcp-option=tag:!home,option:domain-name,guest.net
#dhcp-option=tag:!home,option:domain-search,guest.net

Another way is to define a guest network range as below for hosts tagged as guest.

#domain=guest.net,192.168.1.0/24
#dhcp-range=tag:guest,192.168.1.128,192.168.1.143,255.255.255.0,4h
#dhcp-option=tag:guest,option:router,192.168.1.254

#dhcp-host=00:a0:98:5f:9e:81,set:guest

DHCP mapping consideration

The logic of DHCP tags is as follows

  • When a host sends a DHCP request, it starts with one tag, which is the interface name, such as eth0

  • If the host matches a dhcp-host line, it is also tagged as known

  • Tags can be set in various ways

    • Set in a dhcp-host line. For example, the following line sets guest
    dhcp-host=00:a0:98:5f:9e:81,set:guest
    • Set by IP range
    dhcp-range=set:red,192.168.0.50,192.168.0.150
    • Set by matching host attributes
    dhcp-vendorclass=set:red,Linux
    dhcp-userclass=set:red,accounts
    dhcp-mac=set:red,00:60:8C:*:*:*
  • Tags can be used in various ways

    • Used to select an IP range
    dhcp-range=tag:green,192.168.0.50,192.168.0.150,12h
  • Tags can be negated with !

    dhcp-option=tag:!home,option:router,192.168.1.254

DHCP options

DHCP options and their numbers can be found in the DHCP log, as shown below.

dnsmasq-dhcp: 2177430021 available DHCP range: 192.168.1.96 -- 192.168.1.127
dnsmasq-dhcp: 2177430021 available DHCP range: 192.168.1.128 -- 192.168.1.143
dnsmasq-dhcp: 2177430021 vendor class: MSFT 5.0
dnsmasq-dhcp: 2177430021 client provides name: baidu-windows
dnsmasq-dhcp: 2177430021 DHCPREQUEST(eth0) 192.168.1.113 00:a0:98:1d:b0:fc 
dnsmasq-dhcp: 2177430021 tags: home, known, eth0
dnsmasq-dhcp: 2177430021 DHCPACK(eth0) 192.168.1.113 00:a0:98:1d:b0:fc baidu-windows
dnsmasq-dhcp: 2177430021 requested options: 1:netmask, 3:router, 6:dns-server, 15:domain-name, 
dnsmasq-dhcp: 2177430021 requested options: 31:router-discovery, 33:static-route, 43:vendor-encap, 
dnsmasq-dhcp: 2177430021 requested options: 44:netbios-ns, 46:netbios-nodetype, 47:netbios-scope, 
dnsmasq-dhcp: 2177430021 requested options: 119:domain-search, 121:classless-static-route, 
dnsmasq-dhcp: 2177430021 requested options: 249, 252
dnsmasq-dhcp: 2177430021 bootfile name: undionly.kpxe
dnsmasq-dhcp: 2177430021 server name: dnsmasq
dnsmasq-dhcp: 2177430021 next server: 192.168.1.250
dnsmasq-dhcp: 2177430021 broadcast response
dnsmasq-dhcp: 2177430021 sent size:  1 option: 53 message-type  5
dnsmasq-dhcp: 2177430021 sent size:  4 option: 54 server-identifier  192.168.1.250
dnsmasq-dhcp: 2177430021 sent size:  4 option: 51 lease-time  12h
dnsmasq-dhcp: 2177430021 sent size:  4 option: 58 T1  6h
dnsmasq-dhcp: 2177430021 sent size:  4 option: 59 T2  10h30m
dnsmasq-dhcp: 2177430021 sent size:  4 option:  1 netmask  255.255.255.0
dnsmasq-dhcp: 2177430021 sent size:  4 option: 28 broadcast  192.168.1.255
dnsmasq-dhcp: 2177430021 sent size:  6 option: 15 domain-name  bx.net
dnsmasq-dhcp: 2177430021 sent size: 23 option: 81 FQDN  03:ff:ff:62:61:69:64:75:2d:77:69:6e:64:6f...
dnsmasq-dhcp: 2177430021 sent size:  4 option:  6 dns-server  192.168.1.250
dnsmasq-dhcp: 2177430021 sent size:  4 option:  3 router  192.168.1.254
dnsmasq-dhcp: 2177430021 sent size:  4 option: 44 netbios-ns  192.168.1.250

Configure DNS

Set up upstream DNS servers

# DNS Server
server=165.21.83.88
#server=165.21.100.88
server=8.8.8.8

DNS mapping

DNS entries are defined in the same format as the /etc/hosts file

192.168.1.1     host1 host-alias

Sample configuration steps

Add a static IP entry for a known MAC address

In the ethers file, add the following entry for DHCP

44:55:66:77:88:99,192.168.1.222,set:home

In the banner_add_hosts file, add the following entry for DNS

192.168.1.222    cat
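
dnsmasq re-reads hosts and ethers style files on SIGHUP (it does not re-read dnsmasq.conf itself), so after adding the entries above the running container can be told to reload without a full restart; a minimal sketch:

docker kill -s HUP dnsmasq   # clear the DNS cache and reload hosts/ethers files
# or simply restart the container
docker restart dnsmasq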

Firewalld conflict between Docker and KVM

After installing docker, the KVM bridge network could not access anything on the network.

Identify

To establish that the issue came from the firewall and was introduced by docker, the following facts were collected.

  • After rebooting the server, the VM can access the network, and firewalld can be restarted without issue
  • After starting the docker service, the VM can no longer access the network
  • The VM can access the network again after stopping firewalld, but then docker cannot start containers because iptables is not accessible

Issue

No matter how the iptables rules were changed, even accepting all traffic from everywhere, the VM was still isolated.

Commands used

The following commands were used for troubleshooting

Firewalld

In fact, there were no chains, rules, or passthroughs in the firewall-cmd output, yet after stopping firewalld the iptables rules became empty.

systemctl restart firewalld
firewall-cmd --list-all
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -i bridge0 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -o bridge0 -j ACCEPT
firewall-cmd --reload

firewall-cmd --permanent --direct --get-all-chains
firewall-cmd --permanent --direct --get-all-rules
firewall-cmd --permanent --direct --get-all-passthroughs
firewall-cmd --permanent --direct --remove-passthrough ipv4 -I FORWARD -o bridge0 -j ACCEPT

firewall-cmd --get-default-zone
firewall-cmd --get-active-zone
firewall-cmd --get-zones
firewall-cmd --get-services
firewall-cmd --list-all-zones

iptables

iptables -L -v
iptables -L FORWARD -v
iptables -I FORWARD -i br0 -o br0 -j ACCEPT
iptables -I FORWARD -j ACCEPT
iptables -I FORWARD 1 -j ACCEPT
iptables -D FORWARD 1
iptables-save
iptables-restore

others

The following commands were used to collect info and compare the differences before and after.

brctl show
ip a
netstat -rn

Potential issues

The following possibilities may have caused this issue or misled the troubleshooting

  • iptables might not actually be in use on the system, even though its counters were updating.
  • Some rules might not have appeared in the iptables list output

Debugging

For firewalld, FIREWALLD_ARGS=--debug needs to be added into /etc/sysconfig/firewalld.

For iptables, -j LOG --log-prefix "rule description" needs to be added to the iptables rules that require debugging.
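
For example, a temporary LOG rule can be inserted at the top of the FORWARD chain to see whether bridged VM traffic reaches iptables at all; a sketch, assuming bridge0 is the KVM bridge, with matches showing up in the kernel log:

iptables -I FORWARD 1 -i bridge0 -j LOG --log-prefix "FWD-bridge0: "
journalctl -kf | grep "FWD-bridge0: "
# remove the rule when done
iptables -D FORWARD -i bridge0 -j LOG --log-prefix "FWD-bridge0: "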

Suggestions from others

Add ACCEPT rules

Run the following script to add ACCEPT rules

#!/bin/sh

# If I put bridge0 in trusted zone then firewalld allows anything from 
# bridge0 on both INPUT and FORWARD chains !
# So, I've put bridge0 back into the default public zone, and this script 
# adds rules to allow anything to and from bridge0 to be FORWARDed but not INPUT.

BRIDGE=bridge0
iptables -I FORWARD -i $BRIDGE -j ACCEPT
iptables -I FORWARD -o $BRIDGE -j ACCEPT

Conclusion

After much testing, it was found that docker adds rules directly into iptables instead of going through firewalld. This can be observed using the following steps.

  1. Stop both firewalld and docker; iptables has no rules
  2. Start docker; iptables has only docker's rules
  3. Start firewalld; for a short period the LIBVIRT rules appear, then after a few seconds they are replaced by docker's rules

Another test

  1. Stop both firewalld and docker again
  2. Start firewalld, only the LIBVIRT rules appear
  3. Start docker, both docker and LIBVIRT rules appear

One issue was observed during reboot: if both docker and firewalld are enabled, the server might hang during reboot. This may be because the root filesystem is on an iSCSI disk, but it could not be confirmed.

The above behavior shows that docker does not cooperate with firewalld; it periodically inserts its rules directly into iptables, which overrides the firewalld rules.

Solution

Run script

This solution disables firewalld and enables docker

systemctl disable firewalld
systemctl enable docker

Then run the following commands to add iptables rules that allow the traffic

iptables -I FORWARD -i br0 -j ACCEPT
iptables -I FORWARD -o br0 -j ACCEPT

These commands can be put in /etc/rc.local, which is executed during boot.
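
A minimal /etc/rc.local along those lines might look like the following, assuming br0 is the KVM bridge and rc.local is executable and enabled on the system:

#!/bin/sh
# allow forwarding to and from the KVM bridge after boot
iptables -I FORWARD -i br0 -j ACCEPT
iptables -I FORWARD -o br0 -j ACCEPT
exit 0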

Install iptables services

This solution also disables firewalld and enables docker as in the previous solution, then adds two FORWARD rules to the default iptables rules in /etc/sysconfig/iptables as below.

# sample configuration for iptables service
# you can edit this manually or use system-config-firewall
# please do not ask us to add additional ports/services to this default configuration
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
-A FORWARD -o br0 -j ACCEPT
-A FORWARD -i br0 -j ACCEPT
:OUTPUT ACCEPT [0:0]
#-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
#-A INPUT -p icmp -j ACCEPT
#-A INPUT -i lo -j ACCEPT
#-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
#-A INPUT -j REJECT --reject-with icmp-host-prohibited
#-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Then both LIBVIRT and docker add their own rules after the system starts.

Modify firewalld rules

This solution failed last time; it will be tried again later.

firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -i bridge0 -j ACCEPT
firewall-cmd --permanent --direct --passthrough ipv4 -I FORWARD -o bridge0 -j ACCEPT

Feature

If possible, define firewalld rules which cover both LIBVIRT and docker.

References

Configure FirewallD to allow bridged virtual machine network access
Debug firewalld
How to configure iptables on CentOS

Less related topics
Do I need to restore iptable rules everytime on boot?
need iptables rule to accept all incoming traffic

Configure trust self generated CA certificate of docker registry

When a self-generated CA certificate is not trusted by the docker client, the following error occurs

... x509: certificate signed by unknown authority

Install CA certificate for docker only

Docker can pick up a registry CA certificate installed as /etc/docker/certs.d/<registry[:port]>/ca.crt. For example,

/etc/docker/certs.d/my-registry.example.com:5000/ca.crt

Note: If the port is 443, it must be omitted from the directory name. Otherwise, it won't work.
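
For example, assuming the registry CA certificate has been saved locally as ca.crt and the registry name and port above, the per-registry directory can be created as follows:

mkdir -p /etc/docker/certs.d/my-registry.example.com:5000
cp ca.crt /etc/docker/certs.d/my-registry.example.com:5000/ca.crt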

Install CA certificate into system folder

To install a self-generated CA certificate at the operating system level, follow the page below.

Install self generated CA certificate into Linux OS

Restart docker service to take effect

Then restart the docker service after the CA certificate is installed.

systemctl restart docker

Backup docker container using shell script

Backup

Use the following shell script to back up a docker container with a date tag

#!/bin/bash
# backup-docker.sh <container_name> <registry_path>

container=$1                  # <container_name>
repo_prefix=$2                # <registry>/<prefix>
registry=${repo_prefix%%/*}   # registry host is everything before the first /

repo_name=$repo_prefix/$(hostname)/$container
repo_path=$repo_name:$(date +%Y%m%d)

docker commit $container $repo_path
docker login $registry
docker push $repo_path
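
A hypothetical invocation, assuming the script is saved as backup-docker.sh and using the registry and prefix appearing elsewhere in these notes:

./backup-docker.sh dnsmasq registry.bx.net/bianxi
# commits and pushes an image such as registry.bx.net/bianxi/<hostname>/dnsmasq:20210627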

Note: If a certificate error (x509: certificate signed by unknown authority) occurs, follow the page below to install the certificate.

Configure trust self generated ca certificate of docker registry

List repo

Use the following shell command to list the repositories in the docker registry

curl https://bianxi:$PASSWORD@${registry}/v2/_catalog

Sample output

{"repositories":["bianxi/dnsmasq","bianxi/heart/dnsmasq"]}

List tags

Use the following shell command to list the tags for one repository in the docker registry

curl https://bianxi:$PASSWORD@${registry}/v2/${repo}/tags/list

Sample output

{"name":"bianxi/heart/dnsmasq","tags":["20210620","20210621","20210622","20210623","20210624","20210625","20210626","20210627"]}

Get digest for tag

Get the digest by pulling the image

docker pull registry.bx.net/bianxi/heart/dnsmasq:20210624

Sample output

20210624: Pulling from bianxi/heart/dnsmasq
...
22b5d63ad977: Already exists
8e2e66517d7e: Pull complete
Digest: sha256:7535af1f65524f9200b901fc31b9c779819e45c0502ef99605666842a319908f

Verify digest

curl https://bianxi:$PASSWORD@registry.bx.net/v2/bianxi/heart/dnsmasq/manifests/sha256:xxxxxxxxxxxxxxxx
curl -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET https://bianxi:$PASSWORD@registry.bx.net/v2/bianxi/heart/dnsmasq/manifests/20210624 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}'

Delete local repo

docker rmi registry.bx.net/bianxi/heart/dnsmasq:<tag>

Delete tag

curl -X DELETE https://bianxi:$PASSWORD@registry.bx.net/v2/bianxi/heart/dnsmasq/manifests/sha256:xxxxxxxxxxxxxxxx

Run garbage-collect

docker exec registry bin/registry garbage-collect --delete-untagged /etc/docker/registry/config.yml

Restart registry if necessary

docker restart registry

Increase client_max_body_size in NGINX for docker registry

Error "413 Request Entity Too Large" occurred when push image to docker registry.

To fix this issue, add client_max_body_size to the NGINX configuration for the docker registry as below, then restart NGINX.

server {
    listen 443;
    server_name hub.bx.net docker.bx.net registry.bx.net dockerhub.bx.net;

    location /v2/ {
        proxy_pass https://registry.my_bridge/v2/;
        proxy_set_header  Authorization $http_authorization;
        proxy_pass_header Authorization;
    }

    client_max_body_size 100M;
}
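
After editing the configuration, test it and reload NGINX; a minimal sketch, assuming NGINX runs on the host under systemd:

nginx -t
systemctl reload nginx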

TODO: Change wordpress TCP port

After changing the port in the settings and redeploying the dockers, the website became unreachable.

Change port

Update the setting in wordpress

Update docker-compose.yml file

Destroy and recreate the dockers

docker-compose down
docker-compose up -d

Note: Failed

Change back port

Change the port back by updating the option values in the database.

Access the mariadb docker

docker exec -it wp_db_1 bash

Log in to mariadb

mysql -u wordpress -p

Search option value

MariaDB [wordpress]> select * from wp_options where option_value like '%192.168.1.14%';
+-----------+-------------+------------------------+----------+
| option_id | option_name | option_value           | autoload |
+-----------+-------------+------------------------+----------+
|         1 | siteurl     | http://192.168.1.14:80 | yes      |
|         2 | home        | http://192.168.1.14:80 | yes      |
+-----------+-------------+------------------------+----------+
2 rows in set (0.058 sec)

Update value back

MariaDB [wordpress]> update wp_options set option_value='http://192.168.1.14:8080' where option_value='http://192.168.1.14:80';
Query OK, 2 rows affected (0.008 sec)
Rows matched: 2  Changed: 2  Warnings: 0

MariaDB [wordpress]> select * from wp_options where option_value like '%192.168.1.14%';
+-----------+-------------+--------------------------+----------+
| option_id | option_name | option_value             | autoload |
+-----------+-------------+--------------------------+----------+
|         1 | siteurl     | http://192.168.1.14:8080 | yes      |
|         2 | home        | http://192.168.1.14:8080 | yes      |
+-----------+-------------+--------------------------+----------+
2 rows in set (0.058 sec)

MariaDB [wordpress]> quit

Docker Compose – wordpress

Simple steps to start using docker compose to create wordpress dockers.

Installation

Install docker-compose package

Run the following command on ubuntu and armbian servers.

apt install docker-compose

Create dockers

Create a folder named after the project, wp

The project name will be used as part of the docker container names.

mkdir -p /app/wp

Create docker compose file

Use vi to create the file docker-compose.yml in the directory /app/wp

version: "3.3"

services:
  db:
    image: mariadb:latest
    volumes:
      - db_data:/var/lib/mysql
    ports:
      - "3306:3306"
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
  wordpress_data: {}

Run docker compose command

docker-compose up -d

Destroy dockers

Run docker compose command

docker-compose down

Destroy dockers and their volumes

docker-compose down --volumes

TODO: Change docker container mapping port

Change a running container's port mapping, with or without recreating the container.

By recreating container

Stop and commit the running container, then run a new container from the committed image.

This requires changing the image name and knowing the original docker run command parameters.

docker stop test01
docker commit test01 test02
docker run -p 8080:8080 -td test02
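
If the original docker run parameters are unknown, docker inspect can help recover them before recreating the container; a sketch using the test01 container above:

docker inspect test01 --format '{{json .HostConfig.PortBindings}}'
docker inspect test01 --format '{{json .Config.Env}}'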

Modify configuration file

Stop the container and the docker service, then change the container's configuration file hostconfig.json. After that, start the docker service and the container.

This requires updating any recorded docker run command for the container accordingly.

  1. Stop the container and the docker service.
docker stop test01
systemctl stop docker
  2. Edit the hostconfig.json file (a jq sketch for checking the PortBindings entry follows this list)
vi /var/lib/docker/containers/[hash_of_the_container]/hostconfig.json

or the following file when using snap

/var/snap/docker/common/var-lib-docker/containers/[hash_of_the_container]/hostconfig.json
  3. Start the docker service and the container
systemctl start docker
docker start test01
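
The relevant entry in hostconfig.json is PortBindings; a hedged sketch for checking it with jq before and after the edit, assuming jq is installed and using the path above:

cd /var/lib/docker/containers/[hash_of_the_container]
jq '.PortBindings' hostconfig.json
# example value mapping container port 8080/tcp to host port 9090:
# { "8080/tcp": [ { "HostIp": "", "HostPort": "9090" } ] }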