Category: docker

Building dnsmasq docker-compose file with DHCP enabled

Creating Dockerfile

Create the Dockerfile (referenced as Dockerfile.dnsmasq in the compose file below); the VOLUME /data points to the configuration folder.

FROM ubuntu:latest

VOLUME /data

RUN apt update
RUN apt install dnsmasq -y
RUN apt install iproute2 -y

CMD dnsmasq -q -d --conf-file=/data/dnsmasq.conf --dhcp-broadcast
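The CMD above expects a dnsmasq.conf inside the mounted /data volume. A minimal DHCP-enabled sketch is shown below; the interface name and address ranges are assumptions to adapt to your network:

```
# /data/dnsmasq.conf (example values)
interface=eth0
# hand out leases from this range for 12 hours
dhcp-range=192.168.250.100,192.168.250.200,12h
# advertise gateway and DNS server to clients
dhcp-option=option:router,192.168.250.1
dhcp-option=option:dns-server,192.168.250.1
```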

Creating docker-compose.yml file

The cap_add parameter with NET_ADMIN must be added so dnsmasq can manage DHCP on the network interface.

version: '2'

services:
  dnsmasq:
    container_name: dnsmasq
    image: dnsmasq
    build:
      context: .
      dockerfile: Dockerfile.dnsmasq
    restart: unless-stopped
    volumes:
      - /app/dnsmasq/data:/data
    networks:
      - my_macvlan_250
    cap_add:
      - NET_ADMIN

networks:
  my_macvlan_250:
    external: true

This is equivalent to the command below

docker run --cap-add=NET_ADMIN --name dnsmasq -d -it --restart unless-stopped -v /app/dnsmasq/data:/data --network my_macvlan_250 dnsmasq
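The compose file marks my_macvlan_250 as external, so the network must exist before the container starts. It could be created along these lines, where the subnet, gateway, and parent interface are assumptions for your network:

```shell
docker network create -d macvlan \
  --subnet=192.168.250.0/24 \
  --gateway=192.168.250.1 \
  -o parent=eth0 \
  my_macvlan_250
```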

Update NextCloudPi

Updating NextCloudPi, or the NextCloud instance it runs, through the NextCloudPi WebUI while running in Docker can produce errors, which can lead to a 'user not found' error. The correct approach is to recreate the container using a new image.

Note: /data in the container must be mapped or backed up.

Update NextCloudPI Image

docker image pull ownyourbits/nextcloudpi-x86

Remove existing container

docker stop nextcloudpi
docker rm nextcloudpi

Recreate container

Use the previous docker run parameters, and make sure /data is mapped.

docker run -d -p 4443:4443 -p 443:443 -p 80:80 -v /app/nc/data:/data --name nextcloudpi ownyourbits/nextcloudpi-x86 $IP

Update NextCloudPi

Log in to NextCloudPi; it will update itself. Wait until the pigz process completes.

Restore data from backup (If required)

Go to Backup => nc-backup-auto in the NextCloudPi WebUI to find the backup path, and restore the latest backup if required.

Manual Update NextCloud

Go to Updates => nc-update-nextcloud in the NextCloudPi WebUI to update NextCloud manually.

Learning – Docker

Just to refresh my Docker knowledge.

Logs

docker logs -f --tail 100 nginx

Network

docker network create mongo-network

mongo and mongo-express

docker-compose

docker-compose -f docker-compose.yaml up
docker-compose -f docker-compose.yaml down

Dockerfile

FROM nginx:1.10.2-alpine
MAINTAINER my@example.com
ENV

RUN

COPY ./nginx.conf /etc/nginx/nginx.conf

CMD

build

docker build -t my-app:1.0 .

AWS ECR

Fully-managed Docker container registry

Default registry

docker pull mongo:4.2

same as

docker pull docker.io/library/mongo:4.2

Tag

docker tag my-app:latest <reg>/my-app:latest

Push

docker push <reg>/my-app:latest
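For ECR specifically, the registry host follows a fixed pattern; a sketch with a hypothetical account ID and region:

```shell
# Hypothetical account ID (123456789012) and region (us-east-1)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag my-app:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0
```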

Volume

C:\ProgramData\docker\volumes
/var/lib/docker/volumes

On Mac

# screen ~/Library/Containers/com.docker.docker/Data/com.docker.amd64-linux/tty
# ls /var/lib/docker/volumes

References

Docker Tutorial for Beginners [FULL COURSE in 3 Hours]

Learning – Dockerfile

This is to refresh my Dockerfile knowledge.

alpine

Create Dockerfile

FROM alpine:3.4
MAINTAINER Mark Takacs mark@takacsmark.com

RUN apk update
RUN apk add vim
RUN apk add curl

Build

docker build -t taka/alpine-smarter:1.0 .

Intermediate images

The Docker intermediate images can speed up the rebuilding process.

docker images -a

To reduce the number of intermediate images, update the Dockerfile as below

FROM alpine:3.4
MAINTAINER Mark Takacs mark@takacsmark.com

RUN apk update && \
    apk add vim && \
    apk add curl

Clean up dangling images

docker images --filter "dangling=true"
docker rmi $(docker images -q --filter "dangling=true")

python

Dockerfile

  • For normal Python

FROM python:3.6.1

RUN pip install numpy

  • The Alpine version takes much longer to build

FROM python:3.6.1-alpine

RUN apk update && apk add build-base
RUN ln -s /usr/include/locale.h /usr/include/xlocale.h

RUN pip install numpy scipy

conda3

  • miniconda3

docker run --rm -ti continuumio/miniconda3 /bin/bash
conda list
conda install numpy

  • anaconda3

docker run --rm -ti continuumio/anaconda3 /bin/bash
conda list

phpslim

Dockerfile

Choose php:7.1.2-apache

Go to https://getcomposer.org, and run the installation commands in the container

FROM php:7.1.2-apache

RUN ....

COPY ./composer.json /var/www/html/

RUN apt-get update && apt-get install -y git

RUN composer install

Map directory

Run the docker container with the option -v /var/www/html/vendor to indicate that the image's /var/www/html/vendor folder should be used.

docker run --rm -v $(pwd):/var/www/html/ -v /var/www/html/vendor -p 80:80 takacsmark/phpslim-tut:1.0

As a result, /var/www/html is mapped to the local directory, but /var/www/html/vendor remains the image's directory.

Change DocumentRoot

Add the following line to the Dockerfile to change the DocumentRoot to /var/www/html/public

RUN sed -i 's/DocumentRoot.*$/DocumentRoot \/var\/www\/html\/public/' /etc/apache2/sites-enabled/000-default.conf
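The sed substitution can be checked locally before baking it into the image; a quick sketch against a throwaway copy of the vhost line (the sample file content is an assumption):

```shell
# Recreate just the line to be rewritten (hypothetical copy of the vhost file)
printf 'DocumentRoot /var/www/html\n' > /tmp/000-default.conf

# Same substitution as in the Dockerfile
sed -i 's/DocumentRoot.*$/DocumentRoot \/var\/www\/html\/public/' /tmp/000-default.conf

cat /tmp/000-default.conf   # DocumentRoot /var/www/html/public
```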

Enable default hello message

Create a file /var/www/html/public/.htaccess

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [QSA,L]
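Apache's -f and -d RewriteCond flags test for an existing file or directory, much like the shell's own -f and -d; a request falls through to index.php only when neither matches. A rough shell analogy with made-up paths:

```shell
# Set up a fake docroot: one real file, one real directory
mkdir -p /tmp/docroot/assets
touch /tmp/docroot/logo.png

# Only a path that is neither a file nor a directory gets "rewritten"
for p in /tmp/docroot/logo.png /tmp/docroot/assets /tmp/docroot/users/42; do
  if [ ! -f "$p" ] && [ ! -d "$p" ]; then
    echo "rewrite: $p"   # prints: rewrite: /tmp/docroot/users/42
  fi
done
```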

Run the following command in the container to enable the rewrite module

a2enmod rewrite

Then add the following line to the Dockerfile

RUN a2enmod rewrite

Options

COPY

COPY ./config.txt /usr/src/config.txt

ADD

  • Add URL file from internet

  • Add tar file

  • Add compressed file

RUN

EXPOSE

Port

USER

WORKDIR

ARG

LABEL

ENTRYPOINT & CMD

To run the command echo Welcome after the container starts, the following configuration can be used. Note the exec form, so that the CMD value is passed to the ENTRYPOINT as an argument.

CMD ["Welcome"]
ENTRYPOINT ["echo"]

The CMD option can be replaced on the docker run command line, but ENTRYPOINT can't. For example, the following command will run echo Hello.

docker run echo_image 'Hello'

ENV

References

Dockerfile Tutorial by Example - ( Part I - Basics )
Dockerfile Tutorial by Example - ( Part II - Best practices )
Dockerfile Tutorial by Example - ( Part III - Creating a docker PHP Slim image )

Learning – Docker Swarm – Basic

Files

server.js

const express = require("express");
const os = require("os");

const app = express();

app.get("/", (req, res) => {
  res.send("Hello from Swarm " + os.hostname());
});

app.listen(3000, () => {
  console.log("Server is running on port 3000");
});

Dockerfile

FROM node:11.1.0-alpine

WORKDIR /home/node

COPY . .

RUN npm install

CMD npm start
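The RUN npm install and CMD npm start steps assume a package.json next to server.js; a minimal sketch (name and versions are assumptions):

```json
{
  "name": "swarm-example",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}
```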

docker-compose.yml

version: "3"

services:
  web:
    build: .
    image: takacsmark/swarm-example:1.0
    ports:
      - 80:3000
    networks:
      - mynet
    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  mynet:

requirements.txt

flask
redis

Nodes

docker swarm init
docker swarm init --advertise-addr eth1
docker swarm join ...
docker swarm leave -f
docker node demote
docker node promote

Stack

docker stack ls
docker stack ps

Services

docker service ls
docker service ps
docker service create

To run a service on all nodes

docker service create --name 'service_name' -p 8000:8000 --mode global demoapp

To run a service with 2 replicas

docker service create --name 'service_name' -p 8000:8000 --replicas 2 demoapp

Tasks

A task schedules a container; tasks are managed using the docker service command.

docker service ps nodeapp_web

Container

Containers should be managed using swarm commands, but they can also be seen using the container commands.

docker ps
docker kill

Deploy

docker stack deploy -c docker-compose.yml nodeapp

This will create the network nodeapp_mynet and the services defined in the compose file (nodeapp_web and nodeapp_visualizer)

Note: The images need to be pre-built using the docker-compose build command and pushed to Docker Hub.

List

docker stack ls
docker service ls
docker stack services nodeapp

scale

docker service scale nodeapp_web=4

To access all 4 replicas, just access http://localhost; all 4 replicas serve the same port on localhost.

Note: The hostname shown at http://localhost changes between requests

Sample of overlay network

  • Create overlay network

docker network create -d overlay myoverlay1

  • Create webapp on the overlay network

docker service create --name webapp1 -d --network myoverlay1 -p 8001:80 test/webapp

  • Create database on the overlay network

docker service create --name mysql -d --network myoverlay1 -p 3306:3306 test/mysql:5.5

Sample with docker-machine

Docker Machine creates a virtual machine with minimal Linux packages installed and Docker running.

docker-machine ls
docker-machine start myvm1
docker-machine start myvm2
docker-machine ssh myvm1

Init swarm

docker swarm init --advertise-addr eth1
docker swarm join --token ...

Push image

docker-compose push

Access master from docker machine

To access the docker swarm master inside a docker machine from another host

docker-machine env myvm1
eval $(docker-machine env myvm1)

Deploy

docker stack deploy -c docker-compose.yml nodeapp
docker stack ls
docker stack services nodeapp
docker-machine ls

After the services start, all nodes, including the master and worker nodes, will provide the services by routing requests to the correct host.

replicas and parallelism

    deploy:
      replicas: 6
      update_config:
        parallelism: 2
        delay: 10s

This indicates a total of 6 replicas, updated 2 at a time with a 10-second delay.

constraints

    deploy:
      placement:
        constraints: [node.role == manager]

This indicates the service will be deployed only on manager nodes.

Scale

docker service scale nodeapp_web=4

Monitor

docker stack deploy -c docker-compose.monitoring.yml nodemon
docker stack ls
docker stack services nodemon

The service nodemon_visualizer runs on port 8080.

Deploy to specific node

Label

docker node update myvm1 --label-add db=mongo
docker node inspect myvm1 -f {{.Spec.Labels}}

placement

    deploy:
      placement:
        constraints: [node.labels.db == mongo]

Redeploy

docker stack deploy -c docker-compose.yml nodeapp

Drain & Active

docker node update --availability drain myvm2
docker node update --availability active myvm2

force update

docker node update --availability=active myvm2
docker service update --force nodeapp_web

Change version number

Update the image: option in the docker-compose.yml file

Then build again

docker-compose build
docker-compose push
docker stack deploy -c docker-compose.yml nodeapp
docker stack ps nodeapp

Then the containers will be redeployed two at a time.

Cloud

AWS supports Docker Swarm.

References

Introduction to Docker Swarm | Tutorial for Beginners | Examples | Typical Tasks (Video)

Docker Swarm Tutorial | Code Along | Zero to Hero under 1 Hour

Learning – Docker Prune

Container

docker container prune
docker stop $(docker ps -q)
docker container ls -a
docker container prune -f
docker run --rm ...
docker swarm

Image

docker image ls
docker image prune -f

Network

docker network ls
docker network prune -f

Volume

docker volume ls
docker volume prune -f

System

docker system prune
docker system prune --volumes
docker system prune --volumes --all
docker system prune --volumes --help

References

Docker prune explained - usage and examples

Learning – Docker Swarm Network Drivers

Bridge

The default network driver. Ports must be mapped to the host in order to access a container's ports.

Host

Removes network isolation between the container and the Docker host, and uses the host's networking directly. Containers therefore cannot use ports that conflict with other containers or with the host.

The IP will be the same as host.

None

Disables all networking for containers. Usually used in conjunction with a custom network driver.

Overlay

Connects multiple Docker daemons together and enables swarm services to communicate with each other.

Using an overlay network, containers on different hosts can communicate with each other.

Macvlan

Allows you to assign a MAC address to a container, making it appear as a physical device on the network. The Docker daemon routes traffic to containers by their MAC addresses.

This allows a container to have its own IP address on the host network.

LXC/LXD vs Docker

Proxmox supports LXC, and TrueNAS supports Kubernetes. The difference between an LXC container and a Docker container is that LXC runs a full OS (minus the kernel), while a Docker container only runs an application.

Persistent docker container

A Docker container can also be saved as an image for later use, but its execution parameters cannot be saved. To relaunch it, a docker compose file is a good choice if nothing changes after the container is created.

LXC is persistent

LXC is like a running VM that shares the kernel and drivers with the host, so the OS and its configuration live inside the LXC container.

The disadvantages of LXC are

References

LXC/LXD vs Docker Which is better?
Linux Container (LXC) Introduction

Unable to list docker containers

Issue

When listing docker containers with the docker ps -a command, nothing shows up, yet systemctl status docker shows running processes and the docker container services are still running on the server.

Fix

The issue was fixed by removing the snap versions of docker and lxc.

References

Docker containers running, but not showing up in docker ps

Docker Compose – Flask/Redis sample

Files

app.py

from flask import Flask, request, jsonify
from redis import Redis

app = Flask(__name__)
redis = Redis(host="redis", db=0, socket_timeout=5, charset="utf-8", decode_responses=True)

@app.route('/', methods=['POST', 'GET'])
def index():

    if request.method == 'POST':
        name = request.json['name']
        redis.rpush('students', {'name': name})
        return jsonify({'name': name})

    if request.method == 'GET':
        return jsonify(redis.lrange('students', 0, -1))

Dockerfile

FROM python:3.7.0-alpine3.8

WORKDIR /usr/src/app

COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV FLASK_APP=app.py

CMD flask run --host=0.0.0.0

Build

docker-compose build

Network

docker network ls

This will show a network named flask-redis_default, used by both the flask and redis containers. The containers can use service names to communicate with each other.

Use parameters and arguments

Create a file called .env

PYTHON_VERSION=3.7.0-alpine3.8
REDIS_VERSION=4.0.11-alpine
DOCKER_USER=takacsmark
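docker-compose reads .env from the project directory automatically and substitutes ${PYTHON_VERSION}-style placeholders in docker-compose.yml, much like shell parameter expansion (this is just an analogy, not Compose itself):

```shell
# Shell analogy of Compose's ${VAR} substitution
PYTHON_VERSION=3.7.0-alpine3.8
echo "image: python:${PYTHON_VERSION}"   # image: python:3.7.0-alpine3.8
```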

In the docker-compose.yml file, replace the environment section with an env_file: .env section, then use environment variables to define build arguments

version: "3"

services:
  app:
    build:
      context: .
      args:
        - IMAGE_VERSION=${PYTHON_VERSION}
    image: ${DOCKER_USER}/flask-redis:1.0
    env_file: .env
    ports:
      - 80:5000
    networks:
      - mynet
  redis:
    image: redis:${REDIS_VERSION}
    networks:
      - mynet
    volumes:
      - mydata:/data

networks:
  mynet:
volumes:
  mydata:

In the Dockerfile, use the argument from the docker-compose.yml file

ARG IMAGE_VERSION

FROM python:$IMAGE_VERSION

WORKDIR /usr/src/app

COPY requirements.txt ./

RUN pip install --no-cache-dir -r requirements.txt

COPY . .

ENV FLASK_APP=app.py

CMD flask run --host=0.0.0.0

docker and docker-compose commands

docker-compose ps
docker-compose logs
docker-compose logs -f app
docker-compose stop
docker-compose start
docker-compose restart
docker-compose kill
docker-compose top

exec

docker-compose exec redis redis-cli lrange students 0 -1
docker-compose exec app /bin/bash

run

Starts a new container, but port mappings are not applied to the new container by default.

docker-compose run app ls -al

Desired state

Docker compose defines the desired state, and will update existing containers to match the configuration when possible.

For example, after updating a port in the docker-compose.yml file and running docker-compose up -d again, only the app service will be recreated.

Note: There is no need to destroy existing containers before running up again.

Scale

docker-compose up --scale app=3

Note: If the ports section is defined, scaling will cause port mapping conflicts.

Network

In docker-compose.yml file

services:
  app:
    networks:
      - mynet

networks:
  mynet:

Then a network called <project_name>_mynet will be created.

The old network can be pruned

docker network prune

External network

A network can be defined outside of the docker-compose.yml file, in which case it is declared as an external network

networks:
  outside:
    external:
      name: actual-name-of-network
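An external network is not created by docker-compose up; it has to exist beforehand, created along these lines:

```shell
docker network create actual-name-of-network
```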

volumes

docker volume ls

In docker-compose.yml file

services:
  redis:
    volumes:
      - mydata:/data

volumes:
  mydata:

Then a volume called <project_name>_mydata will be created.

Configuration Override

Single override file

The options in docker-compose.override.yml will overwrite or extend options in docker-compose.yml.

Note: Array options will be extended.

Multiple override files

docker-compose -f docker-compose.yml -f docker-compose.override.yml -f docker-compose.dev.yml up -d

Migrate to cloud

Push images to docker hub

docker-compose push

Pull and run in play with docker (PWD)

docker-compose pull
docker-compose up -d

Sample data

Add sample data into redis

curl --header "Content-Type: application/json" \
--request POST \
--data '{"name":"Bruno"}' \
http://ipxxxxxxxxx.direct.labs.play-with-docker.com/

To access the sample data

curl http://ipxxxxxxxxx.direct.labs.play-with-docker.com/

Compose file reference

https://docs.docker.com/compose/compose-file/

Note: some options only work in swarm mode or in compose mode. For example, deploy only works in swarm mode, while build only works in compose mode.

References

Docker compose tutorial for beginners by example [all you need to know] (Video)
Docker compose tutorial for beginners by example [all you need to know]