Docker

Summary

Docker simplifies the development and deployment of applications by providing a lightweight, portable, and consistent containerized environment. It bridges the gap between development and production, enabling developers to focus on building applications without worrying about environment-specific issues. The applications run consistently across different computing environments, whether on a developer's laptop, a test server, or in production.

Advantages of Docker

  • Portability: Containers ensure applications behave the same regardless of the environment (development, testing, production).
  • Efficiency: Containers use shared OS resources, making them faster and less resource-intensive compared to VMs.
  • Scalability: Docker enables rapid scaling of applications by spinning up multiple container instances as needed.
  • Isolation: Each container runs independently, preventing conflicts between applications.

Key Concepts of Docker

Containers:

Containers are lightweight virtualized environments that package an application along with its libraries, dependencies, and configuration files. Unlike traditional virtual machines (VMs), containers share the host system's kernel, making them faster and more resource-efficient.

Images:

Docker images are the building blocks for containers. An image is a static snapshot of an environment that contains all necessary dependencies for an application. Images are created using a Dockerfile and can be stored and shared via a Docker registry like Docker Hub.

Docker Engine:

The Docker Engine is the runtime responsible for building and running containers.

Dockerfile:

A text file containing instructions to build a Docker image (e.g., which base image to use, dependencies to install, files to copy).

Docker Compose:

A tool to define and run multi-container applications using a YAML file.
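
A minimal sketch of such a YAML file (docker-compose.yml), e.g. for a web server plus a database; image names, ports and the password are illustrative:

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=example

The stack is then started with docker compose up -d and stopped again with docker compose down.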

How Docker Works

Build Phase:

Developers write a Dockerfile to specify the base image (e.g., Ubuntu, Node.js, Python) and define how the application and its dependencies should be configured. Using the command docker build, Docker creates a layered image.

Run Phase:

Using the docker run command, Docker launches a container based on the built image. Containers start in a matter of seconds.

Networking:

Docker creates isolated networks for containers to communicate with each other and the outside world securely.

Storage:

Docker provides volumes for persistent storage, ensuring data remains even if a container is restarted or removed.

Container Orchestration:

Tools like Docker Compose and Kubernetes are used to manage and scale multiple containers in production environments.

Workflow example

  1. Write a Dockerfile to package the application.
  2. Build the Docker image using docker build.
  3. Run the image as a container using docker run.
  4. Use Docker Compose to manage multiple containers for a complete application (e.g., web server + database).

Docker Image

Docker images are the building blocks for containers. An image is a static snapshot of an environment that contains all necessary dependencies for an application.

Images can either be built, or existing images can be pulled from a registry.

Docker Registry

By default, Docker pulls images from Docker Hub, the default public registry for Docker images.
For example, with image: 'jc21/nginx-proxy-manager:latest', Docker will search for the image jc21/nginx-proxy-manager on Docker Hub and pull the latest tag (version).

If the image is hosted on a different container registry (e.g., Amazon Elastic Container Registry, Google Container Registry, or a private registry), you must provide the full registry URL as a prefix, e.g. image: 'myregistry.example.com/myimage:latest'. Docker will then pull the image from myregistry.example.com.

Before attempting to download the image, Docker checks if the image already exists locally in the cache. If found, it uses the local copy.

If the registry requires authentication, you must first log in using docker login <registry_url>; the stored credentials are then also used when pulling images via Docker Compose.

docker login <registry_url>
docker pull <image>:<tag>
etc.
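
A concrete sketch (the private registry host is the hypothetical myregistry.example.com from above):

docker login myregistry.example.com
docker pull myregistry.example.com/myimage:latest

# public image from Docker Hub, no login needed:
docker pull jc21/nginx-proxy-manager:latest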

Own App: Dockerfile

A Dockerfile is a simple text file used to build your own images. Every Dockerfile is based on an existing base image (e.g. nginx:latest or node:16.13.0-alpine). The image is built with docker build before it can be started with docker run.

Dockerfile documentation: https://docs.docker.com/reference/builder

Build Docker image

Run the following in the directory where the Dockerfile is located:

docker build .

To give the image a name and a tag (1.0 is the tag in the following example):

docker build -t node-app:1.0 .

Layers

The commands RUN, COPY and ADD generate a new layer if the relevant files have changed; otherwise the cached layers are reused.

If another image is based on the same base image, e.g. node:16.13.0-alpine, Docker detects this and does not store the base layers again. docker images shows the aggregated image size, not the actual disk usage: a second image takes up much less space if its base image is already present in another image.

Example

# Image to build upon
FROM node:16.13.0-alpine

# switch to the user "node" (already present in the official node image)
USER node

# set working directory
WORKDIR /home/node

# add files that will not change first
# creates new layer if file changes:
ADD --chown=node:node package.json .
# creates new layer if file changes:
ADD --chown=node:node package-lock.json .

# ... so that npm can be installed based on these files
# creates new layer if previous files had changed:
RUN npm install

# each RUN creates a new layer, so it might be good to group commands. For example,
# combine "admin" tools in one RUN and keep a separate RUN for the app dependencies
# (the npm install above already covers those).
RUN apk update && \
    apk add curl wget

# and then add the rest of the files, like app.js
# creates new layer if files change:
ADD --chown=node:node . .

# execute node app.js
CMD [ "node", "app.js" ]

Multi stage build

Used when a compile step is needed, for example for TypeScript. The following Dockerfile defines two stages: the builder stage compiles the code, the final image copies only the build output and installs only the production dependencies.

FROM node:16.13.0-alpine AS builder

USER node 
WORKDIR /home/node

ADD --chown=node:node package.json .
ADD --chown=node:node package-lock.json .

RUN npm install

ADD --chown=node:node . .

RUN npx tsc

# ---------------------------------------

FROM node:16.13.0-alpine

USER node 
WORKDIR /home/node

COPY --from=builder /home/node/package.json ./package.json
COPY --from=builder /home/node/package-lock.json ./package-lock.json
COPY --from=builder /home/node/build ./build
COPY --from=builder /home/node/public ./public

RUN npm install --production

CMD [ "node", "app.js" ]

Show all docker images

docker images 
docker image ls   # alternative

Example:

rogrut@zidbacons02:/$ docker images
REPOSITORY                                                          TAG              IMAGE ID       CREATED         SIZE
docker                                                              dind             0f7ea23310b3   3 weeks ago     397MB
docker                                                              <none>           7a9eec921ea3   2 months ago    378MB
cr.gitlab.uzh.ch/dba/digicert/export-digicert/main                  e9af5b08         95248f27c850   2 months ago    261MB
caddy                                                               latest           1b7d0a82297a   3 months ago    48.5MB
alpine                                                              <none>           b0c9d60fc5e3   3 months ago    7.83MB
curlimages/curl                                                     latest           7551dbeefe0d   4 months ago    21.8MB

Delete Docker image

docker rmi <REPOSITORY:TAG>
docker rmi ubuntu:20.04

Add Tag to Docker image

The following gives the image an additional tag; both tags point to the same IMAGE ID.

docker tag <REPOSITORY>:<TAG> <REPOSITORY>:<new tag, e.g. 1.0 instead of latest>

Push Docker image to repository

The following adds a tag that includes the registry URL, so the image can be pushed to that registry, and then pushes it.

docker tag myApp:latest repository-example.com/rogerrutishauser/myApp:latest
docker login repository-example.com
docker push repository-example.com/rogerrutishauser/myApp:latest

Docker Container

A Docker container is a running instance of a Docker image. When an image is executed with docker run, it is referred to as a container. It is comparable to a process.

Run docker image

A Docker image, which was either pulled from a registry or built on the system, can be run with the following command (a Docker Compose equivalent is sketched after the option list):

docker run --init -d -p 3000:80 -v /home/roger/meine-dateien:/home/daten -e FOO='bar' -e FOO2='bar2' --network testnetwork --network-alias myapp --name myApp node-app:1.0
  • --init Optional, but highly recommended. Tells Docker to run a tiny init process (based on Tini) as PID 1 inside the container before your command.
  • -d Optional. For detached mode (run in background)
  • -p 3000:80 Optional. Maps port 3000 on the host to port 80 inside the container
  • -v Optional. Bind mount
  • -e Optional. Environment variable that is passed to the container
  • --network testnetwork Optional. Use own network instead of bridge.
  • --network-alias myapp Optional. DNS alias of the container in that network. Useful e.g. to reach a database container from the app container by name instead of by its IP.
  • --name Optional. Name for container.
  • --restart Optional. Restart policy. no, on-failure:5, always
  • node-app:1.0 image/tag to be started.
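
For reference, a rough Docker Compose equivalent of the docker run command above (a sketch; it assumes testnetwork was created beforehand, see Docker Networking below):

services:
  myapp:
    image: node-app:1.0
    container_name: myApp
    init: true
    ports:
      - "3000:80"
    volumes:
      - /home/roger/meine-dateien:/home/daten
    environment:
      - FOO=bar
      - FOO2=bar2
    restart: always
    networks:
      testnetwork:
        aliases:
          - myapp
networks:
  testnetwork:
    external: true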

Share data

There are two ways to share data with a container: bind mounts, which map host directories into the container, and named volumes, which are stored and handled internally by Docker (cf. the Docker configuration to choose where they are actually stored).

Bind mounts

Directories on the host system are mapped into the container with -v.

docker run -v <HOST DIR>:<DOCKER DIR>

# Example:
docker run -it -v ~/Desktop/LokaleDaten:/home/daten --name Datentest ubuntu:20.04 /bin/bash

Named volumes

A volume is a folder somewhere on the host (where exactly depends on the OS and the Docker configuration); it does not really matter where it is. Create it:

# create volume "myfiles"
docker volume create myfiles

# check if it exists:
docker volume ls

You can then connect it to the container with:

docker run -v <docker volume name>:<DOCKER DIR>

# Example:
docker run -v myfiles:/home/roger/daten --name Datentest ubuntu:20.04 /bin/bash

Delete volume:

docker volume rm myfiles

# delete all unused volumes, i.e. those not used by any container
docker volume prune

Data container

You can create a container that only serves as a data box. The following creates such a container with the volume /data/db. It does not need to run, so docker create (instead of docker run) is enough.

docker create -v /data/db --name datenbox busybox true

BusyBox is a small, monolithic program that bundles a variety of standard Unix tools in a single executable. It is often called the "Swiss army knife" for embedded or lightweight Linux systems. BusyBox also contains the command true, which always exits with code 0 and is used in scripts or as a placeholder command (as in the docker create ... busybox true above).

Now you can use the data container in any other container with --volumes-from, like this:

docker run -it --volumes-from datenbox alpine:latest /bin/sh
cd /data/db
touch hello.txt   # the file is now in the data container
exit

Check if the file is there:

  1. With a "helper" container:
docker run --rm --volumes-from datenbox busybox ls /data/db
  2. Copy the file to the local host:
docker cp datenbox:/data/db/hello.txt ./hello.txt

Environment Variables

Set env variable and pass it to the container

docker run -it --name TestEnv -e MY_VAR="hello world" ubuntu:20.04 /bin/bash
env

Use existing env variable from the host and pass it to the container

docker run -it --name TestEnv -e MY_VAR ubuntu:20.04 /bin/bash
env

Pass a list of env variables to the container

docker run -it --name TestEnv --env-file ./env.list ubuntu:20.04 /bin/bash
env
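
The env.list file contains one variable per line; lines starting with # are ignored. For example (illustrative values):

MY_VAR=hello world
MY_VAR2=foo bar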

Docker Networking

The most commonly used type of Docker network is bridge; the default network of this type is also named "bridge". Every container is attached to this bridge network by default, which means that all containers share the same network unless configured otherwise.

Show current networks:

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
0dd07c40b577   bridge    bridge    local
e33da267b703   host      host      local
54c49b8d1c44   none      null      local

Create individual bridge network

docker network create testnetwork

Start a container using this network

Within such a network, all ports of every container are reachable by the other containers in the network; -p is only needed to publish ports to the host.

docker run --network testnetwork ...
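
A small sketch to verify that containers in the network can reach each other by their network alias (container and alias names are illustrative; curlimages/curl also appears in the image list above):

docker network create testnetwork
docker run -d --name webtest --network testnetwork --network-alias web nginx:latest
docker run --rm --network testnetwork curlimages/curl curl -s http://web
docker rm -f webtest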

Attach a running container to a network

docker network connect testnetwork <ID of container>

Detach

docker network disconnect testnetwork <ID of container>

Container commands

List all containers

docker ps -a
docker container ls -a   # alternative

# only running containers:
docker ps
docker container ls     # alternative

Show output of a container

docker logs  <docker id>
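
To follow the output live and limit how much history is shown (standard docker logs options):

docker logs -f --tail 100 <docker id>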

Stop container / restart

docker stop <containername>
docker restart <containername>

Stop container (faster) / delete container

docker kill <containername>
docker rm <containername>

Run command inside container

sudo docker exec -it container-name mysql -uroot -p

Import DB in container

sudo docker exec -i wp_db mysql -h 172.17.0.1 -P 3306 --protocol=tcp -uroot -p wp_baum < /var/www/wordpress-from-docker/wp_baum.sql

Backup DB in docker container

docker exec -it wordpress-baumfreunde_db_1 mysqldump --add-drop-table -uroot -pXXX wp_baum > /home/roru/wordpress-baumfreunde/wp_baum_backup.sql

Bash in container

sudo docker exec -it <container-name> /bin/bash   # or /bin/sh for Alpine

# as root
docker exec -u root -it <container-name> /bin/bash

Copy file from host to docker container

sudo docker cp "file.txt" c30c199ec89c:/home/actions

Copy folder from docker container to host

sudo docker cp "c30c199ec89c:/home/actions/conf /home/rogrut

Get IP of docker container

docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name

General Docker commands

Cleanup

Deletes all stopped containers, as well as all images, networks and volumes that are not used by at least one container, plus the build cache.

docker system prune --all --volumes

Docker logs

journalctl -xu docker.service