Table of Contents
- Docker
- Summary
- Advantages of Docker
- Key Concepts of Docker
- How Docker Works
- Workflow example
- Docker Image
- Docker Hub
- Docker Container
- Docker Volumes
- Docker Commands
- List all containers
- List all images
- Show a container's output
- Docker logs
- Start a container
- Stop / restart a container
- Run a command in a Docker container
- Import a DB into a Docker container
- Back up a DB in a Docker container
- Bash in a container
- Copying files/folders
- Read a container's IP address
- Remove a container
- Sharing data
- Option 1: Map directories (mount a host directory into the container)
- Option 2: Use a data container (container-to-container)
- Linking containers
- Docker compose
Docker
Installation: See separate page Docker Installation
Summary
Docker simplifies the development and deployment of applications by providing a lightweight, portable, and consistent containerized environment. It bridges the gap between development and production, enabling developers to focus on building applications without worrying about environment-specific issues. The applications run consistently across different computing environments, whether on a developer's laptop, a test server, or in production.
Advantages of Docker
- Portability: Containers ensure applications behave the same regardless of the environment (development, testing, production).
- Efficiency: Containers use shared OS resources, making them faster and less resource-intensive compared to VMs.
- Scalability: Docker enables rapid scaling of applications by spinning up multiple container instances as needed.
- Isolation: Each container runs independently, preventing conflicts between applications.
Key Concepts of Docker
Containers:
Containers are lightweight virtualized environments that package an application along with its libraries, dependencies, and configuration files. Unlike traditional virtual machines (VMs), containers share the host system's kernel, making them faster and more resource-efficient.
Images:
Docker images are the building blocks for containers. An image is a static snapshot of an environment that contains all necessary dependencies for an application. Images are created using a Dockerfile and can be stored and shared via a Docker registry like Docker Hub.
Docker Engine:
The Docker Engine is the runtime responsible for building and running containers.
Dockerfile:
A text file containing instructions to build a Docker image (e.g., which base image to use, dependencies to install, files to copy).
Docker Compose:
A tool to define and run multi-container applications using a YAML file.
How Docker Works
Build Phase:
Developers write a Dockerfile to specify the base image (e.g., Ubuntu, Node.js, Python) and define how the application and its dependencies should be configured. Using the command docker build, Docker creates a layered image.
Run Phase:
Using the docker run command, Docker launches a container based on the built image. Containers start in a matter of seconds.
Networking:
Docker creates isolated networks for containers to communicate with each other and the outside world securely.
Storage:
Docker provides volumes for persistent storage, ensuring data remains even if a container is restarted or removed.
Container Orchestration:
Tools like Docker Compose and Kubernetes are used to manage and scale multiple containers in production environments.
Workflow example
- Write a Dockerfile to package the application.
- Build the Docker image using docker build.
- Run the image as a container using docker run.
- Use Docker Compose to manage multiple containers for a complete application (e.g., web server + database).
Docker Image
Docker images are the building blocks for containers. An image is a static snapshot of an environment that contains all necessary dependencies for an application.
Images are created using a Dockerfile and can be stored and shared via a Docker registry like Docker Hub.
Dockerfile
A Dockerfile is a plain text file used to build your own images. It is always based on an existing base image (e.g. nginx:latest). The image is created with docker build before it can be started with docker run.
Building Image
In the directory containing the Dockerfile, run: docker build -t node-app:1.0 ., where node-app is an arbitrary name for the image, followed by the version tag. Then start it with docker run -d -p 80:3000 node-app:1.0 if you want it reachable externally on port 80.
Dockerfile documentation: https://docs.docker.com/reference/builder
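A minimal Dockerfile for the node-app image built above might look like this (the base image, port, and file names are assumptions for illustration, not taken from the original setup):

```dockerfile
# Base image: official Node.js runtime
FROM node:18-alpine
# Working directory inside the image
WORKDIR /app
# Copy dependency manifests first to benefit from layer caching
COPY package*.json ./
RUN npm install --production
# Copy the application code
COPY . .
# The app is assumed to listen on port 3000 (matches -p 80:3000 above)
EXPOSE 3000
CMD ["node", "server.js"]
```

Each instruction creates an image layer; copying package*.json before the rest of the code means npm install is only re-run when the dependency manifest changes.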
Docker Hub
Pre-built images are available here.
$ docker login
$ docker pull
etc.
Docker Container
A container is an image that is currently being executed. When an image is run with docker run nginx, we speak of a container. It is comparable to a process. A container is always started from a base image.
Docker Volumes
There are three volume types:
- Docker volumes, which are stored and handled internally by Docker (cf. the Docker configuration to choose where they are actually stored).
version: '3.9'
services:
  caddy:
    image: caddy:2.6.2
    volumes:
      - caddy_data:/data
volumes:
  caddy_data:
- Bind mounts, which give a container direct access to the host file system
version: '3.9'
services:
  caddy:
    image: caddy:2.6.2
    volumes:
      - /opt/docuteam/ssl/certifcate.pem:/cert.pem:ro
- Bind mounts of remote shares, which are defined through Docker volumes
version: '3.9'
services:
  fedora:
    image: docker.cloudsmith.io/docuteam/docker/fcrepo:6.2.0
    volumes:
      - fedora_data:/fcrepo_home
volumes:
  fedora_data:
    driver_opts:
      type: cifs
      device: //remote-hostname.com/path/to/share/fedora
      o: addr=remote-hostname.com,username=user,password=mysuperpassword,nodev,noexec,nosuid,vers=2.1,uid=1000,gid=1000
Docker Commands
List all containers
sudo docker ps -a
Only running containers:
sudo docker ps
List all images
sudo docker images
Show a container's output
docker logs <docker id>
Docker logs
journalctl -xu docker.service
Start a container
docker run --name Test_run ubuntu:20.04
Stop / restart a container
docker stop
docker restart
Run a command in a Docker container
E.g. MySQL, where wordpress-baumfreunde_db_1 is the container name, which you can find with docker ps.
sudo docker exec -it wordpress-baumfreunde_db_1 mysql -uroot -p
Import a DB into a Docker container
sudo docker exec -i wp_db mysql -h 172.17.0.1 -P 3306 --protocol=tcp -uroot -p wp_baum < /var/www/wordpress-from-docker/wp_baum.sql
Back up a DB in a Docker container
docker exec -it wordpress-baumfreunde_db_1 mysqldump --add-drop-table -uroot -pXXX wp_baum > /home/roru/wordpress-baumfreunde/wp_baum_backup.sql
Bash in a container
sudo docker exec -it <container-name> /bin/bash
# Alpine
sudo docker exec -it <container-name> /bin/sh
# as root
docker exec -u root -it <container-name> /bin/bash
Copying files/folders
Copy file from host to docker
sudo docker cp "file.txt" c30c199ec89c:/home/actions
Copy folder from docker to host
sudo docker cp c30c199ec89c:/home/actions/conf /home/rogrut
Read a container's IP address
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name
Remove a container
To remove a container: docker stop Test_run, then docker rm Test_run.
Sharing data
Option 1: Map directories (mount a host directory into the container)
docker run -v <HOST DIR>:<DOCKER DIR>
For example:
$ docker run -it -v ~/Desktop/LokaleDaten:/home/daten --name Datentest ubuntu:20.04 /bin/bash
Option 2: Use a data container (container-to-container)
Create a data container with the volume /data/db
docker create -v /data/db --name datenbox busybox true
Create a new container for the application
$ docker run -it --volumes-from datenbox ubuntu:20.04 /bin/bash
$ cd /data/db
$ touch datei.txt
$ exit
The file is now in the data container under /data/db. The data container does not even need to be started in order to use it.
Linking containers
Connecting ports
Example: start the image martinblaha/testapp, localhost port 4000, Docker port 1337
docker run -it -d --name myApp -p 4000:1337 martinblaha/testapp sh -c 'cd /var/www/test/myapp && sails lift'
Environment variables
Variant 1
$ docker run -it --name TestEnv -e MY_VAR="hello world" ubuntu:20.04 /bin/bash
$ env
Variant 2
This automatically passes the value of MY_VAR from the host system:
$ docker run -it --name TestEnv -e MY_VAR ubuntu:20.04 /bin/bash
$ env
Variant 3
Pass a list of environment variables from a file
$ docker run -it --name TestEnv --env-file ./env.list ubuntu:20.04 /bin/bash
$ env
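The env.list file passed with --env-file is a plain text file with one KEY=value pair per line; the variable names and values below are made-up placeholders:

```
# lines starting with # are ignored
MY_VAR=hello world
DB_HOST=172.17.0.1
DB_PORT=3306
```

All variables in the file become environment variables inside the container, just as if each had been passed with its own -e flag.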
Link between two containers
A secure tunnel between the two containers; no ports need to be opened. The following happens:
- The source container's environment variables are published in the target container.
- Entries pointing to the source container are added to /etc/hosts of the target container.
Example:
Create an Ubuntu 20.04 container and link it to the mongodb container.
In --link mongodb:mongo, mongodb is the container name (the NAMES column of docker ps) and mongo is the alias used inside the new container.
$ docker run -it -P --link mongodb:mongo ubuntu:20.04 /bin/bash
Docker compose
- Purpose: Defines and manages multi-container Docker applications.
- Usage: Orchestrates multiple services (containers), networks, and volumes for an application.
- Key Features:
- Describes how to run one or more containers together.
- Can manage container networking and persistent storage.
- Useful for applications with multiple services (e.g., a web app + database).
- Output: A running application consisting of multiple containers.
docker-compose.yml is the file which includes all necessary information. It can include multiple services, such as web (built from a Dockerfile) and db (pulled from Docker Hub).
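A sketch of such a docker-compose.yml, with a web service built locally and a db service pulled from Docker Hub (the service names, ports, and credentials are illustrative placeholders, not from the original setup):

```yaml
version: '3.9'
services:
  web:
    build: .               # built from the Dockerfile in this directory
    ports:
      - "8080:3000"        # host port 8080 -> container port 3000
    depends_on:
      - db                 # start the database first
  db:
    image: mysql:8.0       # pulled from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db_data:/var/lib/mysql   # named volume for persistent data
volumes:
  db_data:
```

Running docker-compose up -d in this directory would build web, pull mysql:8.0, and start both containers on a shared network where web can reach the database under the hostname db.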
Image Location
services:
  nproxy-app:
    image: 'jc21/nginx-proxy-manager:latest'
Docker Hub:
By default, Docker pulls images from Docker Hub, the default public registry for Docker images.
In the example image: 'jc21/nginx-proxy-manager:latest' Docker will search for the image jc21/nginx-proxy-manager on Docker Hub and pull the latest tag (or version).
Other Registries:
If the image is hosted on a different container registry (e.g., Amazon Elastic Container Registry, Google Container Registry, or a private registry), you must provide the full registry URL as a prefix, like e.g. image: 'myregistry.example.com/myimage:latest'. Docker will pull the image from myregistry.example.com.
Local cache
Before attempting to download the image, Docker checks if the image already exists locally. If found, it uses the local copy.
Authentication
If the registry requires authentication, you must log in using docker login <registry_url> or configure credentials in the Docker Compose file.
Local Image
Use build instead of image.
services:
  my-local-app:
    build: .
build: . tells Docker Compose to look for a Dockerfile in the same directory as the docker-compose.yml.
If the Dockerfile is in a subdirectory, specify the context and Dockerfile path:
services:
  my-local-app:
    build:
      context: ./my-app
      dockerfile: Dockerfile.dev
- context: specifies the directory containing the application code (the build context).
- dockerfile: specifies the name of the Dockerfile if it's not the default Dockerfile.
You can mix local builds and images from a registry in the same docker-compose.yml file:
services:
  my-local-app:
    build: .
    ports:
      - "8080:8080"
  redis:
    image: redis:latest
Commands
Start
Go to the directory containing docker-compose.yml, then run docker-compose up -d. With -d it runs in the background.
Stop
- The docker-compose stop command will stop your containers, but it won't remove them.
- The docker-compose down command will stop your containers, but it also removes the stopped containers as well as any networks that were created.
- You can take down one step further and add the -v flag to remove all volumes too. This is great for doing a full-blown reset on your environment by running docker-compose down -v.
Events
docker compose events