I have three nodes with docker in swarm mode. I deploy from my machine using contexts to target remote host.
I have the following docker-compose.yml:
version: "3.9"
services:
...
nginx:
image: 'nginx:1.23.3-alpine'
ports:
- 8080:80
volumes:
- ./conf:/etc/nginx/conf.d
depends_on:
- ui
...
How can I deliver the ./conf directory to one of the docker hosts? I found an outdated and inconvenient way; is there a more recent solution (for example, declaring it directly in the docker-compose.yml)?
The simple answer is, you don't. Docker Compose does not support this directly.
However, there are options that involve varying amounts of refactoring of your deployment process and they include:
Create a conf folder at an absolute path on the remote server (/mnt/conf etc.) and reference that, then deliver the files via some other process (scp etc.).
Create a "conf" volume remotely, and populate it using a docker image that you build that carries the files. (There is a syntax to mount a filesystem from another container, IDK if its compose compatible, but you could just mount a container that you build with the contents of ./conf You will need a registry to store the image so you can build it locally, but reference it remotely. registry:2 is easy to deploy)
If "conf" contains 1 to a few files, then enable swarm mode remotely, and mount the individual files as docker configs. This means using docker -c remote stack deploy rather than docker -c remote compose up.
Make "conf" shared on a nfs server, and declare the volume using docker local volume drivers option that supports nfs (or other fstab compatible) mounts. Alternative put the files in a s3 bucket (AWS or using a product like minio) and use the same syntax to use the "s3fs" fuse driver (if you don't use a containerised fuse driver, the fs driver will need to be installed on the remote host)
Use an actual docker volume plugin (e.g. https://rclone.org/docker/) to mount a wide variety of network shares into a compose or swarm service.
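For the docker configs option, a minimal sketch of what the stack file could look like, assuming ./conf holds a single default.conf (the file and config names are assumptions):
version: "3.9"
services:
  nginx:
    image: 'nginx:1.23.3-alpine'
    ports:
      - 8080:80
    configs:
      - source: nginx_conf
        target: /etc/nginx/conf.d/default.conf
configs:
  nginx_conf:
    file: ./conf/default.conf
Deploying with docker -c remote stack deploy -c docker-compose.yml mystack sends the config contents along with the deploy, so nothing has to be copied to the remote host beforehand.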
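For the NFS option, a sketch of the volume declaration using the local driver, assuming an export at nfs.example.com:/export/conf (server and path are assumptions):
services:
  nginx:
    image: 'nginx:1.23.3-alpine'
    volumes:
      - conf:/etc/nginx/conf.d
volumes:
  conf:
    driver: local
    driver_opts:
      type: nfs
      o: addr=nfs.example.com,ro
      device: ":/export/conf"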
The project I am working on consists of multiple components, written in C#/.NET 6 and deployed as docker containers on a Linux host. Each component has its own git repository on Gitlab, and its Gitlab pipeline builds the docker image and pushes it to the Gitlab container registry. For instance, one component is called "runtime", another one "services", etc.
All docker containers are defined in a docker-compose.yml file: the suite is started with a "docker compose up" command.
I created an 'integration test' project to check the data exchange between the running containers. I have a lot of complex Linux shell scripts to prepare the mock data and so on for the tests. I have a bunch of tests written in Python running in a py-env on the Linux host, and also some other tests written in C# running in a dedicated docker container.
I actually have different test scenarios or test groups: each group has its own docker-compose-integration_$group.yml file to set up e.g. the mocked services.
All of this is run with
docker compose -f docker-compose.yml -f docker-compose-integration_$group.yml up -d
In multiple services defined in the docker compose file, I set up docker volumes to be able to check the data generated by the containers within my tests. For instance, the following is an extract of my docker-compose-integration_4.yml file for the 4th group of tests, which uses the C# tests running in the dedicated 'integration-tests-dotnet' container:
runtime:
  extends:
    file: ./docker-compose.yml
    service: runtime
  volumes:
    - ./runtime/config_4.ini:/etc/runtime/config.ini
    - ${OUTPUT:-./output}/runtime/:/runtime/output/
integration-tests-dotnet:
  volumes:
    # share config for the current group, same as for runtime.
    - ./runtime/config_4.ini:/etc/runtime/config.ini
    # share the test data and the output folders from runtime and services.
    - ./testData/:/opt/testData/
    - ./output/runtime/:/opt/runtime/output/
    - ./output/services/:/opt/services/output/
    # share the report file generated by the tests.
    - ./output/integration/:/app/output/
Everything is running nicely on a Linux machine, on WSL2 on my Windows PC, or on a colleague's Mac.
The integration-test project has its own Gitlab pipeline.
Now we would like to be able to run the integration tests within the Gitlab pipeline, i.e. run "docker compose" from a Gitlab runner.
I already have a 'docker in docker' capable runner, and added the following job to my .gitlab-ci.yml:
run-integration-tests:
  stage: integration-tests
  variables:
    DOCKER_TLS_CERTDIR: ''
    DOCKER_HOST: tcp://localhost:2375/
  services:
    - name: docker:20.10.22-dind
      command: ["--tls=false"]
  tags:
    - dind
  image: $CI_REGISTRY_IMAGE:latest
This job starts properly, BUT fails at the volume sharing.
The question Docker in Docker cannot mount volume already raised this issue with volume sharing via the shared docker socket: the docker volume is shared from the HOST (i.e. from my runner). But that data is unknown to my host; it is only meant to be shared between the integration-test container and the other containers.
As Olivier wrote in that question, for a
host: H
docker container running on H: D
docker container running in D: D2
the docker compose with volume sharing would be equivalent to
docker run ... -v <path-on-D>:<path-on-D2> ...
while only something equivalent to the following can run:
docker run ... -v <path-on-H>:<path-on-D2> ...
But I have no data on H to share; I just want to share data between D and D2!
Is the limitation of volume sharing from the HOST the same when using my docker-in-docker runner as with the shared socket?
If so, it seems I need to rework the infrastructure and the concept of volume sharing used here.
Some suggest Docker data volume containers.
Maybe I should make more use of named volumes.
Maybe tmpfs volumes? I need to check the data AFTER some containers have exited, but I don't know whether a container in "exited" status (not yet removed) still has its tmpfs volume available.
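For illustration, replacing the bind mounts with named volumes could look something like the following extract (the volume name is just an example); the data would then live inside the docker daemon that runs the containers rather than on the runner host:
services:
  runtime:
    volumes:
      - runtime-output:/runtime/output/
  integration-tests-dotnet:
    volumes:
      - runtime-output:/opt/runtime/output/
volumes:
  runtime-output: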
Is my analysis correct?
Any other suggestions?
I have a docker container built from the debian:latest image.
I need to execute a bash script that will start several services.
My host machine is Windows 10 and I'm using Docker Desktop. I've found configuration files on the
docker-desktop-data WSL2 drive in data\docker\containers\<container_name>
There are 2 config files there:
config.v2.json and hostconfig.json
I've edited the first of them and replaced:
"Entrypoint":null with "Entrypoint":["/bin/bash", "/opt/startup.sh"]
I did this while the container was stopped, but when I restarted it the script was not executed. When I opened the config.v2.json file again, the Entrypoint was set back to null.
I need to run this script at every container start.
An additional strange thing is that this container doesn't have any volume appearing in Docker Desktop. I could check out this container and start another one, but I need to preserve the current state of this container (installed packages, files, DB content). How can I change the entrypoint or run the script some other way?
Is there any way to export the container to an image along with its configuration? I need to expose several ports and run the startup script. Is there any way to make every new container created from that exported image expose the same ports and run the same startup script?
Docker's typical workflow involves containers that only run a single process, and are intrinsically temporary. You'd almost never create a container, manually set it up, and try to persist it; instead, you'd write a script called a Dockerfile that describes how to create a reusable image, and then launch some number of containers from that.
It's almost always preferable to launch multiple single-process containers rather than to try to run multiple processes in a single container. You can use a tool like Docker Compose to describe the multiple containers and record the various options you'd need to start them:
# docker-compose.yml
# Describe the file version. Required with the stable Python implementation
# of Compose. Most recent stable version of the file format.
version: '3.8'

# Persistent storage managed by Docker; will not be accessible on the host.
volumes:
  dbdata:

# Actual containers.
services:
  # The database.
  db:
    # Use a stock Docker Hub image.
    image: postgres:15
    # Persist its data.
    volumes:
      - dbdata:/var/lib/postgresql/data
    # Describe how to set up the initial database.
    environment:
      POSTGRES_PASSWORD: passw0rd
    # Make the container accessible from outside Docker (optional).
    ports:
      - '5432:5432' # first port: any available host port;
                    # second port MUST be the standard PostgreSQL port 5432
  # Reverse proxy / static asset server.
  nginx:
    image: nginx:1.23
    # Get static assets from the host system.
    volumes:
      - ./static:/usr/share/nginx/html
    # Make the container externally accessible.
    ports:
      - '8000:80'
You can check this file into source control with your application. Also consider adding a third service that uses build: to build an image containing the actual application code; that service probably will not need volumes:.
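A minimal sketch of what that third service might look like, as an extract that would sit under services: in the file above (the service name, port, and environment variables are assumptions):
  # The application itself, built from a Dockerfile in this directory.
  app:
    build: .
    environment:
      # Reach the database through its Compose service name, not localhost.
      PGHOST: db
      PGPASSWORD: passw0rd
    ports:
      - '3000:3000'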
docker-compose up -d will start this stack of containers (without -d, in the foreground). If you make a change to the docker-compose.yml file, re-running the same command will delete and recreate containers as required. Note that you are never running an unmodified debian image, nor are you manually running commands inside a container; the docker-compose.yml file completely describes the containers, their startup sequences (if not already built into the images), and any required runtime options.
Also see Networking in Compose for some details about how to make connections between containers: localhost from within a container will call out to that same container and not one of the other containers or the host system.
I have mounted a shared volume into my service main.
Now I am trying to mount that same volume into another container, client, which is started with docker-compose up client from within the main container (Docker-in-Docker):
version: "3.8"
# set COMPOSE_PROJECT_NAME=default before running `docker-compose up main`
services:
main:
image: rbird/docker-compose:python-3.9-slim-buster
privileged: true
entrypoint: docker-compose up client # start client
volumes:
- //var/run/docker.sock:/var/run/docker.sock
- ./docker-compose.yml:/docker-compose.yml
- ./shared:/shared
client:
image: alpine
entrypoint: sh -c "ls shared*"
profiles:
- do-not-run-directly
volumes:
- /shared:/shared1
- ./shared:/shared2
The output I get is:
[+] Running 2/2
- Network test_default Created 0.0s
- Container test_main_1 Started 0.9s
Attaching to main_1
Recreating default_client_1 ... done
Attaching to default_client_1
main_1 | client_1 | shared1:
main_1 | client_1 |
main_1 | client_1 | shared2:
main_1 | default_client_1 exited with code 0
main_1 exited with code 0
So the folders /shared1 and /shared2 are empty, although the shared directory contains files both on the host and in the main container.
How do I re-share volumes between containers?
Or is there a way to share a host directory between all containers, even the ones started by one of the containers?
The cleanest answer here is to delete the main: container and the profiles: block for the client: container, and run docker-compose on the host directly.
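Under that approach, the compose file shrinks to something like this sketch (both mounts kept so the comparison still works; ./shared is now resolved relative to the directory holding the file on the host):
version: "3.8"
services:
  client:
    image: alpine
    entrypoint: sh -c "ls /shared1 /shared2"
    volumes:
      - ./shared:/shared1
      - ./shared:/shared2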
The setup you have here uses the host's Docker socket. (It is not "Docker-in-Docker"; that setup generally is the even more confusing case of running a second Docker daemon in a container.) This means that the Docker Compose instance inside the container sends instructions to the host's Docker daemon telling it what containers to start. You're mounting the docker-compose.yml file in the container's root directory, so the ./shared path is interpreted relative to / as well.
This means the host's Docker daemon is receiving a request to create a container with /shared mounted on /shared1 inside the new container, and also with /shared (./shared, relative to the path /) mounted on /shared2. The host's Docker daemon creates this container using host paths. If you look on your host system, you will probably see an empty /shared directory in the host filesystem root, and if you create files there they will appear in the new container's /shared1 and /shared2 directories.
In general, there is no way to mount one container's filesystem into another. If you're trying to run docker (or docker-compose) from a container, you have to have external knowledge of which of your own filesystems are volume mounts and what exactly has been mounted.
If you can, avoid both the approaches of containers launching other containers and of sharing volumes between containers. If it's possible to launch another container, and that other container can mount arbitrary parts of the host filesystem, then you can pretty trivially root the entire host. In addition to the security concerns, the path complexities you note here are difficult to get around. Sharing volumes doesn't work well in non-Docker environments (in Kubernetes, for example, it's hard to get a ReadWriteMany volume and containers generally won't be on the same host as each other) and there are complexities around permissions and having multiple readers and writers on the same files.
Instead, launch docker and docker-compose commands on the host only (as a privileged user on a non-developer system). If one container needs one-way publishing of read-only content to another, like static assets, create a custom image that uses COPY --from= to copy from one image into the other. Otherwise, consider using purpose-built network-accessible storage (like a database) that doesn't specifically depend on a filesystem and knows how to handle concurrent access.
For a service I've defined a volume as follows (an extract of my yml file):
services:
  wordpress:
    volumes:
      - wp_data:/var/www/html
    networks:
      - wpsite

networks:
  wpsite:

volumes:
  wp_data:
    driver: local
I'm aware that on a Windows 10 filesystem the WP volumes won't be readily visible to me, as they'll exist within the Linux VM. Alternatively, I'd have to provide a path argument to be able to view my WP installation, e.g.
volumes:
  - ./mysql:/var/lib/mysql
But my question is: what is the point of the 'driver: local' option? Is it the default? I've tried with and without this option and can't see any difference.
Secondly, what does the following do? In my yml file I've commented it out with no ill effect that I can see!?
networks:
  wpsite:
First question:
The --driver or -d option defaults to local, so driver: local is redundant. On Windows, the local driver does not support any options. If you were running docker on a Linux machine, you would have some options; see the official documentation: https://docs.docker.com/engine/reference/commandline/volume_create/
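For illustration, on a Linux host the local driver does accept driver_opts, for example to bind an existing host directory into the volume (the path is an assumption):
volumes:
  wp_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/wp_data   # must already exist on the host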
Second question:
In each section networks:/volumes:/services: you basically declare the resources you need for your deployment.
In other words, creating an analogy with a virtual machine, you can think about it like this: you need to create a virtual disk named wp_data and a virtual network named wpsite.
Then you want your wordpress service to mount the wp_data disk under /var/www/html and to connect to the wpsite subnet.
You can use the following docker commands to display the resources that are created behind the scenes by your compose file:
docker ps - show containers
docker volume ls - show docker volumes
docker network ls - show docker networks
Hint: once you have created a network or a volume, it will not be destroyed automatically unless you manually delete it. You can clean up the resources manually and experiment by removing/adding resources in your compose file.
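For example, the manual clean-up can be done with commands like these (Compose normally prefixes resource names with the project/folder name):
docker-compose down --volumes - stop the stack and remove its containers, networks and named volumes
docker volume rm <name> - remove a specific volume
docker network rm <name> - remove a specific network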
Updated to answer question in comment:
If you run your docker on a Windows host, you have probably enabled Hyper-V. This allows Windows to create a Linux VM, on top of which your docker engine is running.
With the docker engine installed, docker can then create "virtual resources" such as virtual networks, virtual disks (volumes), containers (people often compare containers to VMs), services, etc.
Let's look at the following section from your compose file:
volumes:
  wp_data:
    driver: local
This will create a virtual disk managed by docker, named wp_data. The volume is not created directly on your Windows host file system; instead it is created inside the Linux VM that runs on top of Hyper-V on your Windows host. If you want to know precisely where, you can either execute docker inspect <containerID> and look at the mounts on that container, or run docker volume ls followed by docker volume inspect <volumeID> and look for the key "Mountpoint" to get the actual location.
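For example, assuming Compose named the volume wordpress_wp_data (the prefix depends on your project/folder name):
docker volume inspect wordpress_wp_data --format '{{ .Mountpoint }}'
This prints a path such as /var/lib/docker/volumes/wordpress_wp_data/_data, which lives inside the Linux VM rather than on the Windows filesystem.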
There seems to be sparse, conflicting information around on this subject. I'm new to Docker and need some help. I have several docker containers to run an application; some require different config files for local development than they do for production. I don't seem to be able to find a neat way to automate this with Docker.
My containers that include custom config are Nginx and Freeradius, and my code/data container runs Laravel and therefore requires a .env.php file (L4.2 at the moment).
I have tried Docker's environment variables in docker compose:
docker-compose.yml:
freeradius:
  env_file: ./env/freeradius.env
./env/freeradius.env:
DB_HOST=123.56.12.123
DB_DATABASE=my_database
DB_USER=me
DB_PASS=itsasecret
Except I can't pick those variables up in /etc/freeradius/mods-enabled/sql where they need to be.
How can I get Docker to run as a 'local' container with local config, or as a 'production' container with production config, without having to actually build different containers, and without having to attach to each container to manually configure them? I need this automated, as it will eventually be used on quite a large production environment with a large cluster of servers running many instances.
Happy to learn Ansible if this is how people achieve this.
If you can't use environment variables to configure the application (which is my understanding of the problem), then the other option is to use volumes to provide the config files.
You can use either "data volume containers" (which are containers with the sole purpose of sharing files and directories) with volumes_from, or you can use a named volume.
Data Volume container
If you go with the "data volume container" route, you would create a container holding all the environment's configuration files. Every service that needs a file uses volumes_from: - configs. In dev you'd have something like:
configs:
  build: dev-configs/
freeradius:
  volumes_from:
    - configs
The dev-configs directory will need a Dockerfile to build the image, which will have a bunch of VOLUME directives for all the config paths.
For production (and other environments) you can create an override file which replaces the configs service with a different container:
docker-compose.prod.yml:
configs:
  build: prod-configs/
You'll probably have other settings you want to change between dev and prod, which can go into this file as well. Then you run compose with the override file:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
You can learn more about this here: http://docs.docker.com/compose/extends/#multiple-compose-files
Named Volume
If you go with the "named volume" route, it's a bit easier to configure. On dev you create a volume with docker volume create thename and put some files into it. In your config you use it directly:
freeradius:
  volumes:
    - thename:/etc/freeradius/mods-enabled/sql
In production you'll either need to create that named volume on every host, or use a volume driver plugin that supports multihost (I believe flocker is one example of this).
Runtime configs using Dockerize
Finally, another option that doesn't involve volumes is to use https://github.com/jwilder/dockerize which lets you generate the configs at runtime from environment variables.
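As a rough sketch of that approach (the template path, config field names, and startup command below are assumptions, while the variable names come from the env file above): you write a template containing Go-template placeholders that dockerize renders from the environment when the container starts.
# sql.tmpl (hypothetical excerpt)
server = "{{ .Env.DB_HOST }}"
radius_db = "{{ .Env.DB_DATABASE }}"
login = "{{ .Env.DB_USER }}"
password = "{{ .Env.DB_PASS }}"
The container's command then becomes something like:
dockerize -template /templates/sql.tmpl:/etc/freeradius/mods-enabled/sql freeradius -X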