How do I re-share volumes between Docker-in-Docker containers? - docker

I have mounted a host directory, shared, into my service main.
Now I am trying to mount that same directory into another container, client, which is started with docker-compose up client from within the main container (Docker-in-Docker):
version: "3.8"
# set COMPOSE_PROJECT_NAME=default before running `docker-compose up main`
services:
main:
image: rbird/docker-compose:python-3.9-slim-buster
privileged: true
entrypoint: docker-compose up client # start client
volumes:
- //var/run/docker.sock:/var/run/docker.sock
- ./docker-compose.yml:/docker-compose.yml
- ./shared:/shared
client:
image: alpine
entrypoint: sh -c "ls shared*"
profiles:
- do-not-run-directly
volumes:
- /shared:/shared1
- ./shared:/shared2
The output I get is:
[+] Running 2/2
- Network test_default Created 0.0s
- Container test_main_1 Started 0.9s
Attaching to main_1
Recreating default_client_1 ... done
Attaching to default_client_1
main_1 | client_1 | shared1:
main_1 | client_1 |
main_1 | client_1 | shared2:
main_1 | default_client_1 exited with code 0
main_1 exited with code 0
So the folders /shared1 and /shared2 are empty, even though the corresponding directories contain files on the host as well as in the main container.
How do I re-share volumes between containers?
Or is there a way to share a host directory between all containers, even the ones started by one of the containers?

The cleanest answer here is to delete the main: container and the profiles: block for the client: container, and run docker-compose on the host directly.
The setup you have here uses the host's Docker socket. (It is not "Docker-in-Docker"; that name usually refers to the even more confusing case of running a second Docker daemon inside a container.) This means that the Docker Compose instance inside the container sends instructions to the host's Docker daemon telling it which containers to start. You're mounting the docker-compose.yml file in the container's root directory, so the ./shared path is interpreted relative to / as well.
This means the host's Docker daemon is receiving a request to create a container with /shared mounted on /shared1 inside the new container, and also with /shared (./shared, relative to the path /) mounted on /shared2. The host's Docker daemon creates this container using host paths. If you look on your host system, you will probably see an empty /shared directory in the host filesystem root, and if you create files there they will appear in the new container's /shared1 and /shared2 directories.
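A minimal sketch of what the host daemon is effectively asked to do for the client service, expressed as a plain docker run on the host (with the paths already resolved by the Compose process inside the main container):

docker run --rm \
  -v /shared:/shared1 \
  -v /shared:/shared2 \
  alpine sh -c "ls shared*"

Both source paths resolve to /shared in the host's root filesystem, not to the ./shared directory next to your original docker-compose.yml.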
In general, there is no way to mount one container's filesystem into another. If you're trying to run docker (or docker-compose) from a container, you have to have external knowledge of which parts of your own filesystem are volume mounts and what exactly has been mounted there.
If you can, avoid both the approaches of containers launching other containers and of sharing volumes between containers. If it's possible to launch another container, and that other container can mount arbitrary parts of the host filesystem, then you can pretty trivially root the entire host. In addition to the security concerns, the path complexities you note here are difficult to get around. Sharing volumes doesn't work well in non-Docker environments (in Kubernetes, for example, it's hard to get a ReadWriteMany volume and containers generally won't be on the same host as each other) and there are complexities around permissions and having multiple readers and writers on the same files.
Instead, launch docker and docker-compose commands on the host only (as a privileged user on a non-developer system). If one container needs one-way publishing of read-only content to another, like static assets, build a custom image that uses COPY --from= to copy the content from one image into the other. Otherwise consider using purpose-built network-accessible storage (like a database) that doesn't specifically depend on a filesystem and knows how to handle concurrent access.
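For example, a minimal sketch of the COPY --from= approach (the build stage, paths, and image tags here are hypothetical):

# Build the static assets in one stage, then copy them into the server image,
# instead of sharing a volume between two running containers.
FROM node:18 AS build
WORKDIR /app
COPY . .
RUN npm run build            # assumed to produce /app/dist

FROM nginx:1.23
COPY --from=build /app/dist /usr/share/nginx/html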

Related

Remote & Local Volume Paths on Docker Machine

I have three nodes with Docker in swarm mode. I deploy from my machine, using contexts to target the remote host.
I have the following docker-compose.yml:
version: "3.9"
services:
...
nginx:
image: 'nginx:1.23.3-alpine'
ports:
- 8080:80
volumes:
- ./conf:/etc/nginx/conf.d
depends_on:
- ui
...
How can I deliver the ./conf directory to one of the Docker hosts? I found an outdated and inconvenient way, but are there more recent solutions (declaring it directly in the docker-compose.yml)?
The simple answer is, you don't. Docker Compose does not support this directly.
However, there are options that involve varying amounts of refactoring of your deployment process and they include:
Create an absolute conf path (/mnt/conf, etc.) on the remote server and reference that. Deliver the files there by some other process (scp, etc.).
Create a "conf" volume remotely and populate it using a Docker image that you build to carry the files. (There is a syntax to mount a filesystem from another container; I don't know if it's Compose-compatible, but you could simply mount a container that you build with the contents of ./conf. You will need a registry to store the image so you can build it locally but reference it remotely; registry:2 is easy to deploy.)
If "conf" contains only one or a few files, enable swarm mode remotely and mount the individual files as Docker configs. This means using docker -c remote stack deploy rather than docker -c remote compose up.
Share "conf" from an NFS server and declare the volume using the docker local volume driver's options, which support NFS (and other fstab-compatible) mounts; a compose sketch of this option follows the list. Alternatively, put the files in an S3 bucket (on AWS, or using a product like MinIO) and use the same syntax with the "s3fs" FUSE driver (if you don't use a containerised FUSE driver, the driver will need to be installed on the remote host).
Use an actual docker volume plugin (e.g. https://rclone.org/docker/) to mount a wide variety of network shares into a compose or swarm service.
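A minimal sketch of the NFS option, using the local volume driver's NFS support (the server address and export path are placeholders):

version: "3.9"
volumes:
  conf:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.0.2.10,rw"
      device: ":/exports/conf"
services:
  nginx:
    image: 'nginx:1.23.3-alpine'
    volumes:
      - conf:/etc/nginx/conf.d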

Docker editing entrypoint of existing container

I have a Docker container built from the debian:latest image.
I need to execute a bash script that will start several services.
My host machine is Windows 10 and I'm using Docker Desktop. I've found configuration files on the
docker-desktop-data WSL2 drive in data\docker\containers\<container_name>
There are 2 config files there:
config.v2.json and hostconfig.json
I've edited the first of them and replaced:
"Entrypoint":null with "Entrypoint":["/bin/bash", "/opt/startup.sh"]
I did this while the container was stopped, but when I restarted it the script was not executed. When I opened the config.v2.json file again, the Entrypoint was set back to null.
I need to run this script at every container start.
Another strange thing is that this container doesn't show any volumes in Docker Desktop. I could check out this container and start another one, but I need to preserve the current state of this container (installed packages, files, DB content). How can I change the entrypoint or run the script some other way?
Is there any way to export the container to an image along with its configuration? I need to expose several ports and run the startup script. Is there any way to make every new container created from the exported image expose the same ports and run the same startup script?
Docker's typical workflow involves containers that only run a single process, and are intrinsically temporary. You'd almost never create a container, manually set it up, and try to persist it; instead, you'd write a script called a Dockerfile that describes how to create a reusable image, and then launch some number of containers from that.
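A minimal sketch of what such a Dockerfile might look like (the package names and the script path are placeholders for whatever you previously set up by hand):

FROM debian:bullseye
# Install the services you previously installed manually inside the container.
RUN apt-get update \
 && apt-get install -y --no-install-recommends some-package another-package \
 && rm -rf /var/lib/apt/lists/*
# Copy in the startup script and make it the main container process.
COPY startup.sh /opt/startup.sh
ENTRYPOINT ["/bin/bash", "/opt/startup.sh"]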
It's almost always preferable to launch multiple single-process containers than to try to run multiple processes in a single container. You can use a tool like Docker Compose to describe the multiple containers and record the various options you'd need to start them:
# docker-compose.yml
# Describe the file version. Required with the stable Python implementation
# of Compose. Most recent stable version of the file format.
version: '3.8'

# Persistent storage managed by Docker; will not be accessible on the host.
volumes:
  dbdata:

# Actual containers.
services:
  # The database.
  db:
    # Use a stock Docker Hub image.
    image: postgres:15
    # Persist its data.
    volumes:
      - dbdata:/var/lib/postgresql/data
    # Describe how to set up the initial database.
    environment:
      POSTGRES_PASSWORD: passw0rd
    # Make the container accessible from outside Docker (optional).
    ports:
      - '5432:5432' # first port: any available host port;
                    # second port MUST be standard PostgreSQL port 5432

  # Reverse proxy / static asset server
  nginx:
    image: nginx:1.23
    # Get static assets from the host system.
    volumes:
      - ./static:/usr/share/nginx/html
    # Make the container externally accessible.
    ports:
      - '8000:80'
You can check this file into source control with your application. Also consider adding a third service that uses build: to create an image containing the actual application code; that service probably will not need volumes:.
docker-compose up -d will start this stack of containers (without -d, in the foreground). If you make a change to the docker-compose.yml file, re-running the same command will delete and recreate containers as required. Note that you are never running an unmodified debian image, nor are you manually running commands inside a container; the docker-compose.yml file completely describes the containers, their startup sequences (if not already built into the images), and any required runtime options.
Also see Networking in Compose for some details about how to make connections between containers: localhost from within a container will call out to that same container and not one of the other containers or the host system.
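For instance, a hypothetical app service added under services: in the file above would reach the database at the Compose service name db, not at localhost:

  app:
    build: .
    environment:
      # the hostname "db" resolves to the database container on the Compose network
      DATABASE_URL: postgres://postgres:passw0rd@db:5432/postgres
    depends_on:
      - db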

Jenkins in Docker: clarification about bind mounts in pipelines

I'm running Jenkins in a Docker container. Following this article, I'm bind mounting the Docker socket in order to interact with it from the dockerized Jenkins. I'm also bind mounting the container directory jenkins_home. Here is a quick recap on my volumes:
# Jenkins
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
  - ./bar:/var/jenkins_home
I run this from the directory /home/foo/ of the host, therefore the following directory is created in the host file system (and mounted):
/home/foo/bar
Now, I have a Jenkins pipeline (mypipe) that runs a docker-compose file spinning up a MySQL container with the following volume:
# MySQL created from Jenkins
volumes:
  - ./data:/var/lib/mysql
Weirdly enough, it ends up mounting:
/var/jenkins_home/workspace/mypipe/data (host) → /var/lib/mysql (container)
instead of:
/home/foo/bar/workspace/mypipe/data (host) → /var/lib/mysql (container)
Searching stackoverflow, it turned out that it happens since:
The volume source path (left of :) does not refer to the middle container, but to the host filesystem!
And that's ok, but my question is:
Why there?
I mean, why is ./data translated into exactly the path /var/jenkins_home/workspace/…/data, given that the MySQL container is not aware of the path /var/jenkins_home?
When Docker creates a bind mount, it is always from an absolute path in the host filesystem to an absolute path in the container filesystem.
When your docker-compose.yml names a relative path, Compose first expands that path before handing it off to the Docker daemon. In your example, you're trying to bind-mount ./data from a file at /var/jenkins_home/workspace/mypipe/docker-compose.yml, so Compose fills in the absolute path you see when it invokes the Docker API. Compose has no idea that the current directory is actually a bind mount from a different path in the Docker daemon's context.
If you look in the Jenkins logs at what scripted-pipeline invocations like docker.inside { ... } do, you'll see that they mount the workspace directory at an identical path inside the container they launch. Probably the easiest way to work around the mapping problem you're having is to use an identical /var/jenkins_home path on the host system, so the filesystem path is the same in every context.
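A sketch of that workaround, changing only the Jenkins home mount from the snippet above (it assumes a /var/jenkins_home directory exists on the host):

# Jenkins
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
  - /var/jenkins_home:/var/jenkins_home   # same path on the host and in the container

With this, ./data in the workspace expands to /var/jenkins_home/workspace/mypipe/data in both the Jenkins container's view and the host daemon's view, so the MySQL bind mount lands where you expect.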

How to avoid the "Docker cannot link to a non running container" error when the external-linked container is actually running using docker-compose

What we want to do:
We want to use docker-compose to link one already running container (A) to another container (B) by container name. We use "external-link" as both containers are started from different docker-compose.yml files.
Problem:
Container B fails to start with the error although a container with that name is running.
ERROR: for container_b Cannot start service container_b: Cannot link to a non running container: /PREVIOUSLY_LINKED_ID_container_a_1 AS /container_b_1/container_a_1
output of "docker ps":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
RUNNING_ID container_a "/docker-entrypoint.s" 15 minutes ago Up 15 minutes 5432/tcp container_a_1
Sample code:
docker-compose.yml of Container B:
container_b:
  external_links:
    - container_a_1
What makes this question different from the other "how to fix" questions:
we can't use "sudo service docker restart" (which works) as this is a production environment
We don't want to fix this every time manually but find the reason so that we can
understand what we are doing wrong
understand how to avoid this
Assumptions:
It seems like two instances of the container_a exist (RUNNING_ID and PREVIOUSLY_LINKED_ID)
This might happen because we
rebuilt the container via docker-compose build and
changed the forwarded external port of the container (80801:8080)
Comment
Do not use docker-compose down as suggested in the comments; this removes volumes!
Docker links are deprecated so unless you need some functionality they provide or are on an extremely old version of docker, I'd recommend switching to docker networks.
Since the containers you want to connect appear to be started in separate compose files, you would create that network externally:
docker network create app_net
Then in your docker-compose.yml files, you connect your containers to that network:
version: '3'

networks:
  app_net:
    external:
      name: app_net

services:
  container_a:
    # ...
    networks:
      - app_net
Then in your container_b, you would connect to container_a as "container_a", not "container_a_1".
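The compose file for the other stack would look much the same; as a sketch (the rest of container_b's configuration is elided):

version: '3'

networks:
  app_net:
    external:
      name: app_net

services:
  container_b:
    # ...
    networks:
      - app_net

With both services attached to app_net, Docker's built-in DNS lets container_b reach the other stack's service at the hostname container_a, and the external_links: entry can be dropped.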
As an aside, docker-compose down is not documented to remove volumes unless you pass the -v flag. Perhaps you are using anonymous volumes, in which case I'm not sure that docker-compose up would know where to find your data. A named volume is preferred. More than likely, your data was not being stored in a volume, which is dangerous and removes your ability to update your containers:
$ docker-compose down --help
By default, the only things removed are:

- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used

Networks and volumes defined as `external` are never removed.

Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                            custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes`
                        section of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file

Docker Anonymous Volumes

I've seen Docker volume definitions in docker-compose.yml files like so:
-v /path/on/host/modules:/var/www/html/modules
I noticed that in Drupal's official image, the docker-compose.yml file uses anonymous volumes.
Notice the comments:
volumes:
  - /var/www/html/modules
  - /var/www/html/profiles
  - /var/www/html/themes
  # this takes advantage of the feature in Docker that a new anonymous
  # volume (which is what we're creating here) will be initialized with the
  # existing content of the image at the same location
  - /var/www/html/sites
Is there a way to associate an anonymous volume with a path on the host machine after the container is running? If not, what is the point of having anonymous volumes?
Full docker-compose.yml example:
version: '3.1'

services:
  drupal:
    image: drupal:8.2-apache
    ports:
      - 8080:80
    volumes:
      - /var/www/html/modules
      - /var/www/html/profiles
      - /var/www/html/themes
      # this takes advantage of the feature in Docker that a new anonymous
      # volume (which is what we're creating here) will be initialized with the
      # existing content of the image at the same location
      - /var/www/html/sites
    restart: always

  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example
    restart: always
Adding a bit more info in response to a follow-up question/comment from #JeffRSon asking how anonymous volumes add flexibility, and also to answer this question from the OP:
Is there a way to associate an anonymous volume with a path on the host machine after the container is running? If not, what is the point of having anonymous volumes?
TL;DR: You can associate a specific anonymous volume with a running container via a 'data container', but that provides flexibility to cover a use case that is now much better served by the use of named volumes.
Anonymous volumes were helpful before the addition of volume management in Docker 1.9. Prior to that, you didn't have the option of naming a volume. With the 1.9 release, volumes became discrete, manageable objects with their own lifecycle.
Before 1.9, without the ability to name a volume, you had to reference it by first creating a data container
docker create -v /data --name datacontainer mysql
and then mounting the data container's anonymous volume into the container that needed access to the volume
docker run -d --volumes-from datacontainer --name dbinstance mysql
These days, it's better to use named volumes since they are much easier to manage and much more explicit.
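The named-volume equivalent of the two commands above is roughly the following (keeping the same /data mount point as the example; a real MySQL container would normally mount the volume at /var/lib/mysql):

docker volume create data
docker run -d -v data:/data --name dbinstance mysql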
Anonymous volumes are equivalent to having these directories declared with VOLUME in the image's Dockerfile. In fact, directories declared with VOLUME in a Dockerfile become anonymous volumes if they are not explicitly mapped to the host.
The point of having them is added flexibility.
P.S.:
Anonymous volumes already reside in the host somewhere in /var/lib/docker (or whatever directory you configured). To see where they are:
docker inspect --type container -f '{{range $i, $v := .Mounts }}{{printf "%v\n" $v}}{{end}}' $CONTAINER
Note: Substitute $CONTAINER with the container's name.
One possible use case for anonymous volumes these days is in combination with bind mounts: when you want to bind-mount a folder but exclude specific subfolders. Those subfolders can then be declared as named or anonymous volumes. This guarantees that the subfolders exist inside the container's bound folder, even though they don't have to exist in the bound folder on the host machine at all.
For example, you can have your frontend Node.js project built in a container where the node_modules folder is needed, but you don't need that folder for your coding at all. You can then bind-mount your project folder into the container and declare the node_modules folder as an anonymous volume. The node_modules folder will then always be present in the container, even if you do not have it in your working folder on the host machine; a compose sketch follows.
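A minimal sketch of that pattern in a compose file (the service name and paths are made up for illustration):

services:
  frontend:
    build: .
    volumes:
      - ./frontend:/app        # bind mount: edit the source on the host
      - /app/node_modules      # anonymous volume: keeps the container's
                               # node_modules out of the host directory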
Not sure why Drupal developers suggest such settings. Anyways, I can think of two differences:
With named volumes you have a name that suggests to which project it belongs.
After docker-compose down && docker-compose up -d a new empty anonymous volume gets attached to the container. (But the old one doesn't disappear. docker doesn't delete volumes unless you tell it to.) With named volumes you'll get the volume that was attached to the container before docker-compose down.
As such, you probably don't want to put data you don't want to lose into an anonymous volume (like db or something). Again, they won't disappear by themselves. But after docker-compose down && docker-compose up -d && docker volume prune a named volume will survive.
For something less critical (like node_modules) I don't have strong argument for or against named volumes.
Is there a way to associate an anonymous volume with a path on the host machine after the container is running?
For that you need to change the settings, e.g. /var/www/html/modules -> ./modules:/var/www/html/modules, and do docker-compose up -d. But that will turn an anonymous volume into a bind mount. And you will need to copy the data from the volume to ./modules. Similarly, you can turn an anonymous volume into a named volume.
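As a sketch of that conversion (the container name drupal_1 is hypothetical; check docker ps for the real one):

# copy the existing content out of the running container first
docker cp drupal_1:/var/www/html/modules ./modules
# then change the compose entry from an anonymous volume to a bind mount:
#   - /var/www/html/modules   becomes   - ./modules:/var/www/html/modules
# and recreate the container
docker-compose up -d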
