I want to share a volume between 2 docker-compose files. There are 2 interconnected apps and I need to create a symlink between them.
I tried using named volumes and the external feature.
On the first container I can see the contents of /var/www/s1, but /var/www/s2 is empty; on the second container I can see the contents of /var/www/s2, but /var/www/s1 appears empty. Since neither container can see the folder created by the other app under /var/www, I can't create the symlink.
I made some dummy docker-compose files to try to expose the problem in an easier way.
In /var/www/s1 there should be a "magazine.txt" file, while in /var/www/s2 there should be a "paper.txt" file.
The first docker-compose file looks like this:
services:
  nginx:
    image: nginx
    container_name: nginx
    volumes:
      - ../:/var/www/s1
      - shared-s:/var/www
volumes:
  shared-s:
    name: shared-s
The second docker-compose file looks like this:
version: '3.8'
services:
  php:
    image: php
    container_name: php
    command: tail -F /var/www/s2/paper.txt
    volumes:
      - ../:/var/www/s2
      - shared-s:/var/www
volumes:
  shared-s:
    external:
      name: shared-s
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
80b83a60a0e5 php "docker-php-entrypoi…" 2 seconds ago Up 1 second php
05addf1fc24e nginx "/docker-entrypoint.…" 9 seconds ago Up 8 seconds 80/tcp nginx
8c596d21cf7b portainer/portainer "/portainer" 2 hours ago Up About a minute 9000/tcp, 0.0.0.0:9001->9001/tcp portainer
$ docker exec -it 05addf1fc24e sh
# cd /var/www
# ls
s1 s2
# cd s1
# ls
docker magazine.txt
# cd ..
# cd s2
# ls
# exit
$ docker exec -it 80b83a60a0e5 sh
# cd /var/www
# ls
s1 s2
# cd s1
# ls
# cd ..
# cd s2
# ls
docker paper.txt
# exit
At a mechanical level, volumes and bind mounts don't "nest" in the way you're suggesting. The named volume shared-s will wind up containing only empty directories s1 and s2, but none of the content from either host directory.
What happens is something like this:
Docker starts (say) the nginx container first. It sorts the volumes: mounts on that container from shortest to longest path.
Since the shared-s volume is empty, the content from the nginx base image in /var/www is copied to the volume; then the volume is mounted on /var/www in the container.
Docker creates the mount point /var/www/s1 (in the volume), then bind-mounts the host directory there (without modifying the volume at all).
Docker starts the php container and sorts its volumes: mounts the same way.
Since the shared-s volume is not empty, Docker just mounts it into the container, hiding any content that might have been in /var/www in the image.
Docker creates the mount point /var/www/s2 (in the volume), then bind-mounts the host directory there (without modifying the volume at all).
You'll notice a couple of problems with this sequence. Other mounted volumes' content never gets copied into the "shared" volume, which breaks the file sharing you're attempting here. Whichever container starts up first copies content from its image into the shared volume, and anything at those paths in the other container's image is never copied at all. For that matter, if there is an update in the base image, Docker will ignore it in favor of the (old) content that's already in the shared volume.
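To see this for yourself (a quick check, not part of the original question), you can list each container's mounts and look directly inside the named volume:

# show every volume and bind mount attached to each container
docker inspect -f '{{ json .Mounts }}' nginx
docker inspect -f '{{ json .Mounts }}' php

# look at what actually ended up inside the shared named volume
docker run --rm -v shared-s:/mnt busybox find /mnt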
I'd suggest avoiding volumes here entirely. Build a separate image for each container, COPYing your application code into it. If you can use a static file server in the backend application, that will be much easier than trying to copy files from one container to the other. If that's not avoidable, you can use the COPY --from=image syntax that's normally used with multi-stage builds to also copy content from one built image to another.
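As a rough sketch of that last suggestion (the image names here are hypothetical, not from the question), the nginx image could copy the other app's files out of its already-built image:

FROM nginx
# this app's own code
COPY ./ /var/www/s1
# files from the other app's already-built image (hypothetical name)
COPY --from=my-s2-image /var/www/s2 /var/www/s2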
I'm trying to understand volumes.
When I build and run this image with docker build -t myserver . and docker run -dp 8080:80 myserver, the web server on it prints "Hallo". When I change "Hallo" to "Huhu" in the Dockerfile and rebuild & run the image/container, it shows "Huhu". So far, no surprises.
Next, I added a docker-compose.yaml file that has two volumes. One volume is mounted on the existing path where the Dockerfile creates index.html. The other is mounted on a new and unused path. I build and run everything with docker compose up --build.
On the first build, the web server prints "Hallo" as expected. I can also see the two volumes and their contents in the Docker GUI. The index.html that was written to the image is now present in the volume. (I guess the volume gets mounted before the Dockerfile can write to it.)
On the second build (swap "Hallo" with "Huhu" and run docker compose up --build again) I was expecting the web server to print "Huhu". But it prints "Hallo". So I'm not sure why the data on the volume was not overwritten by the Dockerfile.
Can you explain?
Here are the files:
Dockerfile
FROM nginx
# First build
RUN echo "Hallo" > /usr/share/nginx/html/index.html
# Second build
# RUN echo "Huhu" > /usr/share/nginx/html/index.html
docker-compose.yaml
services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - html:/usr/share/nginx/html
      - persistent:/persistent
volumes:
  html:
  persistent:
There are three different cases here:
When you build the image, it knows nothing about volumes. Whatever string is in that RUN echo line is what gets stored in the image. Volumes are not mounted when you run the docker-compose build step, and the Dockerfile cannot write to a volume at all.
The first time you run a container with the volume mounted, and the first time only, if the volume is empty, Docker copies content from the mount point in the image into the volume. This only happens with named volumes and not bind mounts; it only happens on native Docker and not Kubernetes; the volume content is never updated at all after this happens.
The second time you run a container with the volume mounted, since the volume is already populated, the content from the volume hides the content in the image.
You routinely see setups that use named volumes to "pass through" content from the image (especially Node applications) or to "share files" with another container (frequently an Nginx server). These only work because Docker (and only Docker) automatically populates empty named volumes, and therefore they only work the first time. If you change your package.json, your Node application that mounts a volume over node_modules won't see updates; if you change the static assets that you're sharing with a Web server, the named volume will hide those changes in both the application and HTTP-server containers.
Since the named-volume auto-copy only happens in this one very specific case, I'd try to avoid using it, and more generally try to avoid mounting anything over non-empty directories in your image.
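Applied to the compose file above, that would mean dropping the html volume and only mounting over paths that are empty in the image, something along these lines (one option, not the only one):

services:
  web:
    build: .
    ports:
      - "8080:80"
    volumes:
      - persistent:/persistent   # empty in the image, so safe to mount over
volumes:
  persistent: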
I am very (read: very) new to Docker, so I am experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but its contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful because I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it is slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The image specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on Stack Overflow.
(Screenshot: files from the container, visible on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will cover the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it covers the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being covered, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command line access inside of the container. From there you can do ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec commands from the step above and see if the staged test.txt file is now inside the container. That gives you definitive evidence that when you mount a host directory, the data on your host takes the place of the data in the container.
With that being said, sharing a log directory this way will work: the volume path in the container is still covered by the host directory, but the difference is that the container writes to that path at runtime; it doesn't rely on it for config files or application files.
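For example, a log-sharing mount could look something like this (the storage/logs path is an assumption about where the app writes its logs; adjust it to your project):

services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    volumes:
      # the container writes log files here at runtime; the host just reads them
      - ./logs:/var/www/site/storage/logs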
Hope this helps.
Here is a simplified version of my docker-compose.yml (it's the volume in buggy-service that does not behave as I expect):
version: '3.4'
services:
  local-db:
    image: postgres:9.6
    environment:
      - DB_NAME=${DB_NAME}
      # other env vars (not important)
    ports:
      - 5432:5432
    volumes:
      - ~/.docker-volumes/${DB_NAME}/postgresql/data:/var/lib/postgresql/data
      - postgresql:/docker-entrypoint-initdb.d
  buggy-service:
    build:
      context: .
      dockerfile: Dockerfile.test
      target: buggy-image
      args:
        # bunch of args (not important)
    volumes:
      - /Users/me/temp:/temp
volumes:
  postgresql:
    driver_opts:
      type: none
      device: /Users/me/postgresql
      o: bind
If I do docker-compose -f docker-compose.yml up -d local-db, a container for it starts up automatically and I find that /Users/me/postgresql on the host machine (Mac OSX) binds correctly to /docker-entrypoint-initdb.d with content synced.
However, if I do docker-compose -f docker-compose.yml up --build -d buggy-service, a container does not start up automatically.
Question: How do I get buggy-service to behave like local-db, i.e., start up automatically with the required volume mounted?
Here's the stripped down version of Dockerfile.test referenced by buggy-service:
FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
# Bunch of ARG definitions (not important)
VOLUME /temp
# other stuff (not important)
ENTRYPOINT ["/bin/bash"]
# Other FROMs
Edit 1
A bit more info about what I’m trying to achieve...
The buggy-container I’m trying to get working runs .Net Core as the base image. Its purpose is to run dotnet test and generate coverage reports, which can then be consumed in the host, which may either be a local dev machine or a build server (in this case, BitBucket pipelines).
... followed by docker run -dit --name buggy-container buggy-image
This command creates a new container, not based on anything in the compose yml file. Without a volume specification, it will only get an anonymous volume, since you've defined the volume in the Dockerfile (I tend to recommend against defining a volume there). You can see the anonymous volumes with a docker volume ls command; they'll be the ones with a long unique id and no reference to what they belong to.
To define a host volume from docker run, you need the -v flag:
docker run -dit -v /Users/me/temp:/temp --name buggy-container buggy-image
From your now changed question, you have a new issue. Your container specifies a single command to run in the entrypoint:
ENTRYPOINT ["/bin/bash"]
When bash runs, it reads input from stdin. When that input ends, like when you run a container with no input attached, bash will exit. When the process your container runs exits, the container exits. From the details available, I can't tell you what that command should be, but a good starting point is to look at other images on docker hub that perform a similar task that you're trying to run, and look at the Dockerfile they use (many hub images point back to a GitHub repo with the full source).
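Purely as an illustrative sketch (the real command depends on your project; the flags and paths here are assumptions, not taken from the question), an entrypoint that runs the tests, writes results under /temp, and then exits might look like:

FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
WORKDIR /app
# ... copy the project and restore packages as before (omitted) ...
# run the test suite, drop the results in /temp, then exit
ENTRYPOINT ["dotnet", "test", "--results-directory", "/temp", "--logger", "trx"]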
How does mixing named volumes and bind mounts work? Using the following setup, will the paths that are bind-mounted still be visible, with their contents, inside the named volume?
/var/www/html/wp-content/uploads
Using a separate container attached to the named volume seems to show that it is not the case, as those paths are completely empty from the view of the separate container. Is there a way to make this work?
volumes:
  - "wordpress:/var/www/html"
  - "./wordpress/uploads:/var/www/html/wp-content/uploads"
  - "./wordpress/plugins:/var/www/html/wp-content/plugins"
  - "./wordpress/themes:/var/www/html/wp-content/themes"
Host volumes: For a host volume, defined with a path in your docker compose file like:
volumes:
  - "./wordpress/uploads:/var/www/html/wp-content/uploads"
you will not receive any initialization of the host directory from the image contents. This is by design.
Named volumes: You can define a named volume that maps back to a local directory:
version: "2"
services:
  your-service:
    volumes:
      - uploads:/var/www/html/wp-content/uploads
volumes:
  uploads:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /path/on/host/to/wordpress/uploads
This will provide the initialization properties of a named volume. When your host directory is empty, on container creation docker will copy the contents of the image at /var/www/html/wp-content/uploads to /path/on/host/to/wordpress/uploads.
Nested mounts with Docker: If you have multiple nested volume mounts, docker will still copy from the image directory contents, not from a parent volume.
Here's an example of that initialization. Starting with the filesystem:
testvol/
  data-image/
    sub-dir/
      from-image
  data-submount/
  Dockerfile
  docker-compose.yml
The Dockerfile contains:
FROM busybox
COPY data-image/ /data
The docker-compose.yml contains:
version: "2"
services:
  test:
    build: .
    image: test-vol
    command: find /data
    volumes:
      - data:/data
      - subdir:/data/sub-dir
volumes:
  data:
  subdir:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /path/on/host/test-vol/data-submount
And the parent named volume (testvol_data) has already been initialized with its own content:
$ docker run -it --rm -v testvol_data:/data busybox find /data
/data
/data/sub-dir
/data/sub-dir/from-named-vol
Running the test shows that the copy into the nested mount comes from the image (from-image) rather than from the parent named volume (from-named-vol):
$ docker-compose -f docker-compose.bind.yml up
...
Attaching to testvol_test_1
test_1 | /data
test_1 | /data/sub-dir
test_1 | /data/sub-dir/from-image
testvol_test_1 exited with code 0
And docker has copied this to the host filesystem:
$ ls -l data-submount/
total 0
-rw-r--r-- 1 root root 0 Jan 15 08:08 from-image
Nested mounts in Linux: From your question, there appears to be some confusion on how a mount itself works in Linux. Each volume mount runs in the container's mount namespace. This namespace gives the container its own view of a filesystem tree. When you mount a volume into that tree, you do not modify the contents from the parent filesystem, it simply covers up the contents of the parent at that location. All changes happen directly in that newly mounted directory, and if you were to unmount it, the parent directories will then be visible in their original state.
Therefore, if you mount two nested directories in one container, e.g. /data and /data/a, and then mount /data in a second container, you will not see /data/a from your first container in the second container. Only the contents of the /data volume will be there, including the empty directories that were used as mount points.
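You can see the same covering behaviour outside of Docker on a Linux host (a made-up illustration; requires root):

# create a directory with a file in it, then bind-mount an empty directory over it
mkdir -p /tmp/demo/parent/a /tmp/demo/other
touch /tmp/demo/parent/a/original-file
mount --bind /tmp/demo/other /tmp/demo/parent/a
ls /tmp/demo/parent/a    # appears empty: the mount covers original-file
umount /tmp/demo/parent/a
ls /tmp/demo/parent/a    # original-file is visible again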
I believe the answer is to configure bind propagation. Will report back.
Edit: it seems you can only configure bind propagation on bind mounts, and only on a Linux host system.
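For reference, this is roughly what the long-form bind syntax with propagation looks like in a compose file (Linux hosts only; whether it actually solves the nesting problem here is a separate question):

services:
  your-service:
    volumes:
      - type: bind
        source: ./wordpress/uploads
        target: /var/www/html/wp-content/uploads
        bind:
          propagation: rshared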
I've tried to get this to work for hours, but I've come to the conclusion that it just won't. My case was adding a specific plugin to a CMS as a volume for local development. I want to post this here because I haven't come across this workaround anywhere.
So the following would suffer from the overlapping volumes issue, causing the folders to be empty.
services:
  your-service:
    volumes:
      - web-data:/var/www/html
      - ./wordpress/plugins:/var/www/html/wp-content/plugins
      - ./wordpress/themes:/var/www/html/wp-content/themes
This is how you avoid that, by binding your themes and plugins to a different directory, not inside /var/www/html.
services:
  your-service:
    volumes:
      - web-data:/var/www/html
      - ./wordpress/plugins:/tmp/plugins
      - ./wordpress/themes:/tmp/themes
But now you have to get these files in the correct place, and have them still be in sync with the files on your host.
Simple version
Note: These examples assume you have a shell script as your entrypoint.
In your Docker entrypoint:
#!/bin/bash
ln -s /tmp/plugins/my-plugin /var/www/html/wp-content/plugins/my-plugin
ln -s /tmp/themes/my-theme /var/www/html/wp-content/themes/my-theme
This should work as long as your system/software resolves symlinks.
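One assumption worth calling out: if this script is the image's ENTRYPOINT, it should end by handing control back to whatever command the container was started with, for example:

# at the end of the entrypoint script, run the image's normal command
exec "$@"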
More modular solution
I only wrote this for plugins, but you could process themes the same way. This finds all plugins in the /tmp/plugins folder and symlinks them to /var/www/html/wp-content/plugins/<plugin>, without you having to write hard-coded folder/plugin names in your script.
#!/bin/bash
TMP_PLUGINS_DIR="/tmp/plugins"
CMS_PLUGINS_DIR="/var/www/html/wp-content/plugins"
# Loop through all paths in the /tmp/plugins folder.
for path in $TMP_PLUGINS_DIR/*/; do
# Ignore anything that's not a directory.
[ -d "${path}" ] || continue
# Get the plugin name from the path.
plugin="$(basename "${path}")"
# Symlink the plugin to the real plugins folder.
ln -sf "$TMP_PLUGINS_DIR/$plugin" "$CMS_PLUGINS_DIR/$plugin"
# Anything else you might need to do for each plugin, like installing/enabling it in your CMS.
done
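To wire this in, the script has to be set as the image's entrypoint. A sketch of the Dockerfile side (the CMD shown is only an example; use whatever your CMS image normally runs, e.g. apache2-foreground for the official WordPress image):

COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["apache2-foreground"]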
I realize other people have had similar questions but this uses v2 compose file format and I didn't find anything for that.
I want to make a very simple test app to play around with MemSQL but I can't get volumes to not get deleted after docker-compose down. If I've understood Docker Docs right, volumes shouldn't be deleted without explicitly telling it to. Everything seems to work with docker-compose up but after going down and then up again all data gets deleted from the database.
As recommended as a good practice, I'm using a separate memsqldata service as a separate data layer.
Here's my docker-compose.yml:
version: '2'
services:
  app:
    build: .
    links:
      - memsql
  memsql:
    image: memsql/quickstart
    volumes_from:
      - memsqldata
    ports:
      - "3306:3306"
      - "9000:9000"
  memsqldata:
    image: memsql/quickstart
    command: /bin/true
    volumes:
      - memsqldatavolume:/data
volumes:
  memsqldatavolume:
    driver: local
I realize this is an old and solved thread where the OP was pointing at a directory in the container rather than the volume they had mounted, but I wanted to clear up some of the misinformation I'm seeing.
docker-compose down does not remove volumes, you need to run docker-compose down -v if you also want to delete volumes. Here's the help text straight from docker-compose (note the "by default" list):
$ docker-compose down --help
Stops containers and removes containers, networks, volumes, and images
created by `up`.
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.
Usage: down [options]
Options:
...
-v, --volumes Remove named volumes declared in the `volumes` section
of the Compose file and anonymous volumes
attached to containers.
...
$ docker-compose --version
docker-compose version 1.12.0, build b31ff33
Here's a sample yml with a named volume to test and a dummy command:
$ cat docker-compose.vol-named.yml
version: '2'
volumes:
  data:
services:
  test:
    image: busybox
    command: tail -f /dev/null
    volumes:
      - data:/data
$ docker-compose -f docker-compose.vol-named.yml up -d
Creating volume "test_data" with default driver
Creating test_test_1
After starting the container, the volume is initialized empty since the image is empty at that location. I created a quick hello world in that location:
$ docker exec -it test_test_1 /bin/sh
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 4096 May 23 01:24 .
drwxr-xr-x 1 root root 4096 May 23 01:24 ..
/ # echo "hello volume" >/data/hello.txt
/ # ls -al /data
total 12
drwxr-xr-x 2 root root 4096 May 23 01:24 .
drwxr-xr-x 1 root root 4096 May 23 01:24 ..
-rw-r--r-- 1 root root 13 May 23 01:24 hello.txt
/ # cat /data/hello.txt
hello volume
/ # exit
The volume is visible outside of docker and is still there after a docker-compose down:
$ docker volume ls | grep test_
local test_data
$ docker-compose -f docker-compose.vol-named.yml down
Stopping test_test_1 ... done
Removing test_test_1 ... done
Removing network test_default
$ docker volume ls | grep test_
local test_data
Recreating the container uses the old volume with the file still visible inside:
$ docker-compose -f docker-compose.vol-named.yml up -d
Creating network "test_default" with the default driver
Creating test_test_1
$ docker exec -it test_test_1 /bin/sh
/ # cat /data/hello.txt
hello volume
/ # exit
And running a docker-compose down -v finally removes both the container and the volume:
$ docker-compose -f docker-compose.vol-named.yml down -v
Stopping test_test_1 ... done
Removing test_test_1 ... done
Removing network test_default
Removing volume test_data
$ docker volume ls | grep test_
$
If you find your data is only being persisted if you use a stop/start rather than a down/up, then your data is being stored in the container (or possibly an anonymous volume) rather than your named volume, and the container is not persistent. Make sure the location for your data inside the container is correct to avoid this.
To debug where data is being stored in your container, I'd recommend using docker diff on a container. That will show all of the files created, modified, or deleted inside that container which will be lost when the container is deleted. E.g.:
$ docker run --name test-diff busybox \
/bin/sh -c "echo hello docker >/etc/hello.conf"
$ docker diff test-diff
C /etc
A /etc/hello.conf
You are using docker-compose down, and if you look at the docs here:
Stop containers and remove containers, networks, volumes, and images
created by up. Only containers and networks are removed by default.
You are right, it should not remove volumes (by default). It may be a bug, or you may have changed the default configuration. But I think the right command for you is docker-compose stop. I will try to make some tests with simpler cases for the down command.
This was traced back to incorrect documentation from MemSQL. The MemSQL data path in the memsql/quickstart container is /memsql, not /var/lib/memsql as in a stand-alone installation (and in the MemSQL docs), and definitely not /data as somebody told me.
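In other words, assuming /memsql really is the data path in that image, the fix is just to point the named volume there:

  memsqldata:
    image: memsql/quickstart
    command: /bin/true
    volumes:
      - memsqldatavolume:/memsql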
Not sure if this helps or not. When you use docker-compose up -d, the images are pulled and the containers created. To stop the containers, use docker-compose stop; the containers will remain and can be restarted with docker-compose start.
I was using the up/down commands and kept losing my data until I tried stop/start, and now the data persists.
The simplest solution is to use docker-compose stop instead of docker-compose down. And then docker-compose start to restart.
According to the docs, down "stops containers and removes containers, networks, volumes, and images created by up."