Copy data from docker-compose auto-created volume to new external volume - docker

When you use a volume in docker-compose, docker will create a volume named after the project (by default the directory containing the yml file) with the volume name appended: [directory containing yml]_[volume name]
/var/containers/my-important-server-v0.23/docker-compose.yml

volumes:
  server-db:
    # Volume name: my-important-server-v0.23_server-db
Now when you move and rename the directory containing docker-compose.yml for reasons, it will create a new volume with the new directory prepended to the volume name.
In order to prevent this, and allow multiple compose files to use the same volume, we should have created a volume manually:
docker volume create server-db

volumes:
  server-db:
    external: true
How can we transfer the files from my-important-server-v0.23_server-db to server-db?
This is what I tried; it seemed to make sense, but it doesn't work as expected.
docker volume create server-db
OLD=$(docker volume inspect my-important-server-v0.23_server-db | jq -r .[0].Mountpoint)
NEW=$(docker volume inspect server-db | jq -r .[0].Mountpoint)
sudo rsync -va $OLD/ $NEW/
Now here is the problem: the directories containing the MariaDB files end up with different sizes. Apparently you can't simply copy the files like that.
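One approach that avoids host mount points entirely (a sketch, not from the original question, reusing the volume names above) is to stop the compose project so nothing is writing to the database, then copy between the two volumes from inside a throwaway container:

# stop anything writing to the old volume (e.g. the MariaDB container)
docker-compose down

docker volume create server-db

# copy everything from the old volume into the new one
docker run --rm \
  -v my-important-server-v0.23_server-db:/from:ro \
  -v server-db:/to \
  alpine sh -c 'cd /from && cp -a . /to'

Copying with rsync between mountpoints can also work, but only while the database is stopped; and a difference in reported directory size alone is not necessarily an error, since tools like rsync only preserve sparse files when told to (e.g. rsync --sparse).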

Related

Shared volume between docker-compose files

I want to share a volume between 2 docker-compose files. There are 2 interconnected apps and I need to create a symlink between them.
I tried using named volumes and the external feature.
On the first container, I can see the contents of the /var/www/s1 folder, but the /var/www/s2 folder is empty, while on the second container I can see the contents of the /var/www/s2 folder, but the /var/www/s1 folder seems empty. Since I can't see the contents of the folder created by the other app in /var/www, I can't do a symlink.
I made some dummy docker-compose files to try to expose the problem in an easier way.
In /var/www/s1 there should be a "magazine.txt" file, while in /var/www/s2 there should be a "paper.txt" file.
The first docker-compose file looks like this:
services:
  nginx:
    image: nginx
    container_name: nginx
    volumes:
      - ../:/var/www/s1
      - shared-s:/var/www
volumes:
  shared-s:
    name: shared-s
The second docker-compose file looks like this:
version: '3.8'
services:
  php:
    image: php
    container_name: php
    command: tail -F /var/www/s2/paper.txt
    volumes:
      - ../:/var/www/s2
      - shared-s:/var/www
volumes:
  shared-s:
    external:
      name: shared-s
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
80b83a60a0e5 php "docker-php-entrypoi…" 2 seconds ago Up 1 second php
05addf1fc24e nginx "/docker-entrypoint.…" 9 seconds ago Up 8 seconds 80/tcp nginx
8c596d21cf7b portainer/portainer "/portainer" 2 hours ago Up About a minute 9000/tcp, 0.0.0.0:9001->9001/tcp portainer
$ docker exec -it 05addf1fc24e sh
# cd /var/www
# ls
s1 s2
# cd s1
# ls
docker magazine.txt
# cd ..
# cd s2
# ls
# exit
$ docker exec -it 80b83a60a0e5 sh
# cd /var/www
# ls
s1 s2
# cd s1
# ls
# cd ..
# cd s2
# ls
docker paper.txt
# exit
At a mechanical level, volumes and bind mounts don't "nest" in the way you're suggesting. The named volume shared-s will wind up containing only empty directories s1 and s2, but none of the content from either host directory.
What happens is something like this:
Docker starts (say) the nginx container first. It sorts the volumes: mounts on that container from shortest to longest.
Since the shared-s volume is empty, the content from the nginx base image in /var/www is copied to the volume; then the volume is mounted on /var/www in the container.
Docker creates the mount point /var/www/s1 (in the volume), then bind-mounts the host directory there (without modifying the volume at all).
Docker starts the php container and sorts its volumes: mounts the same way.
Since the shared-s volume is not empty, Docker just mounts it into the container, hiding any content that might have been in /var/www in the image.
Docker creates the mount point /var/www/s2 (in the volume), then bind-mounts the host directory there (without modifying the volume at all).
You'll notice a couple of problems with this sequence. Content from the other mounted volumes never gets copied into the "shared" volume, which breaks the file sharing you're attempting here. Whichever container starts up first copies content from its image into the shared volume, and whatever the other container's image had at that path never appears. For that matter, if there is an update in the base image, Docker will ignore it in favor of the (old) content that's already in the shared volume.
I'd suggest avoiding volumes here entirely. Build a separate image for each container, COPYing your application code into it. If you can use a static file server in the backend application, that will be much easier than trying to copy files from one container to the other. If that's not avoidable, you can use the COPY --from=image syntax that's normally used with multi-stage builds to also copy content from one built image to another.
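As a rough sketch of that last suggestion (the image tag my-nginx-app and the paths are hypothetical, not taken from the question):

# Dockerfile for the second image, assuming the first app's image
# was already built and tagged as my-nginx-app
FROM php:fpm
# copy the other app's built content out of its image instead of sharing a volume
COPY --from=my-nginx-app /var/www/s1 /var/www/s1
# this app's own code
COPY . /var/www/s2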

How to mount Docker directory into host directory with docker-compose

Imagine I have a Docker container containing some static data.
Now for development purposes I want the content of the container directory /resources mounted into my local working directory.
docker-compose.yml:
version: '3.2'
services:
  resources:
    image: <private_registry>/resources:latest
    volumes:
      - ./resources:/resources
When running docker-compose up the folder resources is created in my working directory, but it has no content, whereas the container has content in /resources/
When using a named volume and inspecting it, it works like expected.
Docker initializes the volume source with the content of your image only in a specific scenario:
It must be a named volume, not a host volume (mapping a path into the container)
The volume source must be empty, once there is data inside the directory it will not be changed by docker
Only on creation of the container (while the container is running it won't reinitialize the folder)
The option to disable the copy has not been set (this is the "nocopy" option in the compose file).
You are currently stuck at the first requirement, but it is possible to map any folder from the host into the container using a named volume that performs a bind mount. Here are some examples of three different ways to do that:
# create the volume in advance
$ docker volume create --driver local \
    --opt type=none \
    --opt device=/home/user/test \
    --opt o=bind \
    test_vol

# create on the fly with --mount
$ docker run -it --rm \
    --mount type=volume,dst=/container/path,volume-driver=local,volume-opt=type=none,volume-opt=o=bind,volume-opt=device=/home/user/test \
    foo

# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
...
Your example would look more like:
version: '3.2'
services:
  resources:
    image: <private_registry>/resources:latest
    volumes:
      - resources:/resources
volumes:
  resources:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /full/path/to/resources
Note that this directory must exist on the host in advance. The bind mount will fail without it, and unlike a host mount, docker will not create it for you.
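So before the first docker-compose up you would need something along the lines of:

# the bind source must exist ahead of time; docker will not create it for you
mkdir -p /full/path/to/resources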
There are a couple of things here. First, when you mount a host directory it 'shades' any existing content on the given path, effectively replacing it with the contents of the mount. So, your resources directory on your host is hiding any content in your container.
There is no easy solution to your problem. When I want to edit files in a container and on the host, I keep the files on the host and mount them in the container. If I want a copy of the container's files, I mount a host dir to a different dir in the container and arrange for the files to be copied.
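One way to arrange that copy without running the container at all (a sketch; the temporary container name resources-tmp is made up) is a one-off docker cp:

# create a stopped container just to read the image's filesystem
docker create --name resources-tmp <private_registry>/resources:latest
# copy the image's /resources into the current working directory
docker cp resources-tmp:/resources ./resources
# remove the temporary container
docker rm resources-tmp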

Mixing named volumes and bind mounting in Docker?

How does mixing named volumes and bind mounts work? With the following setup, will the paths that are being bind mounted still be visible inside the named volume, with the content they have on the host?
/var/www/html/wp-content/uploads
Using a separate container attached to the named volume seems to show that this is not the case, as those paths are completely empty from the view of the separate container. Is there a way to make this work?
volumes:
  - "wordpress:/var/www/html"
  - "./wordpress/uploads:/var/www/html/wp-content/uploads"
  - "./wordpress/plugins:/var/www/html/wp-content/plugins"
  - "./wordpress/themes:/var/www/html/wp-content/themes"
Host volumes: For a host volume, defined with a path in your docker compose file like:
volumes:
  - "./wordpress/uploads:/var/www/html/wp-content/uploads"
you will not receive any initialization of the host directory from the image contents. This is by design.
Named volumes: You can define a named volume that maps back to a local directory:
version: "2"
services:
your-service:
volumes:
- uploads:/var/www/html/wp-content/uploads
volumes:
uploads:
driver: local
driver_opts:
type: none
o: bind
device: /path/on/host/to/wordpress/uploads
This will provide the initialization properties of a named volume. When your host directory is empty, on container creation docker will copy the contents of the image at /var/www/html/wp-content/uploads to /path/on/host/to/wordpress/uploads.
Nested mounts with Docker: If you have multiple nested volume mounts, docker will still copy from the image directory contents, not from a parent volume.
Here's an example of that initialization. Starting with the filesystem:
testvol/
  data-image/
    sub-dir/
      from-image
  data-submount/
  Dockerfile
  docker-compose.yml
The Dockerfile contains:
FROM busybox
COPY data-image/ /data
The docker-compose.yml contains:
version: "2"
services:
test:
build: .
image: test-vol
command: find /data
volumes:
- data:/data
- subdir:/data/sub-dir
volumes:
data:
subdir:
driver: local
driver_opts:
type: none
o: bind
device: /path/on/host/test-vol/data-submount
And the named volume has been initialized:
$ docker run -it --rm -v testvol_data:/data busybox find /data
/data
/data/sub-dir
/data/sub-dir/from-named-vol
Running the test shows the copy comes from the image (from-image) rather than from the named volume (from-named-vol):
$ docker-compose -f docker-compose.bind.yml up
...
Attaching to testvol_test_1
test_1 | /data
test_1 | /data/sub-dir
test_1 | /data/sub-dir/from-image
testvol_test_1 exited with code 0
And docker has copied this to the host filesystem:
$ ls -l data-submount/
total 0
-rw-r--r-- 1 root root 0 Jan 15 08:08 from-image
Nested mounts in Linux: From your question, there appears to be some confusion on how a mount itself works in Linux. Each volume mount runs in the container's mount namespace. This namespace gives the container its own view of a filesystem tree. When you mount a volume into that tree, you do not modify the contents from the parent filesystem, it simply covers up the contents of the parent at that location. All changes happen directly in that newly mounted directory, and if you were to unmount it, the parent directories will then be visible in their original state.
Therefore, if you mount two nested directories in one container, e.g. /data and /data/a, and then mount /data in a second container, you will not see /data/a from your first container in the second container; you will only see the contents of the /data volume itself, including the empty directories that served as mount points.
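A quick illustration of that covering behavior on a plain Linux host (a sketch that needs root; the paths are made up):

mkdir -p /tmp/demo
touch /tmp/demo/file-on-parent
ls /tmp/demo                      # file-on-parent

# mount a tmpfs over the directory: the original content is covered, not removed
sudo mount -t tmpfs tmpfs /tmp/demo
ls /tmp/demo                      # empty

# unmount it and the original content is visible again, unchanged
sudo umount /tmp/demo
ls /tmp/demo                      # file-on-parent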
I believe the answer is to configure bind propagation. I will report back.
Edit: it seems you can only configure bind propagation on bind-mounted volumes, and only on a Linux host system.
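For reference, bind propagation is configured with the long-form volume syntax in a compose file, something like the sketch below (only meaningful for bind mounts on a Linux host, and the propagation mode shown is just an example):

services:
  your-service:
    volumes:
      - type: bind
        source: ./wordpress/uploads
        target: /var/www/html/wp-content/uploads
        bind:
          propagation: rshared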
I've tried to get this to work for hours, but I've come to the conclusion that it just won't. My case was adding a specific plugin to a CMS as a volume for local development. I want to post this here because I haven't come across this workaround anywhere.
So the following would suffer from the overlapping volumes issue, causing the folders to be empty.
services:
  your-service:
    volumes:
      - web-data:/var/www/html
      - ./wordpress/plugins:/var/www/html/wp-content/plugins
      - ./wordpress/themes:/var/www/html/wp-content/themes
This is how you avoid that, by binding your themes and plugins to a different directory, not inside /var/www/html.
services:
  your-service:
    volumes:
      - web-data:/var/www/html
      - ./wordpress/plugins:/tmp/plugins
      - ./wordpress/themes:/tmp/themes
But now you have to get these files in the correct place, and have them still be in sync with the files on your host.
Simple version
Note: These examples assume you have a shell script as your entrypoint.
In your Docker entrypoint:
#!/bin/bash
ln -s /tmp/plugins/my-plugin /var/www/html/wp-content/plugins/my-plugin
ln -s /tmp/themes/my-theme /var/www/html/wp-content/themes/my-theme
This should work as long as your system/software resolves symlinks.
More modular solution
I only wrote this for plugins, but you could process themes the same way. This finds all plugins in the /tmp/plugins folder and symlinks them to /var/www/html/wp-content/plugins/<plugin>, without you having to write hard-coded folder/plugin names in your script.
#!/bin/bash

TMP_PLUGINS_DIR="/tmp/plugins"
CMS_PLUGINS_DIR="/var/www/html/wp-content/plugins"

# Loop through all paths in the /tmp/plugins folder.
for path in "$TMP_PLUGINS_DIR"/*/; do
  # Ignore anything that's not a directory.
  [ -d "${path}" ] || continue
  # Get the plugin name from the path.
  plugin="$(basename "${path}")"
  # Symlink the plugin to the real plugins folder.
  ln -sf "$TMP_PLUGINS_DIR/$plugin" "$CMS_PLUGINS_DIR/$plugin"
  # Anything else you might need to do for each plugin, like installing/enabling it in your CMS.
done
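To wire either script in, the Dockerfile could look something like this (a hypothetical sketch, assuming the script is saved as docker-entrypoint.sh next to the Dockerfile and that the image normally runs apache2-foreground, as the official WordPress image does):

# the entrypoint script must end with `exec "$@"` so the CMD below still runs
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
CMD ["apache2-foreground"]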

Same volume for multiple containers

Currently I am working on a task that needs to create 4-5 different docker containers. The catch is that all these containers should use the same volume mounted inside them.
I am building the images using individual Dockerfiles and then running the containers from these images. The preferable way seems to be to use VOLUME in the Dockerfiles, but I am not sure exactly how to use it.
Here's a code snippet of a Dockerfile -
FROM [imagename]
WORKDIR \app
VOLUME \app\[container's folder] #this folder is separate for each container. eg. VOLUME \app\sql
COPY shell.sh \app\[container's folder]\shell.sh
.
.
rest of code
After running the containers and opening a shell inside them, each container's files are stored at its own path; e.g. as in the above snippet, shell.sh is present at \app\[container's folder].
But when I check in another container, I can see that container's files at its particular path, but I can't see the first container's files.
This is how I want the structure -
app
|- sql
|  |- shell-script files
|- tsdb
|  |- shell-script files
|- kernel
|  |- shell-script files
.
.
OR
/app/sql/shellsql.sh
/app/tsdb/shelltsdb.sh
/app/kernel/shellkernel.sh
Assuming you have the following volumes for the containers:
VOLUME /app/container1
VOLUME /app/container2
VOLUME /app/container3
The way to share the volumes is as follows. Create 3 volumes from the command line:
docker volume create vol1
docker volume create vol2
docker volume create vol3
When running each container, mount all the volumes:
docker run -v vol1:/app/container1 -v vol2:/app/container2 -v vol3:/app/container3 <image1>
docker run -v vol1:/app/container1 -v vol2:/app/container2 -v vol3:/app/container3 <image2>
...
Why are you not using docker-compose?
docker-compose is a better way to run all the containers with one command (you can also run a specific container). In docker-compose you can easily create the configuration (mount volumes, link containers to each other, networks, etc.).
Please have a look at a sample docker-compose file:
container1:
  build: image_name
  volumes_from:
    - data_volume
container2:
  build: vra_manager2
  volumes_from:
    - data_volume
With the above example, both containers use the same data_volume. You don't have to write a big command for each container.
For example:
volumes:
  - /vol1:/app/container1
volumes_from:
  - data_volume
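Note that volumes_from only exists in the version 1 and 2 compose file formats. A rough equivalent in the version 3 format (a sketch; the service names, build contexts and mount path are assumed) declares a top-level named volume and mounts it in every service:

version: "3.8"
services:
  container1:
    build: .
    volumes:
      - data_volume:/app/shared
  container2:
    build: .
    volumes:
      - data_volume:/app/shared
volumes:
  data_volume: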

Is there a way to tag or name volume instances using docker compose?

When using docker compose, I find a lot of volume instances:
› docker volume ls
DRIVER VOLUME NAME
local 4a34b9a352a459171137aac4c046a83f61e6e325b1df4b67dc2ddda8439a6427
local 6ce3e52ea363441b2c9d4b04c26b283d8b4cf631a137987da88db812a9a2d223
local a7af289b29c833510f2201647266001e4746e206128dc63313fe894821fa044d
local fb09475f75fe943671a4e73d76c09c27a4f592b8ddf62224fc4b20afa0095809
I'd like to tag or name them, then reuse them if possible rather than recreating them each time.
Is that possible?
Those are anonymous container volumes that happen when you define a volume without a name or bind it to a host folder. This may be with the VOLUME definition in your Dockerfile, a docker run -v /dir ... rather than name:/dir, or a volumes entry in your docker-compose.yml with only the directory. An example of a compose file that does a named mount is:
version: '2'
volumes:
  my-vol:
    driver: local
services:
  my-container:
    image: my-image
    volumes:
      - my-vol:/container/path
Once the anonymous volume has been created, there's no easy way to rename it. The easiest solution is to mount the anonymous volume along with your target named volume and do a copy, e.g.:
docker run -v 123456789:/source -v my-vol:/target --rm \
busybox cp -av /source/. /target/
Where 123456789 is the long name of your anonymous volume.
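If you're not sure which anonymous volume belongs to which container, one way to look it up (a sketch; my-container is a placeholder name) is with docker inspect and a Go template:

# print the volume name and mount destination for each mount of a container
docker inspect -f '{{ range .Mounts }}{{ .Name }} => {{ .Destination }}{{ println }}{{ end }}' my-container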
