How to read a docker command? - docker

I would like someone to assist me in reading the docker run command below:
docker run --rm \
--volumes-from myredis \
-v $PWD/backup:/backup \
debian \
cp /data/dump.rdb /backup/
I know it dumps Redis: it attaches the volumes from the container myredis and mounts the backup directory under my current working directory into the container. As for the rest of the command, I am having trouble interpreting it.
Thanks.

This command creates a backup of Redis: you are copying dump.rdb into the /backup directory on your host.
--rm means remove the container after it runs; it's usually a good way to keep your environment clean, because you cannot reuse this container once it has finished its work.
debian is the name of the image that you are using.
cp /data/dump.rdb /backup/ is the command that you are running inside your container.

Related

Copy docker volumes

I want to update some container. For testing, I want to create a copy of the corresponding volume. Set up a new container for this new volume.
Is this as easy as doing cp -r volumeOld volumeNew?
Or do I have to pay attention to something?
To clone docker volumes, you can transfer your files from one volume to another one. For that you have to manually create a new volume and then spin up a container to copy the contents.
Someone has already made a script for that, which you might use: https://github.com/gdiepen/docker-convenience-scripts/blob/master/docker_clone_volume.sh
If not, use the following commands (taken from the script):
# Supplement "old_volume" and "new_volume" for your real volume names
docker volume create --name new_volume
docker container run --rm -it \
-v old_volume:/from \
-v new_volume:/to \
alpine ash -c "cd /from ; cp -av . /to"
On Linux it can be as easy as copying a directory. Docker keeps volumes in /var/lib/docker/volumes/<volume_name>, so you can simply copy contents of the source volume into a directory with another name:
# -p to preserve permissions
sudo cp -rp /var/lib/docker/volumes/source_volume /var/lib/docker/volumes/target_volume
Should you want to copy volumes managed by docker-compose, you'll also need to copy the specific labels when creating the new volume.
Otherwise, docker-compose will throw something like Volume already exists but was not created by Docker Compose.
Extending the solution by MauriceNino, these lines worked for me:
# Supplement "proj1_vol1" and "proj2_vol2" for your real volume names
docker volume inspect proj1_vol1 # Look at labels of old volume
docker volume create \
--label com.docker.compose.project=proj2 \
--label com.docker.compose.version=2.2.1 \
--label com.docker.compose.volume=vol2 \
proj2_vol2
docker container run --rm -it \
-v proj1_vol1:/from \
-v proj2_vol2:/to \
alpine ash -c "cd /from ; cp -av . /to"
Btw, this also seems to be the only way to rename Docker volumes.
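For example, a rename would look roughly like this (a sketch with hypothetical names old_name and new_name; verify the copy before removing the old volume):
# "rename" a volume by cloning it and then removing the original
docker volume create new_name
docker container run --rm -v old_name:/from -v new_name:/to alpine ash -c "cd /from ; cp -av . /to"
docker volume rm old_name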
In my work I use this script to:
clone the container
clone all its volumes and copy contents from the old volumes to the new ones
run the new container (with an arbitrary new image)
reattach the new volumes to the new container at the same destinations as the old ones
However, the script makes some assumptions about the naming of the volumes, so please read the README instructions before applying it.

Where is the file I mounted at runtime in Docker?

I mounted my secret file secret.json at runtime into a local Docker container, and while it works, I can't seem to find this volume anywhere.
My Dockerfile looks like this and has no reference to the secret:
RUN mkdir ./app
ADD src/python ./app/src/python
ENTRYPOINT ["python"]
Then I ran
docker build -t {MY_IMAGE_NAME} .
docker run -t -v $PATH_TO_SECRET_FILE/:/secrets/secret.json \
-e MY_CREDENTIALS=/secrets/secret.json \
{MY_IMAGE_NAME} ./app/src/python/runner.py
This runs successfully locally but when I do
docker run --entrypoint "ls" {MY_IMAGE_NAME}
I don't see the volume secrets.
Also, if I run
docker volume ls
it doesn't have anything that looks like secrets.
Without the environment variable MY_CREDENTIALS the script won't run, so I am sure the secret file is mounted somewhere, but I can't figure out where it is. Any idea?
You are actually creating two separate containers with the commands you are running. The first docker run command creates a container from the image you have built, with the volume mounted, and then the second command creates a new container from the same image but without any volumes (as you don't define any in your command).
I'd suggest you give your container a name, like so:
docker run -t -v $PATH_TO_SECRET_FILE/:/secrets/secret.json \
-e MY_CREDENTIALS=/secrets/secret.json \
--name my_container {MY_IMAGE_NAME} ./app/src/python/runner.py
and then run exec on that container
docker exec -it my_container sh
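From that shell (or directly with exec) you should be able to see the bind-mounted file; a quick check, assuming the mount target used in the command above, would be:
# list the mount target inside the running container
docker exec my_container ls -l /secrets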

How can I access the /etc of a pulled servicemix image

I need to install a custom bundle in a dockerized servicemix image. To do so, I need to paste some files into the /etc directory of the servicemix image.
Could anyone help me do this?
I've tried using the Dockerfile as follows:
But it simply doesn't work. I've looked through the documentation of the image, and the author tells me to use the command: docker run --volumes-from servicemix-data -it ubuntu bash and inspect the /servicemix, but it's empty.
Dockerfile:
FROM dskow/apache-servicemix
WORKDIR .
COPY ./docs /apache-servicemix/etc
...
Command suggested by the author:
docker run --volumes-from servicemix-data -it ubuntu bash
I was unfamiliar with this approach but, having looked at the source (link), I think this is what you want to do:
Create a container called servicemix-data that will become your volume:
docker run --name servicemix-data -v /servicemix busybox
Confirm this worked:
docker container ls --format="{{.ID}}\t{{.Names}}" --all
42b3bc4dbedf servicemix-data
...
Then you want to copy the files into this container:
docker cp ./docs servicemix-data:/etc
Finally, run servicemix using this container (with your files) as the source for its data:
docker run \
--detach \
--name=servicemix \
--volumes-from=servicemix-data \
dskow/apache-servicemix
HTH!
Changes in the container will be lost unless they are committed back to the image.
You can use this Dockerfile https://hub.docker.com/r/mkroli/servicemix/dockerfile and add your COPY statement just before the ENTRYPOINT:
COPY ./docs /opt/apache-servicemix/etc

How can I add a volume to an existing Docker container?

I have a Docker container that I've created simply by installing Docker on Ubuntu and doing:
sudo docker run -i -t ubuntu /bin/bash
I immediately started installing Java and some other tools, spent some time with it, and stopped the container by
exit
Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker -v /somedir run ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume.
All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume.
Is this indeed the only way to add a volume to an existing container?
You can commit your existing container (that is, create a new image from the container's changes) and then run it with your new mounts.
Example:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS                          PORTS   NAMES
5a8f89adeead   ubuntu:14.04   "/bin/bash"   About a minute ago   Exited (0) About a minute ago           agitated_newton
$ docker commit 5a8f89adeead newimagename
$ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash
If it's all OK, stop your old container, and use this new one.
You can also commit a container using its name, for example:
docker commit agitated_newton newimagename
That's it :)
There is no way to add a volume to a running container, but to achieve this objective you may use the commands below.
Copy files/folders between a container and the local filesystem:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH
For reference see:
https://docs.docker.com/engine/reference/commandline/cp/
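For example (a sketch; mycontainer and the paths are placeholders):
# copy a directory out of a container to the host
docker cp mycontainer:/var/www/html ./html-backup
# copy a file from the host into a container
docker cp ./config.ini mycontainer:/etc/myapp/config.ini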
I've successfully mounted the /home/<user-name> folder of my host to the /mnt folder of an existing (not running) container. You can do it in the following way:
Open the configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (it may be config.json for older versions of Docker).
Find the MountPoints section, which was empty in my case: "MountPoints":{}. Next, replace its contents with something like this (you can copy the proper contents from another container with the correct settings):
"MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}}
or the same (formatted):
"MountPoints": {
"/mnt": {
"Source": "/home/<user-name>",
"Destination": "/mnt",
"RW": true,
"Name": "",
"Driver": "",
"Type": "bind",
"Propagation": "rprivate",
"Spec": {
"Type": "bind",
"Source": "/home/<user-name>",
"Target": "/mnt"
},
"SkipMountpointCreation": false
}
}
Restart the docker service: service docker restart
This works for me with Ubuntu 18.04.1 and Docker 18.09.0
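If you are not sure which directory under /var/lib/docker/containers belongs to your container, you can look up its full ID first (the directory name is the full container ID):
# print the full container ID for a given container name
docker inspect --format '{{.Id}}' <container-name>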
Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish.
As he also points out
This will not work on filesystems which are not based on block devices.
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).
Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that
YMMV
Unfortunately, the option to mount a volume is only available on the run command.
docker run --help
-v, --volume list Bind mount a volume (default [])
There is a way you can work around this, though, so you won't have to reinstall the applications you've already set up in your container.
Export your container
docker container export -o ./myimage.docker mycontainer
Import as an image
docker import ./myimage.docker myimage
Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash
A note on using Docker Windows containers, after I had to look into this problem for a long time!
Conditions:
Windows 10
Docker Desktop (latest version)
using Docker Windows Container for image microsoft/mssql-server-windows-developer
Problem:
I wanted to mount a host directory into my Windows container.
Solution, as partially described here:
create docker container
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
go to command shell in container
docker exec -it <CONTAINERID> cmd.exe
create DIR
mkdir DirForMount
stop container
docker container stop <CONTAINERID>
commit container
docker commit <CONTAINERID> <NEWIMAGENAME>
delete old container
docker container rm <CONTAINERID>
create new container with new image and volume mounting
docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME>
After this, the problem was solved for Docker Windows containers.
My answer is a little different. You can stop your container, add the volume, and restart it. To do it, follow these steps:
docker volume create ubuntu-volume
docker stop <container-name>
sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash
You can stop and remove the container, add the existing volume in a startup script, and restart from the image. If the already existing partitions keep the data, you shouldn't experience any loss of information. This should also work the same way with a Dockerfile and Docker Compose.
e.g. (solr image):
(initial script)
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Script with the second volume added:
#!/bin/sh
docker pull solr:8.5
docker stop my_solr
docker rm my_solr
docker create \
--name my_solr \
-v "/XXXX/docker/solr/solrdata":/var/solr \
-v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \
-p 8983:8983 \
--restart unless-stopped \
--user 1000:1000 \
-e SOLR_HEAP=1g \
--log-opt max-size=10m \
--log-opt max-file=3 \
solr:8.5
docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib
docker start my_solr
Use a symlink to the already mounted drive:
ln -s source_path target_path_which_is_already_mounted_on_the_running_docker
The best way is to copy all the files and folders into a directory on your local file system with: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH
SRC_PATH is on the container
DEST_PATH is on the localhost
Then do docker-compose down, attach a volume to the same DEST_PATH, and run the Docker containers by using docker-compose up -d
Add the volume as follows in docker-compose.yml:
volumes:
- DEST_PATH:SRC_PATH
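Put together, the flow might look roughly like this (a sketch; the container name, DEST_PATH and SRC_PATH are placeholders matching the description above):
# 1. copy the data out of the running container to the host
docker cp mycontainer:SRC_PATH DEST_PATH
# 2. stop the stack, then declare DEST_PATH:SRC_PATH under volumes: in docker-compose.yml
docker-compose down
# 3. bring the stack back up with the new volume definition
docker-compose up -d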

How to port data-only volumes from one host to another?

As described in the Docker documentation on Working with Volumes there is the concept of so-called data-only containers, which provide a volume that can be mounted into multiple other containers, no matter whether the data-only container is actually running or not.
Basically, this sounds awesome. But there is one thing I do not understand.
These volumes (which do not explicitly map to a folder on the host for portability reasons, as the documentation states) are created and managed by Docker in some internal folder on the host (/var/docker/volumes/…).
Suppose I use such a volume, and then I need to migrate it from one host to another - how do I port the volume? AFAICS it has a unique ID - can I just go and copy the volume and its corresponding data-only container to a new host? How do I find out which files to copy? Or is there some built-in support in Docker that I have not discovered yet?
The official answer is available in the section "Back up, restore, or migrate data volumes":
BACKUP:
sudo docker run --rm --volumes-from DATA -v $(pwd):/backup busybox tar cvf /backup/backup.tar /data
--rm: remove the container when it exits
--volumes-from DATA: attach to the volumes shared by the DATA container
-v $(pwd):/backup: bind mount the current directory into the container, to write the tar file to
busybox: a small, simple image - good for quick maintenance
tar cvf /backup/backup.tar /data: creates an uncompressed tar file of all the files in the /data directory
RESTORE:
# create a new data container
$ sudo docker create -v /data --name DATA2 busybox true
# untar the backup files into the new container's data volume
$ sudo docker run --rm --volumes-from DATA2 -v $(pwd):/backup busybox tar xvf /backup/backup.tar
data/
data/sven.txt
# compare to the original container
$ sudo docker run --rm --volumes-from DATA -v `pwd`:/backup busybox ls /data
sven.txt
Extending the official answer from the Docker docs and the top answer here, you can have the following functions in your .bashrc or .zshrc:
# backup files from a docker volume into /tmp/backup.tar.gz
function docker-volume-backup-compressed() {
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -czvf /backup/backup.tar.gz "${@:2}"
}
# restore files from /tmp/backup.tar.gz into a docker volume
function docker-volume-restore-compressed() {
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -xzvf /backup/backup.tar.gz "${@:2}"
echo "Double checking files..."
docker run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie ls -lh "${@:2}"
}
# backup files from a docker volume into /tmp/backup.tar
function docker-volume-backup() {
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -cvf /backup/backup.tar "${@:2}"
}
# restore files from /tmp/backup.tar into a docker volume
function docker-volume-restore() {
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -xvf /backup/backup.tar "${@:2}"
echo "Double checking files..."
docker run --rm -v /tmp:/backup --volumes-from "$1" busybox ls -lh "${@:2}"
}
Note that the backup is saved into /tmp, so you can move the backup file saved there between docker hosts.
There are also two pairs of backup/restore functions: one pair using compression and debian:jessie, and the other with no compression but with busybox. Favor compression if the files to back up are big.
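Usage would look something like this (a sketch; my_container is a placeholder for the container whose volumes you want to back up, and the remaining arguments are paths inside it):
# on the source host: archive /data from my_container into /tmp/backup.tar.gz
docker-volume-backup-compressed my_container /data
# move /tmp/backup.tar.gz to the other host (e.g. with scp), then restore there:
docker-volume-restore-compressed my_container /data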
You can export the volume to a tar file and transfer it to another machine, then import the data with tar on the second machine. This does not rely on implementation details of the volumes.
# you can list shared directories of the data container
docker inspect <data container> | grep "/vfs/dir/"
# you can export data container directory to tgz
docker run --cidfile=id.tmp --volumes-from <data container> ubuntu tar -cO <volume path> | gzip -c > volume.tgz
# clean up: remove exited container used for export and temporary file
docker rm `cat id.tmp` && rm -f id.tmp
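On the second machine, the import side could then look roughly like this (a sketch, assuming a data container with the same volume path has already been created there):
# stream the archive into a throwaway container attached to the new data container's volumes
gunzip -c volume.tgz | docker run --rm -i --volumes-from <data container> ubuntu tar -x -C /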
I'll add another recent tool here from IBM which is actually made for migrating volumes from one container host to another. This is an ongoing project, so you may find a different version with additional features in the future.
Cargo was developed to migrate containers from one host to another host along with their data with minimal downtime. Cargo uses data federation capabilities of union filesystem to create a unified view of data (mainly the root file system) across the source and target hosts. This allows Cargo to start up a container almost immediately (within milliseconds) on the target host as the data from source root file system gets copied to target hosts either on-demand (using a copy-on-write (COW) partition) or lazily in the background (using rsync).
Important points are:
- a centralized server handles the migration process
The link to the project is given here:
https://github.com/nadgowdas/cargo
In case your machines are in different VPCs, or you want to copy from/to a local machine (like in my case), you can use dvsync, which I created. It's basically ngrok combined with rsync over SSH, packaged into two small (~25MB each) images. First, you start the dvsync-server on the machine you want to copy data from (you'll need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard):
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_VOLUME,target=/data,readonly \
quay.io/suda/dvsync-server
Then you can start the dvsync-client on the machine you want to copy the files to, passing the DVSYNC_TOKEN shown by the server:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
Once the copying is done, the client will exit. This works with the Docker CLI, Compose, Swarm and Kubernetes as well.
Here's a one-liner in case an SSH connection can be established between the machines:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c "cd /from ; tar -cf - . " | ssh <TARGET_HOST> 'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - " '
Credits go to Guido Diepen's post.
I just wrote the docker-volume-snapshot command for a similar use case. This command is based on tommasop's answer.
With the command:
Create snapshot
docker-volume-snapshot create <volume-name> snapshot.tar
Move snapshot.tar to another host
Restore snapshot
docker-volume-snapshot restore snapshot.tar <volume-name>
Adding an answer here as I don't have enough reputation to comment. While all the above answers have helped me, I imagine there may be others like me who are also looking to copy the contents of a backup.tar file into a named docker volume on a collaborator's machine. I don't see this discussed specifically above or in the docker volumes documentation.
Why would you want to copy the backup.tar file into a named docker volume?
This could be helpful in a scenario where a named docker volume has been specified inside an existing docker-compose.yml file to be used by some of the containers.
Copying contents of backup.tar into a named docker volume
On the host machine, follow the steps in the accepted answer or the docker volumes documentation to create a backup.tar file and push it to some repository.
Pull backup.tar onto the collaborator's machine from the repository.
On the collaborator's machine, create a temporary container and a named docker volume:
docker run -v named_docker_volume:/dbdata --name temp_db_container ubuntu /bin/bash
--name temp_db_container : create a container called temp_db_container
ubuntu /bin/bash : use an ubuntu image to build temp_db_container with a starting command of /bin/bash
-v named_docker_volume:/dbdata : mount the /dbdata folder of temp_db_container into a docker volume called named_docker_volume. We use this specifically named volume named_docker_volume to match the volume name specified in our docker-compose.yml file.
On the collaborator's machine, copy over the contents of backup.tar into the named docker volume:
docker run --rm --volumes-from temp_db_container -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"
--volumes-from temp_db_container : temp_db_container container's /dbdata folder was mapped to named_docker_volume volume in previous step. So any file that gets stored in /dbdata folder will immediately get copied over to named_docker_volume docker volume.
-v $(pwd):/backup : map the local machine's present working directory to the /backup folder located inside temp_db_container
ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1" : Untar the backup.tar file and store the untarred contents inside /dbdata folder.
On the collaborator's machine, remove the temporary container temp_db_container:
docker rm temp_db_container
Adapted from the accepted answer, but this gives more flexibility in that you can use it in a bash pipeline:
#!/bin/bash
if [ $# != 2 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup
exit 1
fi
if [ -t 1 ]; then
echo The output of the cmd is binary data "(tar)", \
and it should be redirected instead of printed to terminal
exit 1
fi
volume="$1"
path="$2"
exec docker run --rm --mount type=volume,src="$volume",dst=/mnt/volume/ alpine tar cf - -C /mnt/volume/"$path" .
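For example, saved as docker-volume-backup.sh (the name is just for illustration), it can be used in a pipeline like this:
# write the tar stream to a local file
./docker-volume-backup.sh my_volume some/subdir > backup.tar
# or stream it straight to another host over ssh
./docker-volume-backup.sh my_volume some/subdir | ssh user@otherhost 'cat > backup.tar'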
If you want to back up the volume periodically and incrementally, then you can use the following script:
#!/bin/bash
if [ $# != 3 ]; then
echo Usage "$0": volume /path/of/the/dir/in/volume/to/backup /path/to/put/backup
exit 1
fi
volume="$1"
volume_path="$2"
path="$3"
if [[ "$path" =~ ^.*/$ ]]; then
echo "The 3rd argument shouldn't end in '/', otherwise rsync would not behave as expected"
exit 1
fi
container_name="docker-backup-rsync-service-$RANDOM"
docker run --rm --name="$container_name" -d -p 8738:873 \
--mount type=volume,src="$volume",dst=/mnt/volume/ \
nobodyxu/rsyncd
echo -e '\nStarting syncing...'
rsync --info=progress2,stats,symsafe -aHAX --delete \
"rsync://localhost:8738/root/mnt/volume/$volume_path/" "$path"
exit_status=$?
echo -e '\nStopping the rsyncd docker...'
docker stop -t 1 "$container_name"
exit $exit_status
It utilizes rsync's server and client functionality to directly sync the dir between volume and your host dir.
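Assuming the script is saved as docker-volume-rsync-backup.sh (the name is hypothetical), a periodic incremental backup could be run like this:
# sync the "data" subdirectory of my_volume into a local backup directory
./docker-volume-rsync-backup.sh my_volume data /srv/backups/my_volume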
I was dissatisfied with the answer using tar. I decided to take matters into my own hands. As I am going to be syncing the data often, and it's going to be big, I wanted specifically to use rsync. Using tar to send all the data every time would be just a waste of time and transfer.
After days spent figuring out how to communicate between two remote docker containers, I finally got a solution using socat.
Run two docker containers - one on the source host, the other on the destination host - each with one volume mounted: the source volume and the destination volume.
Run rsync --daemon on one of the containers; it will stream/load data from the volume.
Run docker exec source_container socat - TCP:localhost and docker exec destination_container socat TCP-LISTEN:rsync -, and connect the stdin and stdout of both together. So one socat connects to rsync --daemon and redirects data from/to stdout/stdin, and the other socat listens on the rsync port (port 873) and redirects to/from stdin/stdout. Then connect them together, so basically we pipe data from one container's port to the other's.
Then, on the other volume's container, run an rsync client that connects to localhost:rsync, effectively connecting via the "socat pipe" to the rsync --daemon.
Basically, it works like this:
log "Running both destination and source containers"
src_did=$(
env DOCKER_HOST=$src_docker_host docker run --rm -d -i -v \
"$src_volume":/data:ro -w /data alpine_with_rsync_and_socat\
sleep infinity
)
dst_did=$(
env DOCKER_HOST=$dst_docker_host docker run --rm -d -i -v \
"$dst_volume":/data:rw -w /data alpine_with_rsync_and_socat \
sleep infinity
)
log "Running rsyncd on destination container"
env DOCKER_HOST=$dst_docker_host docker exec "$dst_did" sh -c "
cat <<EOF > /etc/rsyncd.conf &&
uid = root
gid = root
use chroot = no
max connections = 1
numeric ids = yes
reverse lookup = no
[data]
path = /data/
read only = no
EOF
rsync --daemon
"
log "Setup rsync socat forwarding between containers"
{
coproc { env DOCKER_HOST=$dst_docker_host docker exec -i "$dst_did" \
socat -T 10 - TCP:localhost:rsync,forever; }
env DOCKER_HOST=$src_docker_host docker exec -i "$src_did" \
socat -T 10 TCP-LISTEN:rsync,forever,reuseaddr - <&"${COPROC[0]}" >&"${COPROC[1]}"
} &
log "Running rsync on source that will connect to destination"
env DOCKER_HOST=$src_docker docker exec -e RSYNC_PASSWORD="$g_password" -w /data "$src_did" \
rsync -aivxsAHSX --progress /data/ rsync://root@localhost/data
Another really nice thing about this approach is that you can copy data between two remote hosts without ever storing the data locally. I also share the script, docker-rsync-volumes, that I've written around this idea. With that script, copying a volume between two remote hosts is as simple as docker-rsync-volumes --delete -f ssh://user@productionserver grafana_data -t ssh://user@backupserver grafana_data_backup.
This copies your volume from one server to another over ssh.
docker run --rm -v $VOLUME:/$VOLUME alpine tar -czv --to-stdout -C /$VOLUME . | ssh $REMOTEHOST "docker run --rm -i -v $VOLUME:/$VOLUME alpine tar xzf - -C /$VOLUME"
If you want to copy more than one volume that matches a filter.
REMOTEHOST=root@123.123.123.123
Volumes=($(docker volume ls --filter "name=mailcow*" --format="{{.Name}}"))
for VOLUME in "${Volumes[@]}"; do
docker run --rm -v $VOLUME:/$VOLUME alpine tar -czv --to-stdout -C /$VOLUME . | ssh $REMOTEHOST "docker run --rm -i -v $VOLUME:/$VOLUME alpine tar xzf - -C /$VOLUME"
done
