How to map volume paths using Docker's --volumes-from?

I'm new to Docker and am excited about using the --volumes-from feature but there's something I'm not understanding.
If I want to use --volumes-from with two data-only containers, each of which exports a volume named /srv, how do I prevent the volume paths from colliding? I can map volume paths when creating a bind mount using [host-dir]:[container-dir]; how do I do that with --volumes-from?
So what I want would look something like this:
docker run --name=DATA1 --volume=/srv busybox true
docker run --name=DATA2 --volume=/srv busybox true
docker run -t -i --rm --volumes-from DATA1:/srv1 --volumes-from DATA2:/srv2 ubuntu bash

It can be done, but it is not currently supported by the Docker command-line interface.
How-to
Find the volumes directories:
docker inspect DATA1 | grep "vfs/dir"
# output something like:
# "/srv": "/var/lib/docker/vfs/dir/<long vol id>"
So, you can automate this, and mount these directories at mount points of your choice:
# load directories in variables:
SRV1=$(docker inspect DATA1 | grep "vfs/dir" | awk '/"(.*)"/ { gsub(/"/,"",$2); print $2 }')
SRV2=$(docker inspect DATA2 | grep "vfs/dir" | awk '/"(.*)"/ { gsub(/"/,"",$2); print $2 }')
Now mount these volumes using their real directories instead of --volumes-from:
docker run -t -i -v $SRV1:/srv1 -v $SRV2:/srv2 ubuntu bash
IMO the functionality is identical, because this is the same thing --volumes-from does under the hood.

For completeness...
#create data containers
docker run --name=d1 -v /svr1 busybox sh -c 'touch /svr1/some_data'
docker run --name=d2 -v /svr2 busybox sh -c 'touch /svr2/some_data'
# all together...
docker run --rm --volumes-from=d1 --volumes-from=d2 busybox sh -c 'find -name some_data'
# prints:
# ./svr2/some_data
# ./svr1/some_data
# cleanup...
docker rm -f d1 d2
The "--volumes-from=container" just map over the filesystem, like mount --bind
If you want to change the path, Jiri's answer is (currently) the only way.
But if you are in a limited environment you might want to use Docker's built-in inspect template parsing instead:
# create data containers
docker run --name=DATA1 --volume=/srv busybox sh -c 'touch /srv/some_data-1'
docker run --name=DATA2 --volume=/srv busybox sh -c 'touch /srv/some_data-2'
# run with volumes and show the data
docker run \
-v $(docker inspect -f '{{ index .Volumes "/srv" }}' DATA1):/srv1 \
-v $(docker inspect -f '{{ index .Volumes "/srv" }}' DATA2):/srv2 \
--rm busybox sh -c 'find -name "some_data-*"'
# prints:
# ./srv2/some_data-2
# ./srv1/some_data-1
# ditch data containers...
docker rm -f DATA1 DATA2
This probably even works with the old bash version that comes with boot2docker.
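Note that both the vfs/dir grep and the {{ index .Volumes "/srv" }} template above rely on the volume layout of older Docker releases. On current versions, docker inspect exposes volumes under .Mounts instead; a hedged equivalent sketch:
# On newer Docker, read the host path of the volume mounted at /srv from .Mounts:
docker run \
-v $(docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/srv" }}{{ .Source }}{{ end }}{{ end }}' DATA1):/srv1 \
-v $(docker inspect -f '{{ range .Mounts }}{{ if eq .Destination "/srv" }}{{ .Source }}{{ end }}{{ end }}' DATA2):/srv2 \
--rm busybox sh -c 'find -name "some_data-*"'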

Related

Rename Docker Volume on Docker Desktop

Is it possible to rename a docker volume? I want to change the volume names of the existing container. I see that config.json and hostconfig.json have the volume details in them.
docker run -di -p 8083:443 -v app_main_db_test_1:/var/lib/pgsql/data -v app_main_conf_test_1:/var/www/ ubuntu
I want to change app_main_db_test_1 to app_main_db and app_main_conf_test_1 to app_main_conf
As far as I know, there is no way to rename a Docker volume so far. There is an open GitHub issue which indicates there is no solution yet.
But there are a few useful workarounds. Since you say you use Docker Desktop, you could check this comment:
docker volume create --name <new_volume>
docker run --rm -it -v <old_volume>:/from -v <new_volume>:/to alpine ash -c "cd /from ; cp -av . /to"
docker volume rm <old_volume>
This should do exactly what you are planning to do.
Figured it out, thanks to this GitHub post.
# Create new Volume for DB and copy files from old volume
docker volume create --name app_main_db
docker run --rm -it -v app_main_db_test_1:/from -v app_main_db:/to alpine ash -c "cd /from ; cp -av . /to"
# Create new Volume for conf and copy files from old volume
docker volume create --name app_main_conf
docker run --rm -it -v app_main_conf_test_1:/from -v app_main_conf:/to alpine ash -c "cd /from ; cp -av . /to"
# Start the container using new volumes
docker run -di -p 8083:443 -v app_main_db:/var/lib/pgsql/data -v app_main_conf:/var/www/ ubuntu
# Delete old volumes
docker volume rm app_main_db_test_1
docker volume rm app_main_conf_test_1
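Before deleting the old volumes, it may be worth verifying that the copy is complete. A hedged sanity check, reusing the volume names above:
# Compare file counts between old and new volume; the numbers should match
docker run --rm -v app_main_db_test_1:/from -v app_main_db:/to alpine \
  sh -c 'echo "old: $(find /from | wc -l)  new: $(find /to | wc -l)"'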

How to migrate volume data from docker-for-mac to colima

How do I move volumes from docker-for-mac into colima?
The following will copy all the volumes from docker-for-mac and move them to colima.
Note: there will be a lot of volumes you may not want to copy over, since they are temporary ones. You can skip them by simply adding a | grep "YOUR FILTER" after the docker volume ls -q in the for loop.
The following code makes 2 assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need. Now copy and paste this into your terminal; no need to touch anything.
(
# set -x # uncomment to debug
set -e
# ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
tmpconfig=$(mktemp);
# Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
(limactl show-ssh --format config colima | grep -v "^ *ControlPath\|^ *User"; echo "  ForwardAgent=yes") > $tmpconfig;
# Setup root account
ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
# Loop over each volume inside docker-for-mac
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls -q); do
echo $volume_name;
# Make the volume backup
DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox tail -f /dev/null;
DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
# Restore the backup inside colima
DOCKER_CONTEXT=colima docker volume create $volume_name;
ssh -F $tmpconfig root@lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
scp -r -F $tmpconfig /tmp/$volume_name.tar root@lima-colima:/tmp/$volume_name.tar;
ssh -F $tmpconfig root@lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
done
)
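After the loop finishes, a quick sanity check can compare the volume lists of both contexts; a minimal sketch, assuming both contexts are still reachable:
# The two lists should match (modulo any volumes you filtered out)
diff <(DOCKER_CONTEXT=desktop-linux docker volume ls -q | sort) \
     <(DOCKER_CONTEXT=colima docker volume ls -q | sort)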

Cannot share data between volumes on different containers on Jenkins

I am new at docker and I've been struggling with the following:
sh "docker network create grid${buildProperties}"
sh "docker run -d --net grid${buildProperties} --health-cmd=\"curl -sSL http://selenium-hub${buildProperties}:4444/wd/hub/status | jq -r '.status' | grep 0\" --health-interval=5s --health-timeout=1s --health-retries=10 --name selenium-hub${buildProperties} selenium/hub:3.141.59-radium"
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm --name chrome-node${buildProperties} selenium/node-chrome:3.141.59-20200525"
sh "docker build -t ui-tests-runner ."
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=http://selenium-hub:4444/wd/hub -v DataVolume5:/src --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker ps"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
I am trying to get data from the ui-tests-runner${buildProperties} container's /src into DataVolume5, but I am getting 0 files when I list the contents of datavolume5.
However, if I try to do the same thing with chrome-node${buildProperties}, I can see the contents of /seluser when I list datavolume5, which is expected.
sh "docker network create grid${buildProperties}"
sh "docker run -d --net grid${buildProperties} --health-cmd=\"curl -sSL http://selenium-hub${buildProperties}:4444/wd/hub/status | jq -r '.status' | grep 0\" --health-interval=5s --health-timeout=1s --health-retries=10 --name selenium-hub${buildProperties} selenium/hub:3.141.59-radium"
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=selenium-hub -v /dev/shm:/dev/shm -v DataVolume5:/seluser --name chrome-node${buildProperties} selenium/node-chrome:3.141.59-20200525"
sh "docker build -t ui-tests-runner ."
sh "docker run -d --link selenium-hub${buildProperties}:selenium-hub --net grid${buildProperties} -e HUB_HOST=http://selenium-hub:4444/wd/hub --name ui-tests-runner${buildProperties} ui-tests-runner"
sh "docker ps"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
I tried numerous things that I found online, and I checked permissions, which seem fine. The only difference I can think of is that the ui-tests-runner${buildProperties} container hosts a repository. I don't know what else to try; I have been struggling for a few days now.
This piece of code was taken from the pipeline bit in the Jenkinsfile
You have a race condition between these two commands:
sh "docker run -d ... -v DataVolume5:/src ... ui-tests-runner"
sh "docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5"
The first command, with the -d option, does not block: it runs the container in the background. The second command then runs while your ui-tests-runner container is still starting up, and shows the folder before your tests have run.
Named volumes are also populated on first use with the image contents at the mount location. So when you mount the volume at a different path where your image has content, you'll get files in the volume.
Once that initialization step is done and the volume is no longer empty, you'll only see files that are written to the volume by the process inside a container. You won't get changes from the image filesystem as images are redeployed since that path in the container is replaced by the contents of the persistent volume.
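This initialization behaviour is easy to demonstrate; a minimal sketch (the image and paths are arbitrary examples, not taken from the question):
docker volume create demo-vol
# first use: the empty named volume is seeded from the image's /etc
docker run --rm -v demo-vol:/etc alpine ls /etc
# later uses keep the volume's content, even under another image and path
docker run --rm -v demo-vol:/data busybox ls /data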
I presume you're creating DataVolume5 as a named volume, using docker volume create.
In that case you don't need to specify the absolute path, but docker volume inspect DataVolume5 will give you the path.
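For example, with the default local driver, the host path can be printed directly via a format template:
docker volume inspect -f '{{ .Mountpoint }}' DataVolume5
# e.g. /var/lib/docker/volumes/DataVolume5/_data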
Try using a specific host directory as the shared volume instead, e.g.:
docker run -d -v /absolute/host/path:/src ui-tests-runner
First check that DataVolume5 contains something after running the ui-tests-runner command.
In the command docker run --rm -v DataVolume5:/datavolume5 ubuntu ls -l datavolume5,
give the absolute path of DataVolume5,
e.g. docker run --rm -v /abs-path-to-directory/DataVolume5:/datavolume5 ubuntu ls -l datavolume5

execute a command within docker swarm service

Initialize swarm mode:
root@ip-172-31-44-207:/home/ubuntu# docker swarm init --advertise-addr 172.31.44.207
Swarm initialized: current node (4mj61oxcc8ulbwd7zedxnz6ce) is now a manager.
To add a worker to this swarm, run the following command:
Join the second node:
docker swarm join \
--token SWMTKN-1-4xvddif3wf8tpzcg23tem3zlncth8460srbm7qtyx5qk3ton55-6g05kuek1jhs170d8fub83vs5 \
172.31.44.207:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# start 2 services
docker service create continuumio/miniconda3
docker service create --name redis redis:3.0.6
root@ip-172-31-44-207:/home/ubuntu# docker service ls
ID            NAME        REPLICAS  IMAGE                   COMMAND
2yc1xjmita67  miniconda3  0/1       continuumio/miniconda3
c3ptcf2q9zv2  redis       1/1       redis:3.0.6
As shown above, redis has its replica while miniconda does not seem to be replicated.
I usually log in to the miniconda container to type these commands:
/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser
The problem is that the docker exec -it XXX bash command does not work in swarm mode.
You can execute commands by filtering on the container name, without needing to pass the entire swarm container hash, just the service name. Like this:
docker exec $(docker ps -q -f name=servicename) ls
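One caveat: the name filter matches substrings, so servicename would also match servicename2. Since swarm task containers are named servicename.<slot>.<taskid>, anchoring the pattern is safer; a hedged variant:
# match only containers whose name starts with "servicename."
docker exec $(docker ps -q -f 'name=^servicename\.' | head -n1) ls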
Here is a one-liner for accessing the corresponding instance of the service on localhost:
docker exec -ti stack_myservice.1.$(docker service ps -f 'name=stack_myservice.1' stack_myservice -q --no-trunc | head -n1) /bin/bash
It is tested on PowerShell, but bash should be the same. The one-liner accesses the first instance; replace '1' with the number of the instance you want, in both places, to get another one.
A more complex example covers the distributed case:
#! /bin/bash
set -e

exec_task=$1
exec_instance=$2

# index of substring $2 within $1, or -1 if not found
strindex() {
  x="${1%%$2*}"
  [[ "$x" = "$1" ]] && echo -1 || echo "${#x}"
}

# read the header and first data row of `docker service ps` output and
# extract the task name, id and node from the column offsets
parse_node() {
  read title
  id_start=0
  name_start=`strindex "$title" NAME`
  image_start=`strindex "$title" IMAGE`
  node_start=`strindex "$title" NODE`
  dstate_start=`strindex "$title" DESIRED`
  id_length=$name_start
  name_length=`expr $image_start - $name_start`
  node_length=`expr $dstate_start - $node_start`
  read line
  id=${line:$id_start:$id_length}
  name=${line:$name_start:$name_length}
  name=$(echo $name)   # trim whitespace
  node=${line:$node_start:$node_length}
  echo $name.$id
  echo $node
}

if true; then
  read fn
  docker_fullname=$fn
  read nn
  docker_node=$nn
fi < <( docker service ps -f name=$exec_task.$exec_instance --no-trunc -f desired-state=running $exec_task | parse_node )
echo "Executing in $docker_node $docker_fullname"
eval `docker-machine env $docker_node`
docker exec -ti $docker_fullname /bin/bash
This script could later be used as:
swarm_bash stack_task 1
It just executes bash on the required node.
EDIT 2017-10-06:
Nowadays you can create the overlay network with the --attachable flag to enable any container to join the network. This is a great feature as it allows a lot of flexibility.
E.g.
$ docker network create --attachable --driver overlay my-network
$ docker service create --network my-network --name web --publish 80:80 nginx
$ docker run --network=my-network -ti alpine sh
(in alpine container) $ wget -qO- web
<!DOCTYPE html>
<html>
<head>
....
You are right, you cannot run docker exec on a docker swarm mode service. But you can still find out which node is running the container and then run exec directly on the container. E.g.
docker service ps miniconda3 # find out, which node is running the container
eval `docker-machine env <node name here>`
docker ps # find out the container id of miniconda
docker exec -it <container id here> sh
In your case you first have to find out why the service cannot get the miniconda container up. Maybe running docker service ps miniconda3 shows some helpful error messages?
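As an aside, the 0/1 replica is most likely because the miniconda3 image's default command exits immediately, so swarm keeps restarting the task. A hedged sketch of how to diagnose it and keep a replica alive long enough to exec into:
docker service ps --no-trunc miniconda3   # shows the task history incl. error messages
docker service rm miniconda3
docker service create --name miniconda3 continuumio/miniconda3 sleep infinity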
Using the Docker API
Right now, Docker does not provide an API like docker service exec or docker stack exec for this. But there already exist two issues dealing with this functionality:
github.com - moby/moby - Docker service exec
github.com - docker/swarmkit - Support for executing into a task
(Regarding the first issue, it was not directly clear to me that it deals with exactly this kind of functionality. But Exec for Swarm was closed and marked as a duplicate of the Docker service exec issue.)
Using Docker daemon over HTTP
As mentioned by BMitch on run docker exec from swarm manager, you could also configure the Docker daemon to use HTTP and then connect to every node without the need for ssh. But you should protect this using TLS authentication, which is already integrated into Docker. Afterwards you would be able to execute docker exec like this:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
-H=$HOST:2376 exec $containerId $cmd
Using skopos-plugin-swarm-exec
There exists a GitHub project which claims to solve the problem and provide the desired functionality by binding the docker daemon:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
datagridsys/skopos-plugin-swarm-exec \
task-exec <taskID> <command> [<arguments>...]
As far as I can see, this works by creating another container on the same node as the container that docker exec should be executed in. On that node, the helper container mounts the docker daemon socket to be able to execute docker exec locally.
For more information have a look at: skopos-plugin-swarm-exec
Using docker swarm helpers
There is also another project called docker swarm helpers which seems to be more or less a wrapper around ssh and docker exec.
Reference:
https://github.com/docker/swarmkit/issues/1895#issuecomment-302147604
https://github.com/docker/swarmkit/issues/1895#issuecomment-358925313
You can jump in a Swarm node and list the docker containers running using:
docker container ls
That will give you the container name in a format similar to: containername.1.q5k89uctyx27zmntkcfooh68f
You can then use the regular exec option to run commands on it:
docker container exec -it containername.1.q5k89uctyx27zmntkcfooh68f bash
I created a small script for our docker swarm cluster.
This script takes three params: the first is the task you want to run (this can be /bin/bash or any other process); the second is the service you want to connect to; the third is optional and fills the -c option for bash or sh.
-n is optional to force it to connect to a specific node.
It retrieves the node that runs the service and runs the command there.
#! /bin/bash
set -e
task=${1}
service=$2
bash=$3
serviceID=$(sudo docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(sudo docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
sudo docker -H $node exec -it $service".1."$serviceID $bash -c "$task"
Note: this requires the docker nodes to accept TCP connections by exposing the Docker daemon on port 2375 on the worker nodes.
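For reference, that is typically configured through daemon.json on each worker; a minimal sketch. Note that plain TCP on 2375 is completely unauthenticated, so outside a lab you should use TLS on 2376 instead, and on systemd-based distros the hosts key may conflict with the -H flag in the unit file:
# /etc/docker/daemon.json on each worker node
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}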
For those who have multiple replicas and just want to run a command within any of them, here is another shortcut:
docker exec -it $(docker ps -q -f name=SERVICE_NAME | head -1) bash
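And if you want to run the command in every matching replica on the node rather than just the first one, a simple loop works; a hedged sketch:
for c in $(docker ps -q -f name=SERVICE_NAME); do
  docker exec "$c" hostname
done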
I wrote a script to exec a command in docker swarm by service name, so it can be used in cron, for example. It also supports bash pipelines and passes all params to the docker exec command. But it works only on the same node where the service started. I hope it helps someone.
#!/bin/bash
# swarm-exec.sh
set -e
for ((i=1;i<=$#;i++)); do
val=${!i}
if [ ${val:0:1} != "-" ]; then
service_id=$(docker ps -q -f "name=$val");
if [[ $service_id == "" ]]; then
echo "Container $val not found!";
exit 1;
fi
docker exec ${@:1:$i-1} $service_id ${@:$i+1:$#};
exit 0;
fi
done
echo "Usage: $0 [OPTIONS] SERVICE_NAME COMMAND [ARG...]";
exit 1;
Example of using:
./swarm-exec.sh app_postgres pg_dump -Z 9 -F p -U postgres app > /backups/app.sql.gz
echo ls | ./swarm-exec.sh -i app /bin/bash
./swarm-exec.sh -it some_app /bin/bash
The simplest command I found to docker exec into a swarm node (with a swarm manager at $SWARM_MANAGER_HOST) running the service $SERVICE_NAME (for example mystack_myservice) is the following:
SERVICE_JSON=$(ssh $SWARM_MANAGER_HOST "docker service ps $SERVICE_NAME --no-trunc --format '{{ json . }}' -f desired-state=running")
ssh -t $(echo $SERVICE_JSON | jq -r '.Node') "docker exec -it $(echo $SERVICE_JSON | jq -r '.Name').$(echo $SERVICE_JSON | jq -r '.ID') bash"
This assumes that you have ssh access to $SWARM_MANAGER_HOST, as well as to the swarm node currently running the service task.
This also assumes that you have jq installed (apt install jq); but if you can't or don't want to install it and you have python installed, you can create the following alias (based on this answer):
alias jq="python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[2].partition(\".\")[-1]])'"
See addendum 2...
Example of a oneliner for entering the database my_db on node master:
DB_NODE_ID=master && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
In case you want to configure, say, max_connections:
DB_NODE_ID=master && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
This approach allows you to enter all database nodes (e.g. slaves) just by setting the DB_NODE_ID variable accordingly.
Example for slave s2:
DB_NODE_ID=s2 && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
or
DB_NODE_ID=s2 && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
Put this into your KiTTY or PuTTY configuration for master / s2 under Data/Command and you are set.
As an addendum:
The old, non-swarm-mode version reads simply
docker exec -it master mysql my_db
or
DB_ID=master && $(docker exec -it $DB_ID mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $DB_ID mysql tmp
Addendum 2:
As it turned out by example, the command docker ps -q -f name=$DB_NODE_ID may return wrong values under certain conditions.
The following approach works correctly:
docker ps -a | grep "_$DB_NODE_ID." | awk '{print $1}'
You may substitute the examples above accordingly.
Addendum 3:
Well, these commands look awful and they are certainly painful to type, so you may want to ease your work. On Linux, everybody knows how to do this. On Windows, you may want to use AHK.
This is the AHK hotstring I use:
:*:ii::DB_NODE_ID=$(docker ps -a | grep "_." | awk '{{}print $1{}}') && docker exec -it $id ash{Left 49}
So when I type ii -- which is as simple as it can get -- I get the desired command with the cursor in place and just have to fill in the container name.
I edited the script Brian van Rooijen added above. Because my reputation is too low, I cannot add it as a comment.
#! /bin/bash
set -e
service=${1}
shift
task="$*"
echo $task
serviceID=$(docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
serviceName=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Name}}"| head -n1 )
docker -H $node exec -it $serviceName"."$serviceID $task
I had the issue that the container didn't exist with the hard-coded .1. in the execution.
Take a look at my solution: https://github.com/binbrayer/swarmServiceExec.
This approach is based on Docker Machines. I also created a prototype of the script to call containers asynchronously and, as a result, simultaneously.

what is 'z' flag in docker container's volumes-from option?

While going through the docker docs, I came across volumes-from (https://docs.docker.com/engine/reference/commandline/run/) option for docker run command.
I didn't understand the differences between the ro, rw, and z options, provided as:
$ docker run --volumes-from ba8c0c54f0f2:ro -i -t ubuntu pwd
In the above command the ro option can be replaced with z. I would be thankful if anyone could explain the differences between these options.
Two suffixes :z or :Z can be added to the volume mount. These suffixes tell Docker to relabel file objects on the shared volumes. The 'z' option tells Docker that the volume content will be shared between containers. Docker will label the content with a shared content label. Shared volumes labels allow all containers to read/write content. The 'Z' option tells Docker to label the content with a private unshared label.
https://github.com/rhatdan/docker/blob/e6473011583967df4aa5a62f173fb421cae2bb1e/docs/sources/reference/commandline/cli.md
If you use selinux you can add the z or Z options to modify the selinux label of the host file or directory being mounted into the container. This affects the file or directory on the host machine itself and can have consequences outside of the scope of Docker.
The z option indicates that the bind mount content is shared among multiple containers.
The Z option indicates that the bind mount content is private and unshared.
Use extreme caution with these options. Bind-mounting a system directory such as /home or /usr with the Z option renders your host machine inoperable and you may need to relabel the host machine files by hand.
$ docker run -d \
-it \
--name devtest \
-v "$(pwd)"/target:/app:z \
nginx:latest
https://docs.docker.com/storage/bind-mounts/#configure-bind-propagation
From tests here on my machine, :z lets you share content from one container with another. Suppose this image:
FROM alpine
RUN mkdir -p /var/www/html \
&& echo "foo" > /var/www/html/index.html
Let's build it and tag as test-z:
$ docker build . -t test-z
Now create and run a test-z container with the name testing-z, mapping the volume test-vol to /var/www/html and adding the z modifier:
$ docker run \
--name testing-z \
--volume test-vol:/var/www/html:z \
-d test-z tail -f /dev/null
The contents of /var/www/html from testing-z can be accessed from other containers by using the --volumes-from flag, like below:
$ docker run --rm --volumes-from testing-z -it nginx sh
# cat /var/www/html/index.html
foo
Note: I'm running Docker version 19.03.5-ce, build 633a0ea838
docker run --volumes-from a64f10cd5f0e:z -i -t rhel6 bin/bash
I have tested it: I mounted a volume in one container and, from that container, into another newly created container. It goes with the rw option.
I've made the following observation:
# docker run --rm -ti -v /host/path/to/flyway/scripts:/flyway/sql:z --entrypoint '' flyway/flyway ls -l /flyway/sql
total 0
# docker run --rm -ti -v /host/path/to/flyway/scripts:/flyway/sql --entrypoint '' flyway/flyway ls -l /flyway/sql
ls: cannot open directory '/flyway/sql': Permission denied
So, in this case, the container works only if :z is set. On this host, SELinux is installed; if that is not the case, :z doesn't have a recognizable effect as far as I can tell.
As an alternative to :z, one could use chcon on the host folder to change its SELinux context:
# chcon -t svirt_sandbox_file_t /host/path/to/flyway/scripts
# docker run --rm -ti -v /host/path/to/flyway/scripts:/flyway/sql:z --entrypoint '' flyway/flyway ls -l /flyway/sql
total 0
# docker run --rm -ti -v /host/path/to/flyway/scripts:/flyway/sql --entrypoint '' flyway/flyway ls -l /flyway/sql
total 0
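To confirm what chcon (or :z) actually did, you can inspect the SELinux label of the host directory directly; a hedged check:
# show the SELinux context of the host directory
ls -dZ /host/path/to/flyway/scripts
# after relabeling, expect a type like svirt_sandbox_file_t (or container_file_t on newer distros)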
