Docker Compose - bind source path does not exist - docker

In my docker compose service I have the following:
volumes:
- ~/DockerStuff/Projects:/root/Documents/Projects
- ~/DockerStuff/Downloads:/root/Downloads
But when I run docker compose up I'm being told:
Error response from daemon: invalid mount config for type "bind": bind source path does not exist
I keep seeing posts saying that you can create bind volumes and, if the host directory doesn't exist, Docker will create it on the fly. But these seem specific to Dockerfile setups rather than Compose files.
Is such functionality possible in docker compose too? :)

The ~ symbol is not expanded by docker compose.
You have to rely on this approach:
Launch script:
HOME=${HOME} docker-compose ... command options ...
docker-compose YAML:
volumes:
  - ${HOME}/DockerStuff/Projects:/root/Documents/Projects
  - ${HOME}/DockerStuff/Downloads:/root/Downloads
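To check what the daemon will actually receive, you can render the file with variables already substituted; this is just a quick sanity check, not required:
# prints the compose file with ${HOME} and other variables interpolated,
# so you can verify the resolved host paths before running `up`
HOME=${HOME} docker-compose config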

To create the host folder for a docker-compose volume binding if it doesn't exist, just add bind.create_host_path to your volumes section:
volumes:
  - type: bind
    source: localFolder/subFolderIfNeeded
    target: /data
    bind:
      create_host_path: true
NOTE: Tested on Docker compose 2.15.1
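Combining this with the ${HOME} substitution from the answer above, a sketch for the paths in the original question (the service name is a placeholder):
services:
  myservice:
    volumes:
      - type: bind
        source: ${HOME}/DockerStuff/Projects
        target: /root/Documents/Projects
        bind:
          create_host_path: true
      - type: bind
        source: ${HOME}/DockerStuff/Downloads
        target: /root/Downloads
        bind:
          create_host_path: true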

Related

Undefined volume with Docker Compose

I wanted to translate this docker CLI command (from smallstep/step-ca) into a docker-compose.yml file to run with docker compose (version 2):
docker run -d -v step:/home/step \
-p 9000:9000 \
-e "DOCKER_STEPCA_INIT_NAME=Smallstep" \
-e "DOCKER_STEPCA_INIT_DNS_NAMES=localhost,$(hostname -f)" \
smallstep/step-ca
This command successfully starts the container.
Here is the compose file I "composed":
version: "3.9"
services:
ca:
image: smallstep/step-ca
volumes:
- "step:/home/step"
environment:
- DOCKER_STEPCA_INIT_NAME=Smallstep
- DOCKER_STEPCA_INIT_DNS_NAMES=localhost,ubuntu
ports:
- "9000:9000"
When I run docker compose up (again, using v2 here), I get this error:
service "ca" refers to undefined volume step: invalid compose project
Is this the right way to go about this? I'm thinking I missed an extra step with volume creation in docker compose projects, but I am not sure what that would be, or if this is even a valid use case.
The Compose file also has a top-level volumes: block and you need to declare volumes there.
version: '3.9'
services:
  ca:
    volumes:
      - "step:/home/step"
    et: cetera
volumes:   # add this section
  step:    # does not need anything underneath this
There are additional options possible, but you do not usually need to specify these unless you need to reuse a preexisting Docker named volume or you need non-standard Linux mount options (the linked documentation gives an example of an NFS-mount volume, for example).
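For example, reusing a preexisting named volume is just a matter of marking it external; a minimal sketch (the volume must already exist, e.g. created with docker volume create step):
volumes:
  step:
    external: true   # use the existing volume named "step"; Compose will not create or prefix it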
Citing the Compose specification:
To avoid ambiguities with named volumes, relative paths SHOULD always begin with . or ..
So it should be enough to make your volume's host path relative:
services:
  ca:
    volumes:
      - ./step:/home/step
If you don't intend to share the step volume with other containers, you don't need to define it in the top-level volumes key:
If the mount is a host path and only used by a single service, it MAY be declared as part of the service definition instead of the top-level volumes key.
It seems that docker-compose doesn't know about the volume you created via the command sudo docker volume create my_xx_volume.
So just mkdir a folder manually and chmod 777 <my_folder>; then your MySQL container will use it without problems.
(In a production environment, don't use chmod; use chown to change ownership instead.)
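A minimal sketch of that host-side preparation, assuming a hypothetical data path of /srv/mysql-data and that the container process runs as UID 999 (the usual default in the official mysql image; check your own image if unsure):
# create the host folder the bind mount points at
sudo mkdir -p /srv/mysql-data
# development only:
# sudo chmod 777 /srv/mysql-data
# production: hand ownership to the UID/GID the container process runs as
sudo chown 999:999 /srv/mysql-data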

How to write a script which can run the creation of Docker volumes in one command

In my Docker environment I always have to run the command to create volumes manually, like
docker volume create --name= ...
I would like a way to speed this up with a shell script that creates them all at once.
Seeing a possible solution would be great, as I have many volumes to create manually.
A possible solution would be to use docker-compose and have a docker-compose.yml file composed only of volumes, with no services:
version: "3.8"
volumes:
logvolume01: {}
logvolume02: {}
logvolume03: {}
When run, this creates the volumes accordingly:
$ docker-compose up
Creating volume "docker_logvolume01" with default driver
Creating volume "docker_logvolume02" with default driver
Creating volume "docker_logvolume03" with default driver
Attaching to
$ docker volume ls
DRIVER VOLUME NAME
local docker_logvolume01
local docker_logvolume02
local docker_logvolume03
If you need a more complex set of options while creating your volumes, you can find them in the documentation.
Just a little quirk to note here: by default, when you are using docker-compose, the volumes are prefixed with the name of the folder you are in. Docker does this so there are no collisions between different Compose projects.
This is why, in the example above, the volume names start with docker_: the folder I am in is called docker.
To fix this, just give a name to your volumes:
version: "3.8"
volumes:
logvolume01:
name: logvolume01
logvolume02:
name: logvolume02
logvolume03:
name: logvolume03
Running this modified version gives:
$ docker-compose up
Creating volume "logvolume01" with default driver
Creating volume "logvolume02" with default driver
Creating volume "logvolume03" with default driver
Attaching to
$ docker volume ls
DRIVER VOLUME NAME
local logvolume01
local logvolume02
local logvolume03
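If you prefer to stay with plain docker volume create, as in the question, a short shell loop does the same thing (the volume names here are just examples):
#!/bin/sh
# docker volume create is idempotent, so re-running the script is harmless
for name in logvolume01 logvolume02 logvolume03; do
  docker volume create "$name"
done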

define inline file in docker-compose

I'm currently using a bind mount to mount a file from the host to a container:
volumes:
  - type: bind
    source: ./localstack_setup.sh
    target: /docker-entrypoint-initaws.d/init.sh
Is there a way to define the ./localstack_setup.sh inline in the docker-compose.yml? I want to use a remote Docker host, and docker-compose up fails because the remote host doesn't have the file.
I don't know of a way to define a script inline in docker-compose itself. I recommend you parameterize your shell script with environment variables so that it stays generic with respect to the native Docker image.
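A sketch of that idea: copy the script into your own image at build time (so the remote Docker host never needs the file) and pass anything host-specific in as environment variables; the variable names here are made up:
services:
  localstack:
    build: .                          # the Dockerfile COPYs localstack_setup.sh into the image
    environment:
      - SETUP_BUCKET_NAME=my-bucket   # hypothetical values the script reads instead of hard-coding them
      - SETUP_REGION=us-east-1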

Airflow how to mount airflow.cfg in docker container

I'm running airflow in a docker container and want to mount my airflow.cfg as a volume so I can quickly edit the configuration without rebuilding my image or editing directly in the running container. I'm able to mount my airflow.cfg as a volume and my airflow webserver successfully reads the configuration from it on start up. However, when I edit on the host changes aren't reflected inside the docker container.
The output for findmnt -M airflow.cfg inside the docker container returns:
TARGET SOURCE FSTYPE OPTIONS
/usr/local/airflow/airflow.cfg /dev/sda1[/host/path/airflow/airflow.cfg~//deleted] ext4 rw,relatim
From that output it seems like airflow.cfg continues to point to the original unedited version of airflow.cfg. Is there any workaround to allow updating the config file from the host machine?
I'm using the LocalExecutor compose file from the puckel github repo as a base. I modify it to mount airflow.cfg in the compose file instead of copying it in the Dockerfile.
I had the same issue and I solved it by adding the following line to docker-compose.yml, under the webserver service
volumes:
  - ./config/airflow.cfg:/opt/airflow/airflow.cfg
I have my config file in a folder called config where the docker-compose.yml file is.
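The "deleted" suffix in the question's findmnt output usually means the editor on the host saved the file by writing a new file and renaming it over the old one, so the container's bind mount keeps pointing at the old inode. A common workaround is to bind-mount the containing directory instead of the single file; a sketch (paths assume the puckel image's AIRFLOW_HOME, adjust to yours):
services:
  webserver:
    environment:
      - AIRFLOW_CONFIG=/usr/local/airflow/config/airflow.cfg   # tell Airflow where the mounted copy lives
    volumes:
      - ./config:/usr/local/airflow/config                     # directory mounts survive inode replacement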
To quickly change the Airflow config inside a Docker container, there are several options. Instead of changing airflow.cfg, you can set the corresponding environment variables directly, which is very easy to do in docker-compose.yml.
Then you can just restart docker-compose quickly.
Here are some common configuration variables:
dag_folder: AIRFLOW__CORE__DAGS_FOLDER
sql_alchemy_conn: AIRFLOW__CORE__SQL_ALCHEMY_CONN
executor: AIRFLOW__CORE__EXECUTOR
All configuration variables can be found in the official docs.
Below is a snippet from my Airflow docker-compose file:
webserver:
  image: apache/airflow:1.10.12
  depends_on:
    - postgres
  environment:
    - AIRFLOW_HOME=/opt/airflow
    - AIRFLOW__CORE__DAGS_FOLDER=/opt/airflow/dags
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql://airflow:airflow@postgres/airflow
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__FERNET_KEY=#####youkey################
  volumes:
    - ./dags:/opt/airflow/dags
  command: webserver
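To apply a change, recreate the container so the new environment is picked up, for example:
# Compose recreates containers whose configuration changed;
# add --force-recreate to be explicit
docker-compose up -d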

How to add a volume from a different server to my docker-compose file

I'm trying to set up a docker-compose file and want to access a docker volume which is on another server. How do I specify that external volume in my docker-compose file?
I've tried using the driver_opts in my docker-compose file but without luck. I always get this error:
ERROR: for api Cannot start service api: error while mounting volume '/var/lib/docker/volumes/data-api-media': error while mounting volume with options: type='' device='remote-path' o='addr=...,rw': no such device
And with external I get:
Volume data-api-media declared as external, but could not be found. Please create the volume manually using docker volume create --name=data-api-media and try again.
version: '3.4'
services:
  api:
    build: .
    entrypoint:
      - ./docker-entrypoint.sh
    volumes:
      - data-api-media:/usr/src/app/media/
    ports:
      - "1095:1095"
volumes:
  data-api-media:
    driver_opts:
      o: "addr=...,rw"
      device: remote-path
I expect to mount the external docker volume from a different server to my docker-compose service and access the files in it.
You can create a docker volume with the vieux/sshfs plugin on your host and map it to another host.
Use a volume driver
When you create a volume using docker volume create, or when you start a container which uses a not-yet-created volume, you can specify a volume driver. The following examples use the vieux/sshfs volume driver, first when creating a standalone volume, and then when starting a container which creates a new volume.
Initial set-up
This example assumes that you have two nodes, the first of which is a Docker host and can connect to the second using SSH.
On the Docker host, install the vieux/sshfs plugin:
$ docker plugin install --grant-all-permissions vieux/sshfs
Create a volume using a volume driver
This example specifies an SSH password, but if the two hosts have shared keys configured, you can omit the password. Each volume driver may have zero or more configurable options, each of which is specified using an -o flag.
$ docker volume create --driver vieux/sshfs \
  -o sshcmd=test@node2:/home/test \
  -o password=testpassword \
  sshvolume
docker-compose setup:
volumes:
  - type: volume
    driver: vieux/sshfs
    source: sshvolume
    target: /target
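If you want Compose to create the sshfs-backed volume for you instead of referencing one made with docker volume create, the driver and its options can also be declared on a top-level named volume; a sketch with placeholder host, path and password:
volumes:
  sshvolume:
    driver: vieux/sshfs
    driver_opts:
      sshcmd: "test@node2:/home/test"   # placeholder remote user, host and path
      password: "testpassword"          # omit when SSH keys are configured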
