docker-compose volumes not mounting directories, only files inside directory - docker

Might sound weird, but I am trying to get docker-compose to mount directories to a volume I created
volumes:
- backend-data:/app/migrations/autogen-migrations
- backend-data:/app/seeds/autogen-seeds
- backend-data:/app/server/public
- backend-data:/app/server/src/services/location
The only problem is that instead of simply mapping the folders to the volume, it's mounting the content inside the volume. Is there any way to tell docker-compose to copy/map the folder itself?
Edit:
Already tried doing
backend-data/autogen-migrations:/app/migrations/autogen-migrations
And I get the following error:
Named volume "backend-data/autogen-migrations:/app/migrations/autogen-migrations:rw" is used in service "backend" but no declaration was found in the volumes section.
Btw this is how my volumes are declared
volumes:
  backend-data:
    driver: local

When you need to mount a local directory as a Docker volume, you have to use this syntax in your service:
volumes:
- ./your/local/path:/app/migrations/autogen-migrations
In this way, Docker creates the local path "./your/local/path" (relative to the folder where the docker-compose.yml file is) to store the data of the mounted volume. In this case you don't need to specify the volumes section in the docker-compose.yml, because you manage the volumes yourself.
If you need to mount more than one folder, remember to mount more than one local folder as well:
volumes:
- ./your/local/migrations/or/whatever/you/want:/app/migrations/autogen-migrations
- ./your/local/seeds:/app/seeds/autogen-seeds
- ./your/local/server/public:/app/server/public
- ./your/local/server/src:/app/server/src/services/location
You can also aggregate the mounts under the same folder:
volumes:
- ./your/local/migrations:/app/migrations/autogen-migrations
- ./your/local/seeds:/app/seeds/autogen-seeds
- ./your/local/server:/app/server
In this case, './your/local/server' will contain all of the '/app/server/' content.
If you use the syntax:
volumes:
  backend-data:
    driver: local
you tell Docker: "Docker, please mount the folder that corresponds to backend-data: (in your case backend-data:/app/migrations/autogen-migrations) wherever you want, and store my data!" In this case, Docker manages the "local" folder by itself instead of using the folder where the docker-compose.yml file is.
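If you'd rather keep named volumes instead of switching to bind mounts, a minimal sketch (the volume names below are made up) is to declare one named volume per directory, so each mount point gets its own volume rather than all four paths sharing backend-data:
services:
  backend:
    volumes:
      - migrations-data:/app/migrations/autogen-migrations
      - seeds-data:/app/seeds/autogen-seeds
      - public-data:/app/server/public
      - location-data:/app/server/src/services/location
volumes:
  migrations-data:
  seeds-data:
  public-data:
  location-data:
Each named volume then stores the content of exactly one container directory, which is usually what you want when the directories are unrelated.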

Related

Exclude sub-folder when mounting host to volume docker [duplicate]

Suppose I have a Docker container and a folder on my host, /hostFolder. Now if I want to add this folder to the Docker container as a volume, I can do this either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (whether as read-write or read-only does not matter; both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without the consequence of having it changed on the host as well.
Is this possible? If so, how?
Using docker-compose, I'm able to use node_modules locally but ignore it in the docker container, using the following syntax in the docker-compose.yml:
volumes:
- './angularApp:/opt/app'
- /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mount volume at /opt/app/node_modules/, which is now an empty directory, even though ./angularApp/node_modules on my local machine is not empty.
If you want to have subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent docker volume in place of your local node_modules directory. This is similar to the answer by #kernix, but this will allow node_modules to persist between docker-compose up runs, which is likely the desired behavior.
For those trying to get a nice workflow going where node_modules isn't overridden by the local copy, this might help.
Change your docker-compose to mount an anonymous persistent volume to node_modules to prevent your local copy from overriding it. This has been outlined in this thread a few times.
services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without this, if you added a new package and rebuilt your image, it would still be using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes    Recreate anonymous volumes instead of retrieving
                            data from the previous containers.
To exclude a file, use the following
volumes:
- /hostFolder:/folder
- /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
  --mount type=bind,src=/hostFolder,dst=/containerFolder \
  --mount type=volume,dst=/containerFolder/subFolder \
  ...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
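As a rough compose-level illustration of that difference (the service names, image, and paths below are made up): a service built from a Dockerfile gets its files copied in once at build time, while a bind-mounted service sees host changes immediately.
services:
  baked:
    build: .                            # files enter the image at build time via COPY/ADD
  live:
    image: myapp                        # hypothetical image
    volumes:
      - ./hostFolder:/containerFolder   # bind mount: host and container see changes immediately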
I don't believe you can achieve what you want using volumes; you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you could COPY all the files then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to image with COPY or ADD must be inside the build context i.e. in or below the directory you run docker build from.
For the people who also had the issue that the node_modules folder would still be overwritten from your local system (and the other way around):
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/
This is the solution, with the trailing / after node_modules being the fix.
Looks like the old solution doesn't work anymore (at least for me). Creating an empty folder and mapping the target folder to it helped, though.
volumes:
- ./angularApp:/opt/app
- .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create an "exclude" named volume in Docker's volume manager. The volume name "exclude" is arbitrary, so you can use a custom name for the volume instead of exclude.
services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the named volume below prevents the host system's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/
volumes:
  exclude:
You can see more info about volumes in the official docs: Use a volume with docker compose.
To exclude a mounted file contained in the volume from your machine, you will have to override it by allocating a volume to that same file.
In your config file:
services:
  server:
    build: .
    volumes:
      - .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file

Mounting a single file from an NFS docker volume into a container

Example (many options omitted for brevity):
version: "3"
volumes:
traefik:
driver: local
driver_opts:
type: nfs
o: "addr=192.168.1.100,soft,rw,nfsvers=4,async"
device: ":/volume/docker/traefik"
services:
traefik:
volumes:
- traefik/traefik.toml:/traefik.toml
This errors out as there is no volume with the name traefik/traefik.toml meaning that the volume name must be the full path to the file (i.e. you can't append a path to the volume name)?
Trying to set device: ":/volume/docker/traefik/traefik.toml" just returns a not a directory error.
Is there a way to take a single file and mount it into a container?
You cannot mount a file or sub-directory within a named volume; the source is either the named volume or a host path. NFS itself, along with most filesystems you'd mount in Linux, requires you to mount an entire filesystem, not a single file, and when you get down to the inode level, this is often a really good thing.
The options remaining that I can think of are to mount the entire directory somewhere else inside your container and symlink to the file you want, or to NFS-mount the directory on the host and do a host mount (bind mount) of the specific file.
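A minimal sketch of that second option, assuming the NFS export is already mounted on the host at a hypothetical path /mnt/nfs/traefik, bind-mounting just the one file:
services:
  traefik:
    volumes:
      - /mnt/nfs/traefik/traefik.toml:/traefik.toml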
However, considering the example you presented, using a docker config would be my ideal solution: it removes the NFS mount entirely and gives you a read-only copy of the file that's automatically distributed to whichever node is running the container.
More details on configs: https://docs.docker.com/engine/swarm/configs/
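A hedged sketch of the config approach, assuming a version 3.3+ Compose file deployed to a swarm (the config name and the local file path are illustrative):
version: "3.3"
configs:
  traefik_toml:
    file: ./traefik.toml        # read from the directory you deploy from
services:
  traefik:
    image: traefik
    configs:
      - source: traefik_toml
        target: /traefik.toml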
I believe I found the issue!
Wrong:
volumes:
- traefik/traefik.toml:/traefik.toml
Correct:
volumes:
- /traefik/traefik.toml:/traefik.toml
Start the volume with "/"

use volume defined in Dockerfile from docker-compose

I have, for example, this service and volume defined in my docker-compose file:
postgres:
  image: postgres:9.4
  volumes:
    - db_data:/var/lib/postgresql/data
volumes:
  blue_prod_db:
    driver: rancher-nfs
Then, if you define a volume inside a Dockerfile like this:
RUN mkdir /stuff
COPY ./stuff/* /stuff/
VOLUME /stuff
How can you later access it through the docker-compose configuration and add it to a container?
When configured in the Dockerfile, a volume will result in any container started from that image, including temporary containers created later in the build process by RUN commands, having a volume defined at the specified location, e.g. /stuff. If you do not define a source for that volume at run time, you will get an anonymous volume created by Docker for you at that location. However, you can always define a volume with a source at run time (even without the volume being defined in the Dockerfile) by specifying the location in your compose file:
version: "3"
services:
app:
image: your_image
volumes:
- data:/stuff
volumes:
data:
Note that there are two volumes sections, one for a specific service that specifies where the volume is mounted inside the container, and another at the top level where you can specify the source of the volume. Without specifying a source, you'll get a local volume driver with a directory under /var/lib/docker bind mounted into the container.
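If you do want to control where that data lives instead of the default directory under /var/lib/docker, one option is to give the named volume an explicit bind source via driver_opts; this is only a sketch, and the host path below is made up and must already exist:
volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/app/stuff    # hypothetical host path; must exist before docker-compose up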
I do not recommend specifying volumes inside the Dockerfile in general: it breaks the ability to extend the image in later steps for child images, and it clutters the filesystem with anonymous volumes that are not easy to track back to their origin. It's best to define them at runtime with something like a compose file.

How do named volumes work in docker?

I'm struggling to understand how exactly the named volume works in the following example from the docker docs:
version: "3"
services:
db:
image: db
volumes:
#1
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
#2
- data-volume:/var/lib/backup/data
volumes:
data-volume:
My guess is that the first occurrence of the named volume (#1) defines what is contained inside the volume, while subsequent occurrences (#2) simply share the volume's content with whatever containers reference it.
Is this guess correct?
Listing data-volume: under the top-level volumes: key creates a named volume on the host if it doesn't exist yet. This behaves the following way, according to this source:
If you create a named volume by running a new container from an image with docker run -v my-precious-data:/data imageName, the data within the image under /data will be copied into the named volume.
If you create another container that binds to an existing named volume, no files from the new image will be copied or overwritten; it will use the existing data inside the named volume.
There is no docker command to back up or export a named volume. However, you can find the actual location of the files with docker volume inspect [volume-name].
If the volume is empty and both containers have data in the target directory, the first container to run will populate the volume with its data, and the other container will see that data (and not its own). I don't know which container will run first (although I expect they start from top to bottom); however, you can force an order with depends_on, as in the sketch below.
Update: The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
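A hedged sketch of the depends_on ordering mentioned above (keeping the swarm-mode caveat from the update in mind): if you want the db image's data to seed the volume, start db first.
version: "3"
services:
  db:
    image: db
    volumes:
      - data-volume:/var/lib/db
  backup:
    image: backup-service
    depends_on:
      - db                      # start db first so its data populates the shared volume
    volumes:
      - data-volume:/var/lib/backup/data
volumes:
  data-volume: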
The way that I understand your guess, you are not completely correct.
Declaring and referencing a named volume in a docker-compose file will create an empty volume, which may then be accessed and shared by the services that reference it in their volumes section.
If you want to share a named volume, you have to declare this volume in the top-level volume section of your docker-compose file. Example (as in the docker docs already linked by yourself):
version: "3"
services:
db:
image: db
volumes:
#1 uses the named and shared volume 'data-volume' created with #3
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
#2 uses the named and shared volume 'data-volume' created with #3
- data-volume:/var/lib/backup/data
volumes:
#3 creates the named volume 'data-volume'
data-volume:
The volume will be empty on start (and so, therefore, will the folders in the containers where that volume is mounted). Its content will be the result of the services' actions at runtime.
Hope that made it a bit clearer.

Relative path not working with named volumes in the docker-compose.yml

I need to make a named volume use a relative path to the folder where the docker-compose command is executed.
Here is the volume definition in the docker-compose.yml
volumes:
  esdata1:
    driver: local
    driver_opts:
      type: none
      device: ./esdata1
      o: bind
It seems that docker-compose does not create the folder if it does not exist, but even when the folder is created before launching docker, I always get this error:
ERROR: for esdata Cannot create container for service esdata: error while mounting volume with options: type='none' device='./esdata1' o='bind': no such file or directory
NOTE: This may be silly, but esdata is the service that uses the named volume:
esdata:
  ...
  volumes:
    - esdata1:/usr/share/elasticsearch/data
  ...
What am I missing here?
Maybe the relative path ./ does not point to the folder where docker-compose is executed (I've tried ~/ to use a folder relative to the user's home but got the same error).
Thanks in advance,
PS: If I use an absolute path it works like a charm
I encountered exactly the same issue. It seems that you have done nothing wrong. This is just not yet implemented in Docker: https://github.com/docker/compose/issues/6343
Not very nice for portability...
If you use a named bind mount like this, you must include the full path, e.g.:
volumes:
  esdata1:
    driver: local
    driver_opts:
      type: none
      device: /home/username/project/esdata1
      o: bind
That folder must also exist in advance. This is how the Linux bind mount syscall works, and when you pass flags like this, you are talking directly to Linux without any path expansion by docker or compose.
If you just want to mount the directory from the host, use a host volume instead; the relative path will be expanded by compose:
esdata:
  ...
  volumes:
    - ./esdata1:/usr/share/elasticsearch/data
  ...
While the host volume is easier for portability since the path is automatically expanded, you will lose the feature from named volumes where docker initializes an empty named volume with the image contents (including file permissions/ownership).
