Docker volume name changed when running stack after changing directory name - docker

I am experimenting with Docker on Windows and creating a stack for it.
I just found that when I run docker-compose up -d, Docker volumes are created with names like foldername_volumename.
I have a working app for the stack under one folder and just want to change the folder name. But I found that after changing it, I can no longer use the volume that was previously used.
I have configuration and data that I will lose if I have to move to another volume name.
Is there any way to reuse the same volume while still being able to change the folder name?
What is the best practice?

You can use external: true to let Docker Compose know that it does not need to create the volume because it already exists (and therefore the folder name will not be prepended):
version: '3.2'
volumes:
  mydata:
    external: true
services:
  test:
    image: alpine
    volumes:
      - mydata:/data
External volumes documentation
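Since the volume is marked external, Compose expects it to exist before the stack starts; otherwise docker-compose up fails. A minimal sketch of creating it up front, using the mydata name from the example above:

docker volume create mydata
docker-compose up -d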

The volume name is based on the project name. By default the project name is taken from the containing folder's name, but you can override it with docker-compose -p yourprojectname. If you do that, you get consistent volume names regardless of the containing folder's name.
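As a rough illustration (the project name myproject is just an example), either of the following keeps the volume name stable no matter what the folder is called:

# explicit project name on the command line
docker-compose -p myproject up -d

# or pin it via the environment (COMPOSE_PROJECT_NAME can also live in a .env
# file next to docker-compose.yml)
COMPOSE_PROJECT_NAME=myproject docker-compose up -d

With either approach the volume is named myproject_volumename instead of foldername_volumename.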

Related

Exclude sub-folder when mounting host to volume docker [duplicate]

Suppose I have a Docker container and a folder on my host, /hostFolder. If I want to add this folder to the Docker container as a volume, I can do so either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (read-write or read-only doesn't matter, both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without those changes showing up on the host as well.
Is this possible? If so, how?
Using docker-compose, I'm able to use node_modules locally but ignore it in the Docker container, with the following syntax in docker-compose.yml:
volumes:
  - './angularApp:/opt/app'
  - /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mount volume at /opt/app/node_modules/, which is now an empty directory, even if ./angularApp/node_modules on my local machine is not empty.
If you want subdirectories to be ignored by docker-compose but still persistent, you can do the following in docker-compose.yml:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent docker volume in place of your local node_modules directory. This is similar to the answer by #kernix, but this will allow node_modules to persist between docker-compose up runs, which is likely the desired behavior.
For those trying to get a nice workflow going where node_modules isn't overridden by the local directory, this might help.
Change your docker-compose file to mount an anonymous persistent volume at node_modules to prevent your local directory from overriding it. This has been outlined in this thread a few times.
services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without this, if you added a new package and rebuilt your image, the container would still be using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving
data from the previous containers.
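For illustration, assuming the compose file above, a typical rebuild cycle might look like this (just a sketch):

# rebuild the image and recreate anonymous volumes so the freshly installed
# node_modules from the image are used
docker-compose up -d --build -V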
To exclude a file, use the following:
volumes:
  - /hostFolder:/folder
  - /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
  --mount type=bind,src=/hostFolder,dst=/containerFolder \
  --mount type=volume,dst=/containerFolder/subFolder \
  ...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
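For reference, a rough -v equivalent of the --mount invocation above; the second -v with only a container path creates an anonymous volume that masks the sub-folder:

docker run \
  -v /hostFolder:/containerFolder \
  -v /containerFolder/subFolder \
  ...other-args...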
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes, you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you could COPY all the files then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to the image with COPY or ADD must be inside the build context, i.e. in or below the directory you run docker build from.
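As a sketch of the two approaches mentioned above (the paths and base image are placeholders, assuming docker build is run from /hostFolder):

# .dockerignore - keeps the sub-folder out of the build context entirely
subFolder/

# Dockerfile - alternatively, copy everything and then delete the sub-folder
FROM alpine
COPY . /containerFolder/
RUN rm -rf /containerFolder/subFolder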
For the people who also had the issue that the node_modules folder was still being overwritten from the local system (and the other way around):
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/
This is the solution, with the trailing / after node_modules being the fix.
Looks like the old solution doesn't work anymore (at least for me).
Creating an empty folder and mapping the target folder to it helped, though.
volumes:
  - ./angularApp:/opt/app
  - .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create a named volume called "exclude" in the Docker volume manager. The volume name "exclude" is arbitrary, so you can use a custom name instead of exclude.
services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the named volume below prevents the host system's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/
volumes:
  exclude:
You can see more info about volumes in the official docs - Use a volume with docker compose.
To exclude a mounted file contained in a volume on your machine, you have to overwrite it by allocating a volume to that same file.
In your config file:
services:
  server:
    build: .
    volumes:
      - .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file

Setting volumes in docker-compose

I need a way to configure docker-compose to create a volume if it's missing, or use it if it already exists.
I need it to be persistent between versions, but I cannot guarantee it will be set up during initial configuration.
volumes:
  my_volume:
    external: true
I need to mount a Docker volume, not a host directory. Something like:
-v my_volume:/my_files
What's the best solution for such a use case?
You can use a volume for each application or service you set in the docker-compose file. For instance, I set volumes for my nginx server like this:
volumes:
  - ./web/public:/srv/www/static
  - ./default.conf:/etc/nginx/conf.d/default.conf
The path on the left side of each colon is the file or folder on the host that I want to mount into the container, and the path on the right side is where those files appear inside the container.
When you bring the stack up for the first time, the volume is created if it doesn't exist; otherwise the existing one is used.
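If you specifically need a named Docker volume rather than a host path (as in the question's -v my_volume:/my_files), a minimal sketch that Compose will create on the first run and reuse afterwards (the service name and image are placeholders):

services:
  app:
    image: myimage
    volumes:
      - my_volume:/my_files
volumes:
  # declared without external: true, Compose creates the volume (prefixed with
  # the project name) if it is missing and reuses it on later runs
  my_volume: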
Hope this helps.

Docker-compose and volumes

When I create a volume manually and include it in docker-compose, if I don't prefix the volume name with docker_, docker-compose creates a new volume prefixed with docker_.
For example:
I create a volume with:
docker volume create myvolume
It's visible at /var/lib/docker/volumes/myvolume.
I include it in my docker-compose YAML file, but when I run docker-compose, a new volume is created at /var/lib/docker/volumes/docker_myvolume.
If I call my volume docker_myvolume and include that in my docker-compose YAML, it uses it and doesn't create its own.
Is this normal behavior?
Yes, this is normal behavior. When you reference a volume in your docker-compose.yml by its bare name, Docker Compose creates a new volume whose name is prefixed with the project name. By default the project name is the name of the directory containing the Compose file (docker in your case), which is where the docker_ prefix comes from.
You can specify a volume in your docker-compose.yml file with the external option to tell Docker Compose to use an existing volume instead of creating a new one. For example:
version: '3'
services:
  myservice:
    volumes:
      - type: volume
        source: myvolume
        target: /app/data
volumes:
  myvolume:
    external: true
This will tell Docker Compose to use the existing volume myvolume instead of creating a new one.
Yes, Compose generally prefixes things with its project name. This includes containers, networks, and named volumes. In general, if you actually need to interact with these things, there is an equivalent docker-compose command that chooses the correct name (e.g., docker-compose exec).
In general you shouldn't be directly modifying things inside /var/lib/docker. That directory tree is Docker's private state and there are no particular guarantees about the format of files there. If your use case involves directly interacting with the volume files from the host, either use a /host/path:/container/path bind mount or explicitly specify the storage location using volume options.
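For the second suggestion, a hedged sketch using the local driver's bind options so that a named volume points at an explicit host path (the path /srv/myvolume-data is just an example and must already exist on the host):

volumes:
  myvolume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/myvolume-data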

Add bind mount to Dockerfile just like volume

I want to add a bind mount in the Dockerfile, just like I can initialise a volume inside the Dockerfile.
Is there any way to do it?
A Dockerfile defines how an image is built, not how it's used - so you can't specify the bind mount in a Dockerfile. Try using docker-compose instead. A simple docker-compose.yml that mounts a directory for you would look like this:
version: '3.1'
services:
  mycontainer:
    image: myimage
    build: .
    volumes:
      - './path/on/docker/host:/path/inside/container'
The build: . is optional if you're building the image by some other means, but sometimes it's handy to do it all in one.
Run this with docker-compose up -d
In addition to what the other answers say:
Because bind mounts provide access to the host filesystem, allowing them to be embedded into an image would be a huge security risk. Consider an image that purports to be, say, a web server, but in fact bind mounts your /etc/passwd and /etc/shadow and then sends them off to a remote server.
Or one that bind mounts /lib/ld-linux.so and then overwrites it, thus breaking your entire system.
For these reasons, you cannot embed a bind mount in your Dockerfile. Similarly, you cannot specify host port mappings, host device access, or any other similar attributes in the Dockerfile.
The simple answer is no.
A basic design principle for Docker images is portability. Bind mounts are host-specific, since the mounted folder is defined on the host machine, and this contradicts the portability requirement for Docker images.

How do named volumes work in docker?

I'm struggling to understand how exactly the named volume works in the following example from the Docker docs:
version: "3"
services:
  db:
    image: db
    volumes:
      #1
      - data-volume:/var/lib/db
  backup:
    image: backup-service
    volumes:
      #2
      - data-volume:/var/lib/backup/data
volumes:
  data-volume:
My guess is, that the first occurrence of the named volume (#1) defines what is contained inside the volume, while subsequent occurrences (#2) simply share the volume's content with whatever containers they are referenced from.
Is this guess correct?
Listing data-volume: under the top-level volumes: key creates a named volume on the host if it doesn't exist yet. According to this source, it behaves the following way:
If you create a named volume by running a new container from an image with docker run -v my-precious-data:/data imageName, the data within the image under /data will be copied into the named volume.
If you create another container that binds to an existing named volume, no files from the new image/container will be copied or overwritten; it will use the existing data inside the named volume.
There is no docker command to back up / export a named volume, but you can find the actual location of the files with docker volume inspect [volume-name] (one common workaround is sketched after this answer).
In case the volume is empty and both containers have data in the target directory, the first container to run will copy its data into the volume and the other container will see that data (and not its own). I don't know which container runs first (although I expect they start from top to bottom), but you can force an order with depends_on as shown here.
------------------- Update
The depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
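As a sketch of that common workaround for the missing backup command, a throwaway container can tar the volume's contents (the volume name data-volume matches the example above; the archive path is arbitrary):

# see where the volume lives on the host
docker volume inspect data-volume

# archive the volume's contents into backup.tar.gz in the current directory
docker run --rm \
  -v data-volume:/volume:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/backup.tar.gz -C /volume .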
The way that I understand your guess, you are not completely correct.
Declaring and referencing a named volume in a docker-compose file will create an empty volume, which may then be accessed and shared by the services that reference it in their volumes section.
If you want to share a named volume, you have to declare it in the top-level volumes section of your docker-compose file. Example (from the Docker docs you already linked):
version: "3"
services:
  db:
    image: db
    volumes:
      #1 uses the named and shared volume 'data-volume' created at #3
      - data-volume:/var/lib/db
  backup:
    image: backup-service
    volumes:
      #2 uses the named and shared volume 'data-volume' created at #3
      - data-volume:/var/lib/backup/data
volumes:
  #3 creates the named volume 'data-volume'
  data-volume:
The volume will be empty on start (and therefore so will the folders in the containers where that volume is mounted); its content will be the result of the services' actions at runtime.
Hope that made it a bit more clear.
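To see the sharing behavior in action, a rough check might look like this (assuming both example images actually contain a shell, which is not guaranteed for the placeholder images from the docs):

docker-compose up -d
# write a file through the db service...
docker-compose exec db sh -c 'echo hello > /var/lib/db/test.txt'
# ...and read it back through the backup service
docker-compose exec backup cat /var/lib/backup/data/test.txt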
