Mount a Windows host directory in compose file version 3 - docker

I am trying to upgrade docker-compose.yml from version 1 to version 3.
My main question is about volumes_from:
To share a volume between services, define it using the top-level volumes option and reference it from each service that shares it using the service-level volumes option.
Simplest example:
version "1"
data:
image: postgres:latest
volumes:
- ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
postgres:
restart: always
image: postgres:latest
volumes_from:
- data
ports:
- "5432:5432"
If I have understood correctly, this should be converted to:
version: "3"
services:
db:
image: postgres:latest
restart: always
volumes:
- db-data:/var/lib/postgresql/data
ports:
- "5432:5432"
networks:
- appn
networks:
appn:
volumes:
db-data:?
Question: In the top-level volumes option, how can I now set a relative path to the folder "example_folder" on the Windows host for "db-data"?

In this instance, you might consider not using volumes_from.
As mentioned in this docker 1.13 issue by Sebastiaan van Stijn (thaJeztah):
The volumes_from is basically a "lazy" way to copy volume definitions from one container to another, so;
docker run -d --name one -v myvolume:/foo image-one
docker run -d --volumes-from=one image-two
Is the same as running;
docker run -d --name one -v myvolume:/foo image-one
docker run -d --name two -v myvolume:/foo image-two
If you are deploying to AWS, you should not use bind-mounts but named volumes instead (as in my example above), for example:
version: "3.0"
services:
db:
image: nginx
volumes:
- uploads-data:/usr/share/nginx/html/uploads/
volumes:
uploads-data:
Which you can run with docker-compose:
$ docker-compose up -d
Creating network "foo_default" with the default driver
Creating volume "foo_uploads-data" with default driver
Creating foo_db_1
Basically, volumes_from is not available in compose file version 3:
There's a couple of reasons volumes_from is not ported to the compose-file "3":
- In a swarm, there is no guarantee that the "from" container is running on the same node, so using volumes_from would not lead to the expected result.
- This is especially the case with bind-mounts, which, in a swarm, have to exist on the host (they are not automatically created).
- There is still a "race" condition (as described earlier).
- The "data" container has to use exactly the same paths for its volumes as the "app" container that consumes them (i.e. if the "app" uses the volume at /some/path/in/container, then the data container also has to have the volume at /some/path/in/container). There are many cases where the volume may be shared by multiple services, and those may be consuming it at different paths; see the sketch below.
But also, as mentioned in issue 19990:
The "regular" volume you're describing is a bind-mount, not a volume; you specify a path from the host, and it's mounted in the container. No data is copied from the container to that path, because the files from the host are used.
For a volume, you're asking docker to create a volume (persistent storage) to store data, and copy the data from the container to that volume.
Volumes are managed by docker (or through a plugin), and the storage path (or mechanism) is an implementation detail; all you're asking for is storage that is managed.
For your question, you would need to define a docker volume container and copy your host content into it:
services:
  data:
    image: "nginx:alpine"
    volumes:
      - ./pg_hba.conf:/var/lib/postgresql/data/pg_hba.conf
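A hedged sketch of the copy step itself, run once from a shell where $(pwd) resolves (e.g. Git Bash on Windows): a throwaway container bind-mounts the host folder read-only and copies it into the named volume. The volume and folder names follow the question; the alpine image and the /source and /target paths are assumptions:
docker run --rm \
  -v db-data:/target \
  -v "$(pwd)/example_folder:/source:ro" \
  alpine cp -a /source/. /target/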

Related

How can I preserve volumes in docker?

I'm using docker-compose down, and my question is: how can I preserve my volumes when I execute this command?
By default, docker-compose down does not remove any volumes unless you add the --volumes (or -v) flag (see the docs). However, you can declare volumes as external, which always prevents them from being deleted:
volumes:
  myapp:
    external: true
You can find this example in official docker volumes documentation.
The docker-compose down command stops containers and removes containers, networks, volumes, and images created by up.
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the networks section of the Compose file
- The default network, if one is used
Networks and volumes defined as external are never removed.
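A quick usage sketch of the difference (assuming a project with named, non-external volumes):
docker-compose down            # removes containers and networks; named volumes survive
docker-compose down --volumes  # additionally removes named volumes declared in the file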
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
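For completeness, a sketch of the full flow (the volume name myapp comes from the compose file above):
docker volume create myapp   # created once, outside of compose
docker-compose up -d         # the service mounts the pre-existing volume
docker-compose down          # the external volume is left untouched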

How fast do the files from a docker image get copied to a named volume after container initialization

I have a stack of containers that are sharing a named volume. The image that contains the files is built to contain code (multiple libraries, thousands of classes).
The issue I am facing is that when I deploy the stack to a docker swarm mode cluster, the containers initialize before the files are fully copied to the volume.
Is there a way to tell that the volume is ready and all files mounted have been copied? I would have assumed that the containers would only get created after the volume is ready, but this does not seem to be the case.
I have an install command that runs in one of the containers sharing that named volume and this fails because the files are not there yet.
version: '3.3'
services:
  php:
    image: code
    volumes:
      - namedvolume:/var/www/html
  web:
    image: nginx
    volumes:
      - namedvolume:/var/www/html
  install:
    image: code
    volumes:
      - namedvolume:/var/www/html
    command: "/bin/bash -c \"somecommand\""
volumes:
  namedvolume:
Or is there something I am doing wrong?
Thanks
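A hedged workaround sketch for the install service, not a confirmed fix: gate the command on a file that is known to ship in the code image, so the step only runs once Docker has finished copying the image contents into the still-empty volume. The marker file index.php is an assumption; substitute any file your image is guaranteed to contain:
  install:
    image: code
    volumes:
      - namedvolume:/var/www/html
    # Wait until a known file from the image appears in the volume,
    # then run the actual install step (marker file is hypothetical).
    command: '/bin/bash -c "while [ ! -f /var/www/html/index.php ]; do sleep 1; done; somecommand"'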

Docker compose not mounting volume?

If I run this command the volume mounts and the container starts as expected with initialized state:
docker run --name gogs --net mk1net --ip 203.0.113.3 -v gogs-data:/data -d gogs/gogs
However, if I run the corresponding docker-compose script, the volume does not mount. The container still starts up, but without the persisted state it should read on startup.
version: '3'
services:
  gogs:
    image: gogs/gogs
    ports:
      - "3000:3000"
    volumes:
      - gogs-data:/data
    networks:
      mk1net:
        ipv4_address: 203.0.113.3
volumes:
  gogs-data:
networks:
  mk1net:
    ipam:
      config:
        - subnet: 203.0.113.0/24
Any ideas?
Looking at your command, the gogs-data volume was defined outside the docker compose file, probably using something like:
docker volume create gogs-data
If so then you need to specify it as external inside your docker compose file like this:
volumes:
  gogs-data:
    external: true
You can also give the external volume a different name and keep using the current volume name inside your docker compose file, to avoid naming conflicts. For example, say your project is about selling cars, so you want the external volume to be called selling-cars-gogs-data but want to keep it simple as gogs-data inside your docker compose file; then you can do this:
volumes:
  gogs-data:
    external:
      name: selling-cars-gogs-data
Or, even better, use an environment variable to set the volume name for a more dynamic docker compose design, like this:
volumes:
  gogs-data:
    external:
      name: "${MY_GOGS_DATA_VOLUME}"
And then start your docker compose like this:
env MY_GOGS_DATA_VOLUME='selling-cars-gogs-data' docker-compose up
Hope this helps. Here is also a link to the docker compose external volumes documentation, in case you want to learn more: https://docs.docker.com/compose/compose-file/#external
You can make pretty much everything external, including container linking to connect to other docker compose containers.
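As one more sketch of "everything external" (the network name here is an assumption), a pre-existing network can be referenced the same way:
networks:
  default:
    external:
      name: my-pre-existing-network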

Docker data volume support on Docker Cloud

In local development you can use docker-compose to attach data volume containers to app/db containers like so:
mongo:
  image: mongo:3
  volumes:
    - data:/data/db
  ports:
    - 27017:27017
    - 28017:28017
volumes:
  data:
This is pretty great and easy. However, if you want to deploy via Docker Cloud, their docker-cloud.yml stack files don't allow for this; they throw an error if you try to define data volume containers.
Are data volume containers not supported in Docker Cloud? How are you supposed to persist data and configurations that need to be mounted into your app/db containers?
The code you've posted is for a Docker Compose file, but Docker Cloud doesn't support it (I'm assuming that you're not working in swarm beta mode).
You need to use a stackfile, which is not a Docker Compose file.
Use code like this, which automatically generates a volume for your service:
mongo:
  image: mongo:3
  volumes:
    - /data/db
  ports:
    - 27017:27017
    - 28017:28017
Follow the Docker Cloud stackfile reference for volumes, and take a look at the Docker Cloud volumes documentation for more information.

Docker stack deploy rolling updates volume issue

I'm running Docker for a production PHP-FPM/Nginx application, and I want to use docker-stack.yml and deploy to a swarm cluster. Here's my file:
version: "3"
services:
app:
image: <MYREGISTRY>/app
volumes:
- app-data:/var/www/app
deploy:
mode: global
php:
image: <MYREGISTRY>/php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
nginx:
image: <MYREGISTRY>/nginx
depends_on:
- php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
ports:
- "80:80"
volumes:
app-data:
My code is in the app container, with an image from my registry.
I want to update my code with docker service update --image <MYREGISTRY>/app:latest, but it's not working: the code is not changed.
I guess it uses the local volume app-data instead.
Is it normal that the new container's data doesn't override the volume data?
Yes, this is the expected behavior. Named volumes are only initialized to the image contents when they are empty (the default state when first created). Updating the volume any time after that point would risk data loss from overwriting or deleting volume data that you explicitly asked to be preserved.
If you need the files to be updated with every new image, then perhaps they shouldn't be in a volume? If you do need these inside a volume, then you may need to create a procedure to update the volumes from the image, e.g. if this were a docker run, you could do:
docker run -v app-data:/target --rm <your_registry>/app cp -a /var/www/app/. /target/.
Otherwise, you can delete the volume, or simply remove all files from the volume, and restart your stack to populate it again.
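For a stack deploy, a hedged sketch of the delete-and-repopulate route (the stack name mystack is an assumption; app-data comes from the compose file above and gets prefixed with the stack name):
docker stack rm mystack
docker volume rm mystack_app-data   # repeat on every node that holds a copy
docker stack deploy -c docker-stack.yml mystack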
I was having the same issue, with app and nginx containers sharing the same volume. My current solution is a deploy script which runs
docker service update --mount-add <mount> <service>
for app and nginx after docker stack deploy. It forces the volume to be updated for the app and nginx containers.
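For reference, a hedged sketch of what such a call could look like (the mystack_ service prefix is an assumption; the syntax follows docker service update's --mount-rm and --mount-add flags, removing the existing mount at the target path before re-adding it):
docker service update \
  --mount-rm /var/www/app \
  --mount-add type=volume,source=app-data,target=/var/www/app \
  mystack_app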
