How can I preserve volumes in docker?

I'm using docker-compose down, and my question is: how can I preserve my volumes when I execute this command?

By default, docker-compose down should not remove any volume unless you add the --volumes (or -v) flag (see the docs). However, you can mark volumes as external, which always prevents them from being deleted:
volumes:
  myapp:
    external: true
You can find this example in the official Docker volumes documentation.

The docker-compose down command stops containers and removes containers, networks, volumes, and images created by up.
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the networks section of the Compose file
- The default network, if one is used
Networks and volumes defined as external are never removed.
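For example (a minimal sketch of the two behaviors):
docker-compose down             # containers and networks are removed; named volumes survive
docker-compose down --volumes   # also removes named volumes declared in the Compose file
                                # and anonymous volumes attached to containers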
A volume may be created directly outside of compose with docker volume create and then referenced inside docker-compose.yml as follows:
version: "3.9"
services:
frontend:
image: node:lts
volumes:
- myapp:/home/node/app
volumes:
myapp:
external: true
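As a quick check (a sketch using the myapp volume from above), you can create the volume up front and confirm that docker-compose down leaves it alone:
docker volume create myapp    # create the external volume before the first up
docker-compose up -d          # compose attaches to the existing volume
docker-compose down           # removes containers and networks only
docker volume ls              # myapp is still listed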

Related

How to find all unnamed volumes

I've got this docker compose file:
version: '2'
services:
  elasticsearch:
    image: 'elasticsearch:7.9.1'
    environment:
      - discovery.type=single-node
    ports:
      - '9200:9200'
      - '9300:9300'
    volumes:
      - /var/lib/docker/volumes/elastic_search_volume:/usr/share/elasticsearch/data:rw
When I run:
docker volume ls
I see no results. How to list unnamed volumes?
docker volume ls, as you've shown it, will list all of the volumes that exist.
However, in the docker-compose.yml file you show, you're not creating a named or anonymous volume. Instead, you're creating a bind mount to connect a host directory to the container filesystem space. These aren't considered "volumes" in a technical Docker sense, and a docker volume command won't show or manipulate those.
Reaching directly into /var/lib/docker usually isn't a best practice. It's better to ask Docker Compose to manage the named volume for you:
version: '2'
services:
  elasticsearch:
    volumes:
      # No absolute host path, just the volume name
      - elastic_search_volume:/usr/share/elasticsearch/data:rw
volumes:
  elastic_search_volume:
    # Without the next line, Compose will create the volume for you.
    # With it, Compose expects the volume to already exist; you may
    # need to manually `docker volume create elastic_search_volume`.
    # external: true
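To see what a container actually has mounted, whether volumes or bind mounts, you can inspect its Mounts section (a quick sketch; elasticsearch_1 is a hypothetical container name):
docker inspect -f '{{ json .Mounts }}' elasticsearch_1
# each entry carries a "Type" field: "volume" for named or anonymous volumes,
# "bind" for host-directory mounts like the one in the original file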

How fast do the files from a docker image get copied to a named volume after container initialization

I have a stack of containers that are sharing a named volume. The image that contains the files is built to contain code (multiple libraries, thousands of classes).
The issue I am facing is that when I deploy the stack to a docker swarm mode cluster, the containers initialize before the files are fully copied to the volume.
Is there a way to tell that the volume is ready and all files mounted have been copied? I would have assumed that the containers would only get created after the volume is ready, but this does not seem to be the case.
I have an install command that runs in one of the containers sharing that named volume and this fails because the files are not there yet.
version: '3.3'
services:
  php:
    image: code
    volumes:
      - namedvolume:/var/www/html
  web:
    image: nginx
    volumes:
      - namedvolume:/var/www/html
  install:
    image: code
    volumes:
      - namedvolume:/var/www/html
    command: "/bin/bash -c \"somecommand\""
volumes:
  namedvolume:
Or is there something I am doing wrong?
Thanks
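One possible workaround, not confirmed by this thread: have the install service poll the volume for a file it knows the image ships before running its command (a minimal sketch; index.php is a hypothetical marker file):
install:
  image: code
  volumes:
    - namedvolume:/var/www/html
  # wait until the marker file shows up in the shared volume, then run the install step
  command: /bin/bash -c "until [ -f /var/www/html/index.php ]; do sleep 1; done; somecommand"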

Docker compose not mounting volume?

If I run this command the volume mounts and the container starts as expected with initialized state:
docker run --name gogs --net mk1net --ip 203.0.113.3 -v gogs-data:/data -d gogs/gogs
However, if I run the corresponding docker-compose script, the volume does not mount. The container still starts up, but without the state it should read on startup.
version: '3'
services:
  gogs:
    image: gogs/gogs
    ports:
      - "3000:3000"
    volumes:
      - gogs-data:/data
    networks:
      mk1net:
        ipv4_address: 203.0.113.3
volumes:
  gogs-data:
networks:
  mk1net:
    ipam:
      config:
        - subnet: 203.0.113.0/24
Any ideas?
Looking at your command, the gogs-data volume was defined outside the docker compose file, probably using something like:
docker volume create gogs-data
If so then you need to specify it as external inside your docker compose file like this:
volumes:
  gogs-data:
    external: true
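You can confirm the volume exists before bringing the stack up (a quick check):
docker volume inspect gogs-data   # fails with an error if the volume has not been created yet
docker-compose up -d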
You can also give the external volume a different name and keep using the current name inside your docker compose file to avoid naming conflicts. For example, say your project is about selling cars, so you want the external volume to be called selling-cars-gogs-data but want to keep it simple as gogs-data inside your docker compose file; then you can do this:
volumes:
  gogs-data:
    external:
      name: selling-cars-gogs-data
Or, even better, use an environment variable to set the volume name for a more dynamic docker compose design, like this:
volumes:
  gogs-data:
    external:
      name: "${MY_GOGS_DATA_VOLUME}"
And then start your docker compose like this:
env MY_GOGS_DATA_VOLUME='selling-cars-gogs-data' docker-compose up
Hope this helps. Here is also a link to the docker compose external volumes documentation in case you want to learn more: https://docs.docker.com/compose/compose-file/#external
You can make pretty much everything external, including container linking to connect to other docker compose containers.
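For example, the same external pattern works for networks; a sketch reusing the mk1net network from the question:
networks:
  mk1net:
    external: true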

Docker-compose recreating containers, lost data

In my attempt to extract some logs from a container, I edited my docker-compose.yml, adding an extra mount pointing to those logs.
After running docker-compose up and recreating the respective image, I found that all of the log files were gone, as the container had been completely replaced (something which is quite obvious to me now).
Is there a way to recover the old container?
Also: the docker volumes live under /var/lib/docker/volumes/, but where are the root filesystems of the containers?
Here is a snippet of the docker-compose:
version: '3.3'
services:
  some_app:
    image: some_image:latest
    restart: always
    volumes:
      - some_image_logs:/var/log
volumes:
  some_image_logs: {}
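As for the second question: with the overlay2 storage driver you can ask Docker where a container's root filesystem layers live (a sketch; some_app_1 is a hypothetical container name). Note these paths are an implementation detail and disappear once the container is removed:
docker inspect -f '{{ json .GraphDriver.Data }}' some_app_1
# prints the LowerDir/UpperDir/MergedDir/WorkDir paths under /var/lib/docker/overlay2/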

Mount a windows host directory in compose file version 3

I am trying to upgrade docker-compose.yml from version 1 to version 3.
My main question is about volumes_from:
"To share a volume between services, define it using the top-level volumes option and reference it from each service that shares it using the service-level volumes option."
Simplest example:
version "1"
data:
image: postgres:latest
volumes:
- ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
postgres:
restart: always
image: postgres:latest
volumes_from:
- data
ports:
- "5432:5432"
If I have understood correctly, it should be converted to:
version: "3"
services:
db:
image: postgres:latest
restart: always
volumes:
- db-data:/var/lib/postgresql/data
ports:
- "5432:5432"
networks:
- appn
networks:
appn:
volumes:
db-data:?
Question: how can I now, in the top-level volumes option, set a relative path to the folder "example_folder" on the Windows host for "db-data"?
In this instance, you might consider not using volumes_from.
As mentioned in this docker 1.13 issue by Sebastiaan van Stijn (thaJeztah):
The volumes_from is basically a "lazy" way to copy volume definitions from one container to another, so:
docker run -d --name one -v myvolume:/foo image-one
docker run -d --volumes-from=one image-two
is the same as running:
docker run -d --name one -v myvolume:/foo image-one
docker run -d --name two -v myvolume:/foo image-two
If you are deploying to AWS you should not use bind-mounts, but use named volumes instead (as in my example above), for example:
version: "3.0"
services:
  db:
    image: nginx
    volumes:
      - uploads-data:/usr/share/nginx/html/uploads/
volumes:
  uploads-data:
Which you can run with docker-compose:
docker-compose up -d
Creating network "foo_default" with the default driver
Creating volume "foo_uploads-data" with default driver
Creating foo_db_1
Basically, it is not available in docker compose version 3:
There are a couple of reasons volumes_from is not ported to the compose-file "3":
- In a swarm, there is no guarantee that the "from" container is running on the same node, so using volumes_from would not lead to the expected result.
- This is especially the case with bind-mounts, which, in a swarm, have to exist on the host (they are not automatically created).
- There is still a "race" condition (as described earlier).
- The "data" container has to use exactly the right paths for volumes as the "app" container that uses the volumes (i.e. if the "app" uses the volume in /some/path/in/container, then the data container also has to have the volume at /some/path/in/container). There are many cases where the volume may be shared by multiple services, and those may be consuming the volume in different paths.
But also, as mentioned in issue 19990:
The "regular" volume you're describing is a bind-mount, not a volume; you specify a path from the host, and it's mounted in the container. No data is copied from the container to that path, because the files from the host are used.
For a volume, you're asking docker to create a volume (persistent storage) to store data, and copy the data from the container to that volume.
Volumes are managed by docker (or through a plugin) and the storage path (or mechanism) is an implementation detail; all you're asking for is storage that is managed.
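In docker run terms, the difference looks like this (a small illustration with hypothetical names):
docker run -d -v /host/logs:/var/log image    # bind mount: the host directory is used as-is
docker run -d -v logs-data:/var/log image     # named volume: on first use, docker copies the
                                              # image's /var/log contents into the volume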
For your question, you would need to define a docker volume container and copy your host content into it:
services:
  data:
    image: "nginx:alpine"
    volumes:
      - ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
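If you just need the Windows host folder from the original question and are not deploying to a swarm, a service-level bind mount with a relative path still works in version 3 (a sketch; example_folder is the asker's folder):
version: "3"
services:
  db:
    image: postgres:latest
    volumes:
      # relative host path, resolved against the compose file's directory
      - ./example_folder:/var/lib/postgresql/data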
