Docker data volume support on Docker Cloud

In local development you can use docker-compose to attach data volume containers to app/db containers like so:
mongo:
  image: mongo:3
  volumes:
    - data:/data/db
  ports:
    - 27017:27017
    - 28017:28017
volumes:
  data:
This is pretty great and easy. However, if you want to deploy via Docker Cloud, their docker-cloud.yml stack files don't allow for this: they throw an error if you try to define data volume containers.
Are data volume containers not supported in Docker Cloud? How are you supposed to persist data and configurations that need to be mounted into your app/db containers?

The code you've posted is a Docker Compose file, but Docker Cloud doesn't support it (I'm assuming that you're not working in swarm beta mode).
You need to use a stackfile, which isn't a Docker Compose file.
You need to use code like this, which automatically generates a volume for your service:
mongo:
  image: mongo:3
  volumes:
    - /data/db
  ports:
    - 27017:27017
    - 28017:28017
Follow the Docker Cloud stackfile reference for volumes and take a look at the Docker Cloud Volumes documentation for more information about this.

Related

How can I store data with Docker Compose containers?

I have this docker-compose.yml, with a Postgres database and Grafana running on top of it to run queries on the data.
version: "3"
services:
db:
image: postgres
container_name: db
ports:
- "5432:5432"
environment:
- POSTGRES_PASSWORD=my_secret_password
grafana:
image: grafana/grafana
container_name: grafana
depends_on:
- db
ports:
- "3000:3000"
I start this compose with the command docker-compose up, but then, if I don't want to lose any data, I must run docker-compose stop instead of docker-compose down.
I also read about docker commit, but "the commit operation will not include any data contained in volumes mounted inside the container", so I guess it's no use for my needs.
What's the proper way to store the created volumes and reuse them with the up/down commands, even when recreating the containers? Must I use some sort of backup method provided by each image (so, for example, a DB export for Postgres and some other kind of export for Grafana), or is there a way to do this inside docker-compose.yml?
EDIT:
I also read about volumes, but is there a standard way to store everything?
In the link provided by @DannyB, setting the volume to ./postgres-data:/var/lib/postgresql instead of ./postgres-data:/var/lib/postgresql/data caused the container not to store the actual folder.
My question is: must every image follow a particular pattern like the one above? Is the path to the data that the volume should back present in every Docker image's README? Or is there something like:
volumes:
  - ./my_image_root:/
Docker provides volumes as the way to persist data between container invocations and to share data between containers.
They are quite simple to declare and use in compose files:
volumes:
  postgres:
  grafana:
services:
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=my_secret_password
    volumes:
      - postgres:/var/lib/postgresql/data
  grafana:
    image: grafana/grafana
    depends_on:
      - db
    volumes:
      - grafana:/var/lib/grafana
    ports:
      - "3000:3000"
Optionally, you can also set a local directory as your container volume,
with the added convenience of having the files easily accessible not only from inside the container. This is especially helpful for mounting specific config files to their location in the container: you can edit the file locally like any other file and restart the container with the updated configuration (certificates and other similar files also make good use of this option). And you do that like so:
volumes:
  - /home/myusername/postgres_data/:/var/lib/postgresql/data/
PS. I have omitted the container_name and version directives from this compose.yml because (as of Docker 20.10) the Docker Compose spec determines the version automatically, and docker compose exposes enough functionality that accessing the containers directly by short names usually isn't necessary.
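As a quick sanity check of that persistence (a sketch, assuming the compose file above sits in a directory called myproject, which becomes the volume name prefix): the named volumes survive docker-compose down and are reattached on the next up; only down -v deletes them.
docker-compose up -d     # creates volumes myproject_postgres and myproject_grafana if they don't exist
docker-compose down      # removes containers and networks but keeps the named volumes
docker volume ls         # myproject_postgres and myproject_grafana are still listed
docker-compose up -d     # new containers reattach to the existing volumes, so the data is still there
docker-compose down -v   # only this variant also deletes the named volumes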

How can I populate volumes with local content using ECS and docker-compose

The Situation
I am trying to set up a Prometheus / Grafana cluster using AWS ECS. Both Prometheus and Grafana need configuration files. Normally I would use a volume to pass that kind of information to a Docker image.
Since these are two services, I would like to use docker-compose to set them both up and tie them together at once.
The Attempt
Here's the compose file that I would use for a normal docker setup:
version: '3.0'
volumes:
  prometheus_data: {}
  grafana_data: {}
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--storage.tsdb.path=/prometheus'
    ports:
      - 9090:9090
  grafana:
    image: grafana/grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    ports:
      - 3000:3000
This does not appear to actually work when I run ecs-cli compose service up. Specifically, the tasks start but then crash, and I'm not seeing any evidence that the configuration files were actually injected.
This guide explains how to set up a Prometheus image on ECS, but it is actually creating a configured docker image and publishing that image -- it's not using docker compose.
The Question
Is there a way to inject files (e.g. config files) from my local computer into my ECS images / tasks using docker-compose?
Docker containers should be treated differently when it comes to ECS. The docker-compose file above is fine for working with a local setup, but with ECS I would not recommend going for mounting.
Instead, I would recommend putting the config file into the Docker image, for example:
FROM prom/prometheus
COPY myconfig.yml /etc/prometheus/prometheus.yml
Also, I would prefer ECR as the Docker registry in AWS.
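For example, a rough sketch of building such an image and pushing it to ECR (the account ID 123456789012, region us-east-1, and repository name my-prometheus are placeholders, and the ECR repository is assumed to already exist):
docker build -t my-prometheus .
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-prometheus:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-prometheus:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-prometheus:latest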
The disadvantages of mounting in the case of ECS:
You will need to keep the config on the EC2 instance.
You will not be able to use it with Fargate, as there is no server to manage in Fargate.
You will be dependent on the AMI in the case of auto-scaling, as your Docker container depends on the config.

Docker secrets within a docker volume

I am trying to set up a Docker-based Jenkins instance. Essentially, I run the jenkins/jenkins:lts image as a container and mount a data volume to persist the data Jenkins will create.
Now, what I would like to do is share the host's ssh keys with this Jenkins instance. It's probably due to my limited Docker knowledge, but my problem is I don't know how I can mount additional files/directories into my volume, and Jenkins requires that I put ssh keys within /var/jenkins_home/.ssh.
I tried naively creating the directories in Dockerfile and then mounting them with docker-compose. It failed, as you might expect, since the volume is the one containing Jenkins' home directory data, not the Jenkins container itself.
I have the following docker-compose.yml (not working, for the reasons mentioned above):
version: '3.1'
services:
  jenkins:
    restart: always
    build: ./jenkins
    environment:
      VIRTUAL_HOST: ${NGINX_VIRTUAL_HOST}
      VIRTUAL_PORT: 8080
      JAVA_OPTS: -Djenkins.install.runSetupWizard=false
      TZ: America/New_York
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_data:/var/jenkins_home
    networks:
      - web
      - proxy
    healthcheck:
      test: ["CMD", "curl --fail http://${NGINX_VIRTUAL_HOST}/ || exit 1"]
      interval: 1m
      timeout: 10s
      retries: 3
    secrets:
      - host_ssh_key
volumes:
  jenkins_data:
networks:
  web:
    driver: bridge
  proxy:
    external:
      name: nginx-proxy
secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
My question is: is there any way I could get this secret into my data volume?
I know this is a fairly old thread, but a lot of people get stuck on this (including me), and the answer below saying it can't be done is simply not true. You can indeed use secrets with docker-compose without using Swarm, provided it's a local machine or the secrets file is mounted on the host. Not saying this is secure or desirable, just that it can be done. One of the best explanations of the several ways this is possible is this blog:
Using Docker Secrets during Development
Below is an example of parts of a docker compose file used to add an API key to a Spring application. The key is then available at /run/secrets/captcha-api-key inside the Docker container. Docker compose "fakes" it by literally binding the file as a mount, which can then be accessed in whatever way. It's not secure, as in the file is still there and visible to everyone with access to /run/secrets, but it's definitely doable as a workaround. Great for dev servers, but I would not do it in production:
version: '3.6'
services:
  myapp:
    image: mmyapp
    restart: always
    secrets:
      - captcha-api-key
secrets:
  captcha-api-key:
    file: ./captcha_api_key.txt
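As a quick check (a sketch, run from the directory holding that compose file and captcha_api_key.txt, and assuming the image has a shell with cat), you can confirm the bind-mounted secret is visible inside the container:
docker-compose up -d
docker-compose exec myapp cat /run/secrets/captcha-api-key   # prints the contents of ./captcha_api_key.txt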
EDIT: Besides that, one can simply run a one-node swarm, which is only a tiny bit more on the resources, and use secrets the way they are intended. Provided the images are already built, "docker stack deploy -c mydocker-composefile.yml mystackname" will do mostly the same as old docker compose did. Note though that the yml file must be written against version 3 or higher of the specification.
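A minimal sketch of that one-node swarm route, assuming the compose file above is named docker-compose.yml and no swarm is initialized yet:
docker swarm init                                  # turn this single machine into a one-node swarm
docker stack deploy -c docker-compose.yml mystack  # file-backed secrets are now created and mounted the intended way
docker stack rm mystack                            # tear the stack down again when finished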
Here is a short but concise write-up on compose vs swarm: The Difference Between Docker Compose And Docker Stack
Mount the secret as shown below and try:
secrets:
  - source: host_ssh_key
    target: /var/jenkins_home/.ssh/id_rsa
    mode: 0600
It can't be done. Secrets will only work with docker swarm; docker-compose is unable to use secrets.
More details in this GitHub issue.

Mount a windows host directory in compose file version 3

I am trying to upgrade docker-compose.yml from version 1 to version 3.
My main question is about:
volumes_from: To share a volume between services, define it using the top-level volumes option and reference it from each service that shares it using the service-level volumes option.
Simplest example:
version "1"
data:
image: postgres:latest
volumes:
- ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
postgres:
restart: always
image: postgres:latest
volumes_from:
- data
ports:
- "5432:5432"
If I have understood correctly, it should be converted to:
version: "3"
services:
db:
image: postgres:latest
restart: always
volumes:
- db-data:/var/lib/postgresql/data
ports:
- "5432:5432"
networks:
- appn
networks:
appn:
volumes:
db-data:?
Question: how can I now, in the top-level volumes option, set a relative path to the folder "example_folder" on the Windows host for "db-data"?
In this instance, you might consider not using volumes_from.
As mentioned in this docker 1.13 issue by Sebastiaan van Stijn (thaJeztah):
volumes_from is basically a "lazy" way to copy volume definitions from one container to another, so:
docker run -d --name one -v myvolume:/foo image-one
docker run -d --volumes-from=one image-two
is the same as running:
docker run -d --name one -v myvolume:/foo image-one
docker run -d --name two -v myvolume:/foo image-two
If you are deploying to AWS you should not use bind-mounts, but use named volumes instead (as in my example above), for example:
version: "3.0"
services:
db:
image: nginx
volumes:
- uploads-data:/usr/share/nginx/html/uploads/
volumes:
uploads-data:
Which you can run with docker-compose:
docker-compose up -d
Creating network "foo_default" with the default driver
Creating volume "foo_uploads-data" with default driver
Creating foo_db_1
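If you want to see where that named volume actually lives on the host, you can inspect it (the foo_ prefix comes from the project directory name shown in the output above):
docker volume ls                         # foo_uploads-data appears among the named volumes
docker volume inspect foo_uploads-data   # shows the driver and the Mountpoint on the host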
Basically, it is not available in docker compose version 3:
There are a couple of reasons volumes_from was not ported to the compose-file format "3":
In a swarm, there is no guarantee that the "from" container is running on the same node. Using volumes_from would not lead to the expected result.
This is especially the case with bind-mounts, which, in a swarm, have to exist on the host (are not automatically created)
There is still a "race" condition (as described earlier)
The "data" container has to use exactly the right paths for volumes as the "app" container that uses the volumes (i.e. if the "app" uses the volume in /some/path/in/container, then the data container also has to have the volume at /some/path/in/container). There are many cases where the volume may be shared by multiple services, and those may be consuming the volume in different paths.
But also, as mentioned in issue 19990:
The "regular" volume you're describing is a bind-mount, not a volume; you specify a path from the host, and it's mounted in the container. No data is copied from the container to that path, because the files from the host are used.
For a volume, you're asking docker to create a volume (persistent storage) to store data, and copy the data from the container to that volume.
Volumes are managed by docker (or through a plugin) and the storage path (or mechanism) is an implementation detail, as all you're asking for is storage that is managed.
For your question, you would need to define a docker volume container and copy your host content into it:
services:
  data:
    image: "nginx:alpine"
    volumes:
      - ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf

Docker stack deploy rolling updates volume issue

I'm running Docker for a production PHP-FPM/Nginx application, and I want to use docker-stack.yml and deploy to a swarm cluster. Here's my file:
version: "3"
services:
app:
image: <MYREGISTRY>/app
volumes:
- app-data:/var/www/app
deploy:
mode: global
php:
image: <MYREGISTRY>/php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
nginx:
image: <MYREGISTRY>/nginx
depends_on:
- php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
ports:
- "80:80"
volumes:
app-data:
My code is in the app container, with the image coming from my registry.
I want to update my code with docker service update --image <MYREGISTRY>/app:latest, but it's not working: the code is not changed.
I guess it uses the local volume app-data instead.
Is it normal that the new container's data doesn't override the volume data?
Yes, this is the expected behavior. Named volumes are only initialized to the image contents when they are empty (the default state when first created). Updating the volume any time after that point would risk data loss from overwriting or deleting volume data that you explicitly asked to be preserved.
If you need the files to be updated with every new image, then perhaps they shouldn't be in a volume? If you do need these inside a volume, then you may need to create a procedure to update the volumes from the image, e.g. if this were a docker run, you could do:
docker run -v app-data:/target --rm <your_registry>/app cp -a /var/www/app/. /target/.
Otherwise, you can delete the volume, or simply remove all files from the volume, and restart your stack to populate it again.
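For that last option, a rough sequence could look like this (a sketch, assuming the stack was deployed as mystack and that losing the current contents of the volume is acceptable):
docker stack rm mystack                           # stop the services so the volume is no longer in use
docker volume rm mystack_app-data                 # repeat on every node that holds a copy of the named volume
docker stack deploy -c docker-stack.yml mystack   # the new tasks repopulate the empty volume from the image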
I was having the same issue: I have app and nginx containers sharing the same volume. My current solution is a deploy script which runs
docker service update --mount-add <mount> <service>
for app and nginx after docker stack deploy. It forces the volume to be updated for the app and nginx containers.
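The answer doesn't spell out the exact flags, but a hedged guess at the shape of such an update, assuming the stack above was deployed as mystack (so the volume is named mystack_app-data), might be:
docker service update --mount-rm /var/www/app --mount-add type=volume,source=mystack_app-data,target=/var/www/app mystack_app
docker service update --mount-rm /var/www/app --mount-add type=volume,source=mystack_app-data,target=/var/www/app mystack_nginx
Removing and re-adding the mount forces a rolling update of the service's tasks, which is presumably the effect the deploy script relies on.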
