Docker secrets within a Docker volume

I am trying to set up a Docker-based Jenkins instance. Essentially, I run the jenkins/jenkins:lts image as a container and mount a data volume to persist the data Jenkins will create.
Now, what I would like to do is share the host's ssh keys with this Jenkins instance. It's probably due to my limited Docker knowledge, but my problem is that I don't know how to mount additional files/directories into my volume, and Jenkins requires that I put the ssh keys within /var/jenkins_home/.ssh.
I tried naively creating the directories in the Dockerfile and then mounting them with docker-compose. It failed, as you might expect, since the volume is the one containing Jenkins' home directory data, not the Jenkins container itself.
I have the following docker-compose.yml (not working, for the reasons mentioned above):
version: '3.1'

services:
  jenkins:
    restart: always
    build: ./jenkins
    environment:
      VIRTUAL_HOST: ${NGINX_VIRTUAL_HOST}
      VIRTUAL_PORT: 8080
      JAVA_OPTS: -Djenkins.install.runSetupWizard=false
      TZ: America/New_York
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_data:/var/jenkins_home
    networks:
      - web
      - proxy
    healthcheck:
      test: ["CMD", "curl --fail http://${NGINX_VIRTUAL_HOST}/ || exit 1"]
      interval: 1m
      timeout: 10s
      retries: 3
    secrets:
      - host_ssh_key

volumes:
  jenkins_data:

networks:
  web:
    driver: bridge
  proxy:
    external:
      name: nginx-proxy

secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa
My question is: is there any way I could get this secret within my data volume?

I know this is a fairly old thread, but a lot of people (including me) get stuck on this, and the claim below is simply not true. You can indeed use secrets with docker-compose without using Swarm, provided it's a local machine or the secrets file is available on the host. I'm not saying this is secure or desirable, just that it can be done. One of the best explanations of the several ways this is possible is this blog:
Using Docker Secrets during Development
Below is an example of the relevant parts of a docker-compose file used to add an API key to a Spring application. The key is then available at /run/secrets/captcha-api-key inside the Docker container. Docker Compose "fakes" it by simply bind-mounting the file, which can then be read like any other file. It's not secure in the sense that the file is still there, visible to anyone with access to /run/secrets, but it's definitely doable as a workaround. Great for dev servers, but I would not do it in production:
version: '3.6'

services:
  myapp:
    image: mmyapp
    restart: always
    secrets:
      - captcha-api-key

secrets:
  captcha-api-key:
    file: ./captcha_api_key.txt
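To check that the bind worked, you can read the file from inside the running container. A quick sanity check (assuming the stack was started with docker-compose up -d and the service is named myapp, as above) might be:

docker-compose exec myapp ls -l /run/secrets/
docker-compose exec myapp cat /run/secrets/captcha-api-key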
EDIT: Besides that, one can simply run a one-node swarm, which costs only a tiny bit more in resources, and use secrets the way they are intended. Provided the images are already built, "docker stack deploy -c mydocker-composefile.yml mystackname" will do mostly the same as the old docker-compose did. Note though that the yml file must be written against the version 3 (or higher) specification.
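For the one-node-swarm route, a minimal sketch of the commands might look like this (it assumes the compose file is named docker-compose.yml, uses a version 3+ schema, and that the images are already built):

docker swarm init                                    # turn this host into a single-node swarm
docker stack deploy -c docker-compose.yml mystackname
docker stack services mystackname                    # verify the services came up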
Here is a short but concise write-up on compose vs swarm; The Difference Between Docker Compose And Docker Stack

Mount the secret with an explicit target path and mode, like this, and try:
secrets:
  - source: host_ssh_key
    target: /var/jenkins_home/.ssh/id_rsa
    mode: 0600
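Put together with the rest of the file from the question, a minimal sketch might look like the following. Note that the secret is mounted over that path at runtime rather than copied into the jenkins_data volume, and whether an absolute target path and the mode field are honored may depend on whether you deploy as a swarm stack or with plain docker-compose:

version: '3.1'

services:
  jenkins:
    build: ./jenkins
    volumes:
      - jenkins_data:/var/jenkins_home
    secrets:
      - source: host_ssh_key
        target: /var/jenkins_home/.ssh/id_rsa
        mode: 0600

volumes:
  jenkins_data:

secrets:
  host_ssh_key:
    file: ~/.ssh/id_rsa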

It can't be done. Secrets will only work with docker swarm; docker-compose is unable to use secrets.
More details in this GitHub issue.

Related

How to deploy a docker app to production without using Docker compose?

I have heard it said that
Docker compose is designed for development NOT for production.
But I have seen people use Docker Compose in production with bind mounts: they pull the latest changes from GitHub and the changes appear live in production without a rebuild. Others say that you need to COPY . . for production and rebuild.
But how does this work? In docker-compose.yaml you can specify depends_on, which doesn't start one container until the other is running. If I don't use docker-compose in production, then what about this? How would I push my docker-compose setup to production (I have 4 services / 4 images that I need to run)? With docker-compose up -d it is so easy.
How do I build each image individually?
How can I copy these images to my production server and run them (in the correct order)? I can't even find the built images on my machine anywhere.
This is my docker-compose.yaml file that works great for development
version: '3'
services:

  # Nginx client server
  nginx-client:
    container_name: nginx-client
    build:
      context: .
    restart: always
    stdin_open: true
    environment:
      - CHOKIDAR_USEPOLLING=true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
    networks:
      - app-network

  # MySQL server for the server side app
  mysql-server:
    image: mysql:5.7.22
    container_name: mysql-server
    restart: always
    tty: true
    ports:
      - "16427:3306"
    environment:
      MYSQL_USER: root
      MYSQL_ROOT_PASSWORD: BcGH2Gj41J5VF1
      MYSQL_DATABASE: todo
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
    networks:
      - app-network

  # Nginx server for the server side app
  nginx-server:
    container_name: nginx-server
    image: nginx:1.17-alpine
    restart: always
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
    depends_on:
      - php-server
      - mysql-server
    networks:
      - app-network

  # PHP server for the server side app
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile
    container_name: php-server
    restart: always
    tty: true
    environment:
      SERVICE_NAME: php
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
    networks:
      - app-network
    depends_on:
      - mysql-server

# Networks
networks:
  app-network:
    driver: bridge
How do you build the docker images? I assume you don't plan on using a registry, therefore you'll have to:
give an image name to all services
build the docker images somewhere (a CI/CD server, locally, it does not really matter)
save the images to a file
gzip the file
copy the compressed file to the server
on the server, unzip and load the images
I'd create a script for this. Something like this:
#!/bin/bash
set -e
docker-compose build
docker save -o images.tar $(grep "image: .*" docker-compose.yml | awk '{ print $2 }')   # unquoted so each image name is a separate argument
gzip images.tar
scp images.tar.gz myserver:~
ssh myserver ./load_images.sh
On myserver, the load_images.sh would look like this:
#!/bin/bash
if [ ! -f images.tar.gz ] ; then
  echo "no file"
  exit 1
fi
gunzip images.tar.gz
docker load -i images.tar
Then you'll have to write the plain docker commands that emulate the docker-compose configuration (nothing difficult, just tedious). How do you simulate depends_on? You'll have to start each container individually and in order, either from another script or by hand; see the sketch below.
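A rough sketch of such a start-up script (the image names myregistry/php-server, myregistry/nginx-server and myregistry/nginx-client are placeholders for whatever you tagged and loaded above, config files are assumed to be baked into the images, and the dev-only bind mounts are left out):

#!/bin/bash
set -e

# network shared by all containers (same name as in the compose file)
docker network create app-network || true

# database first
docker run -d --name mysql-server --network app-network --restart always \
  -e MYSQL_ROOT_PASSWORD=BcGH2Gj41J5VF1 -e MYSQL_DATABASE=todo \
  mysql:5.7.22

# crude stand-in for depends_on: wait until MySQL answers
until docker exec mysql-server mysqladmin ping -uroot -pBcGH2Gj41J5VF1 --silent; do sleep 2; done

# then the PHP-FPM container, then the web servers
docker run -d --name php-server --network app-network --restart always \
  myregistry/php-server
docker run -d --name nginx-server --network app-network --restart always \
  -p 49691:80 myregistry/nginx-server
docker run -d --name nginx-client --network app-network --restart always \
  -p 28874:3000 myregistry/nginx-client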
About using docker-compose on production:
There's not really a big issue with using docker-compose in production as long as you do it properly. For example, some of my production setups tend to look like this:
docker-compose.yml
docker-compose.dev.yml
docker-compose.prd.yml
The devs will use docker-compose -f docker-compose.yml -f docker-compose.dev.yml $cmd while on production you'll use docker-compose -f docker-compose.yml -f docker-compose.prd.yml $cmd.
Taking your file as an example, I'd move all the volumes, ports, tty and stdin_open subsections from docker-compose.yml to docker-compose.dev.yml. For example, the docker-compose.dev.yml would look like this:
version: '3'
services:
  nginx-client:
    stdin_open: true
    ports:
      - 28874:3000
    volumes:
      - ./client:/var/www
      - /var/www/node_modules
  mysql-server:
    tty: true
    ports:
      - "16427:3306"
    volumes:
      - ./docker/mysql-server/my.cnf:/etc/mysql/my.cnf
  nginx-server:
    ports:
      - 49691:80
    volumes:
      - ./server:/var/www
      - ./docker/nginx-server/etc/nginx/conf.d:/etc/nginx/conf.d
  php-server:
    restart: always
    tty: true
    volumes:
      - ./server:/var/www
      - ./docker/php-server/local.ini:/usr/local/etc/php/conf.d/local.ini
      - /var/www/vendor
On production, docker-compose.prd.yml will have only the strictly required ports subsections, plus an env_file pointing at a production environment file where the required passwords are stored (that file lives only on the production server, not in git), and so on.
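A rough sketch of such a docker-compose.prd.yml, with only the published ports and an environment file (the env.prd name and the chosen ports are just examples):

version: '3'
services:
  nginx-client:
    ports:
      - 80:3000
  nginx-server:
    ports:
      - 8080:80
  mysql-server:
    env_file:
      - ./env.prd   # holds MYSQL_ROOT_PASSWORD etc.; lives only on the server, not in git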
There are many different approaches you can take.
Generally, docker-compose is used as a container orchestration tool in development. Several production-grade orchestration tools are available on the popular hosting services such as GCP and AWS; Kubernetes is by far the most widely used.
Based on the services in your docker-compose, it is advisable not to use it directly in production. Running a MySQL container can lead to data loss, as containers are meant to be ephemeral; it is better to opt for a managed MySQL service like RDS instead. Similarly, nginx is better replaced by whatever reverse-proxy/load-balancer service your hosting provider offers.
When it comes to building the images, you can use your CI/CD pipeline to build them from their respective Dockerfiles, push them to an image registry of your choice, and let your hosting service pick up the images and deploy them with the container orchestration tool it provides.
If you need a lightweight production environment, using Compose is probably fine. Other answers here have hinted at more involved tools that have advantages like multi-host clusters and zero-downtime deployments, but they also demand much more setup.
One core piece missing from your description is an image registry. Docker Hub fits this role if you want to use it; the major cloud providers each have one; even GitHub has a container registry now; or you can run your own. This addresses a couple of your problems: (2) you docker build the images locally (or on a dedicated continuous-integration system) and docker push them to the registry, then (3) you docker pull the images on the production system, or let Docker do it on its own.
A good practice that goes along with this is to give each build a unique tag, perhaps a date stamp or commit ID. This makes it very easy to upgrade (or downgrade) by changing the tag and re-running docker-compose up.
For this you'd change your docker-compose.yml like:
services:
  nginx-client:
    # No `build:`
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    # No `build:`
    image: registry.example.com/php:${PHP_TAG:-latest}
And then you can update things like:
docker build -t registry.example.com/nginx:20201101 ./nginx
docker build -t registry.example.com/php:20201101 ./php
docker push registry.example.com/nginx:20201101
docker push registry.example.com/php:20201101
ssh production-system.example.com \
  NGINX_TAG=20201101 PHP_TAG=20201101 docker-compose up -d
You can use multiple docker-compose.yml files to also use docker-compose build and docker-compose push for your custom images, with a development-only overlay file. There is an example in the Docker documentation.
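A minimal sketch of that layout: docker-compose.yml keeps only the image: references, while a docker-compose.override.yml (the overlay Compose picks up by default) re-adds the build: context for development, so docker-compose build and docker-compose push work from a developer machine or CI. The registry name is the same placeholder as above, and the build contexts are assumptions based on the files in the question:

# docker-compose.yml
services:
  nginx-client:
    image: registry.example.com/nginx:${NGINX_TAG:-latest}
  php-server:
    image: registry.example.com/php:${PHP_TAG:-latest}

# docker-compose.override.yml (development / CI only)
services:
  nginx-client:
    build:
      context: .
  php-server:
    build:
      context: .
      dockerfile: ./docker/php-server/Dockerfile

Running docker-compose build followed by docker-compose push then builds and publishes both tagged images.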
Do not separately copy your code; it's contained in the image. Do not bind-mount local code over the image code. Especially do not use an anonymous volume to hold libraries, since this will completely ignore any updates in the underlying image. These are good practices in development too, since if you replace everything interesting in an image with volume mounts then it doesn't really have any relation to what you're running in production.
You will need to separately copy the configuration files you reference and the docker-compose.yml itself to the target system, and take responsibility for backing up the database data.
Finally, I'd recommend removing unnecessary options from the docker-compose.yml file (don't manually specify container_name:, use the Compose-provided default network, prefer specifying the command: in an image, and so on). That's not essential but it can help trim down the size of the YAML file.

How can I populate volumes with local content using ECS and docker-compose

The Situation
I am trying to set up a Prometheus / Grafana cluster using AWS ECS. Both Prometheus and Grafana need configuration files. Normally I would use a volume to pass that kind of information to a docker image.
Since these are two services, I would like to use docker-compose to set them both up and tie them together at once.
The Attempt
Here's the compose file that I would use for a normal docker setup:
version: '3.0'

volumes:
  prometheus_data: {}
  grafana_data: {}

services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--storage.tsdb.path=/prometheus'
    ports:
      - 9090:9090

  grafana:
    image: grafana/grafana
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning/:/etc/grafana/provisioning/
    env_file:
      - ./grafana/config.monitoring
    ports:
      - 3000:3000
This does not appear to actually work when I run ecs-cli compose service up. Specifically, the tasks start but then crash, and I'm not seeing any evidence that the configuration files were actually injected.
This guide explains how to set up a Prometheus image on ECS, but it is actually creating a configured docker image and publishing that image -- it's not using docker compose.
The Question
Is there a way to inject files (e.g. config files) from my local computer into my ECS images / tasks using docker-compose?
Containers need to be treated differently when it comes to ECS. The docker-compose above works fine for a local setup, but with ECS I would not recommend relying on mounts.
Instead, I recommend baking the config file into the Docker image, for example:
FROM prom/prometheus
COPY myconfig.yml /etc/prometheus/prometheus.yml
I would also prefer ECR as the Docker registry in AWS.
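A hedged sketch of building that image and pushing it to ECR (the account ID, region and repository name are placeholders, and the commands assume AWS CLI v2):

docker build -t prometheus-custom .

aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag prometheus-custom:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/prometheus-custom:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/prometheus-custom:latest

The ECS task definition then just references that image, with no config mounts to manage.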
The disadvantages of mounting in the case of ECS:
You need to keep the config on the EC2 instance.
You cannot use Fargate, since there is no server to manage there.
With auto-scaling you become dependent on a custom AMI, because your container depends on config that lives on the host.

Docker data volume support on Docker Cloud

In local development you can use docker-compose to attach data volume containers to app/db containers like so:
mongo:
  image: mongo:3
  volumes:
    - data:/data/db
  ports:
    - 27017:27017
    - 28017:28017

volumes:
  data:
This is pretty great and easy. However, if you want to deploy via Docker Cloud, their docker-cloud.yml stack files don't allow for this: they throw an error if you try to define data volume containers.
Are data volume containers not supported in Docker Cloud? How are you supposed to persist data and configurations that need to be mounted into your app/db containers?
The code you've posted is a Docker Compose file, but Docker Cloud doesn't support it (I'm assuming you're not working in swarm beta mode).
You need to use a stackfile, which is not a Docker Compose file.
Use something like this, which automatically generates a volume for your service:
mongo:
  image: mongo:3
  volumes:
    - /data/db
  ports:
    - 27017:27017
    - 28017:28017
Follow the Docker Cloud stackfile reference for volumes
and take a look at Docker Cloud Volumes documentation to get more information about this.

Mount a windows host directory in compose file version 3

I am trying to upgrade my docker-compose.yml from version 1 to version 3.
My main question is about
volumes_from: To share a volume between services,
define it using the top-level volumes option and
reference it from each service that shares it using the
service-level volumes option.
Simplest example:
version "1"
data:
image: postgres:latest
volumes:
- ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
postgres:
restart: always
image: postgres:latest
volumes_from:
- data
ports:
- "5432:5432"
If I have understood correctly, it should be converted to:
version: "3"
services:
db:
image: postgres:latest
restart: always
volumes:
- db-data:/var/lib/postgresql/data
ports:
- "5432:5432"
networks:
- appn
networks:
appn:
volumes:
db-data:?
Question: how can I now, in the top-level volumes option, set a relative path to the folder "example_folder" on the Windows host for "db-data"?
In this instance, you might consider not using volumes_from.
As mentioned in this docker 1.13 issue by Sebastiaan van Stijn (thaJeztah):
The volumes_from is basically a "lazy" way to copy volume definitions from one container to another, so;
docker run -d --name one -v myvolume:/foo image-one
docker run -d --volumes-from=one image-two
Is the same as running;
docker run -d --name one -v myvolume:/foo image-one
docker run -d --name two -v myvolume:/foo image-two
If you are deploying to AWS you should not use bind-mounts, but use named volumes instead (as in my example above), for example;
version: "3.0"
services:
db:
image: nginx
volumes:
- uploads-data:/usr/share/nginx/html/uploads/
volumes:
uploads-data:
Which you can run with docker-compose;
docker-compose up -d
Creating network "foo_default" with the default driver
Creating volume "foo_uploads-data" with default driver
Creating foo_db_1
Basically, it is not available in docker compose version 3:
There's a couple of reasons volumes_from is not ported to the compose-file "3";
In a swarm, there is no guarantee that the "from" container is running on the same node. Using volumes_from would not lead to the expected result.
This is especially the case with bind-mounts, which, in a swarm, have to exist on the host (are not automatically created)
There is still a "race" condition (as described earlier)
The "data" container has to use exactly the right paths for volumes as the "app" container that uses the volumes (i.e. if the "app" uses the volume in /some/path/in/container, then the data container also has to have the volume at /some/path/in/container). There are many cases where the volume may be shared by multiple services, and those may be consuming the volume in different paths.
But also, as mentioned in issue 19990:
The "regular" volume you're describing is a bind-mount, not a volume; you specify a path from the host, and it's mounted in the container. No data is copied from the container to that path, because the files from the host are used.
For a volume, you're asking docker to create a volume (persistent storage) to store data, and copy the data from the container to that volume.
Volumes are managed by docker (or through a plugin) and the storage path (or mechanism) is an implementation detail, as all you're asking is a storage, that's managed.
For your question, you would need to define a docker volume container and copy your host content into it:
services:
  data:
    image: "nginx:alpine"
    volumes:
      - ./pg_hba.conf/:/var/lib/postgresql/data/pg_hba.conf
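Alternatively, if what you actually want is simply the host file visible inside the database container, a version 3 file can still bind-mount a relative path directly, even on a Windows host. A sketch, assuming example_folder sits next to the compose file and the drive is shared with Docker Desktop:

version: "3"
services:
  db:
    image: postgres:latest
    restart: always
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./example_folder/pg_hba.conf:/var/lib/postgresql/data/pg_hba.conf
    ports:
      - "5432:5432"

volumes:
  db-data:

The named volume keeps the database data; the bind mount just overlays the single config file from the host.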

Docker stack deploy rolling updates volume issue

I'm running docker for a production PHP-FPM/Nginx application, I want to use docker-stack.yml and deploy to a swarm cluster. Here's my file:
version: "3"
services:
app:
image: <MYREGISTRY>/app
volumes:
- app-data:/var/www/app
deploy:
mode: global
php:
image: <MYREGISTRY>/php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
nginx:
image: <MYREGISTRY>/nginx
depends_on:
- php
volumes:
- app-data:/var/www/app
deploy:
replicas: 2
ports:
- "80:80"
volumes:
app-data:
My code is in the app container, whose image comes from my registry.
I want to update my code with docker service update --image <MYREGISTRY>/app:latest, but it's not working: the code is not changed.
I guess it uses the local volume app-data instead.
Is it normal that the new container data doesn't override volume data?
Yes, this is the expected behavior. Named volumes are only initialized to the image contents when they are empty (the default state when first created). Updating the volume any time after that point would risk data loss from overwriting or deleting volume data that you explicitly asked to be preserved.
If you need the files to be updated with every new image, then perhaps they shouldn't be in a volume? If you do need these inside a volume, then you may need to create a procedure to update the volumes from the image, e.g. if this were a docker run, you could do:
docker run -v app-data:/target --rm <your_registry>/app cp -a /var/www/app/. /target/.
Otherwise, you can delete the volume, or simply remove all files from the volume, and restart your stack to populate it again.
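On a swarm, a blunt version of that procedure is to remove the stack and the volume and deploy again, letting the fresh containers repopulate the empty volume from the new image. The stack name mystack is an assumption; stack deploy prefixes volume names with it, and the volume has to be removed on every node that had it:

docker stack rm mystack
# wait until the services and the overlay network are gone, then on each node:
docker volume rm mystack_app-data
docker stack deploy -c docker-stack.yml mystack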
I was having the same issue: app and nginx containers sharing the same volume. My current solution is a deploy script which runs
docker service update --mount-add <mount-spec> <service>
for app and nginx after docker stack deploy. It forces the volume mount to be updated for the app and nginx containers.
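Spelled out, such an update might look like this. The stack name mystack and the resulting service/volume names are assumptions; --mount-add takes the same syntax as --mount on docker service create, and since the target path is already mounted from the stack file, a matching --mount-rm is included first:

docker service update \
  --mount-rm /var/www/app \
  --mount-add type=volume,source=mystack_app-data,target=/var/www/app \
  mystack_app

docker service update \
  --mount-rm /var/www/app \
  --mount-add type=volume,source=mystack_app-data,target=/var/www/app \
  mystack_nginx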
