Conditionalizing bind-mounted volumes for Docker Compose

Please note: my question mentions MySQL, but it is a Docker/Docker Compose volume management question at heart, and as such, should be answerable by anyone with decent experience in that area, regardless of their familiarity with MySQL.
My understanding is that Dockerized MySQL containers, when defined from inside a Docker Compose file like below, will be ephemeral, meaning they store all data on the container itself (no bind mounts, etc.) and so when the container dies, the data is gone as well:
version: "3.7"
services:
my-service-db:
image: mysql:8
container_name: $MY_SERVICE_DB_HOST
command: --default-authentication-plugin=mysql_native_password
restart: always
ports:
- $MY_SERVICE_DB_PORT:$MY_SERVICE_DB_PORT
environment:
MYSQL_ROOT_PASSWORD: $MY_SERVICE_DB_ROOT_PASSWORD
MYSQL_DATABASE: my_service_db_$MY_ENV
MYSQL_USER: $MY_SERVICE_DB_APP_USER
MYSQL_PASSWORD: $MY_SERVICE_DB_APP_PASSWORD
other-service-definitions-omitted-for-brevity:
- etc.
To begin with, if that understanding is incorrect, please begin by correcting me! Assuming it's more or less correct...
Let's call this Ephemeral Mode.
But by providing a bind mount volume to that service definition, we can specify an external location for where data should be stored, and so the data will persist across service runs (compose ups/downs):
version: "3.7"
services:
my-service-db:
image: mysql:8
container_name: $MY_SERVICE_DB_HOST
command: --default-authentication-plugin=mysql_native_password
restart: always
ports:
- $MY_SERVICE_DB_PORT:$MY_SERVICE_DB_PORT
environment:
MYSQL_ROOT_PASSWORD: $MY_SERVICE_DB_ROOT_PASSWORD
MYSQL_DATABASE: my_service_db_$MY_ENV
MYSQL_USER: $MY_SERVICE_DB_APP_USER
MYSQL_PASSWORD: $MY_SERVICE_DB_APP_PASSWORD
volumes:
- ./my-service-db-data:/var/lib/mysql
other-service-definitions-omitted-for-brevity:
- etc.
Let's call this Persistent Mode.
There are times when I will want to run my Docker Compose file in Ephemeral Mode, and other times, run it in Persistent Mode.
Is it possible to make the volumes definition (inside the Docker Compose file) conditional somehow? So that sometimes I can run docker-compose up -d <SPECIFY_EPHEMERAL_MODE_SOMEHOW>, and other times I can run docker-compose up -d <SPECIFY_PERSISTENT_MODE_SOMEHOW>?

You can have multiple Compose files that work together, where you have some base file and then other files that extend the definitions in the base file.
Without extra setup, Compose looks for docker-compose.override.yml alongside the main docker-compose.yml. Since the only difference between the "ephemeral" and "persistent" modes is the volumes: declaration, you can have an override file that contains only that:
# docker-compose.override.yml
version: '3.8'
services:
  my-service-db:    # matches main docker-compose.yml
    volumes:        # added to the base definition
      - ./my-service-db-data:/var/lib/mysql
You could also use this technique to move the actual database credentials and port publishing out of the main file into deploy-specific configuration. It's also somewhat common to use it for setups that need to run a known Docker image in production but build it in development, and for setups that overwrite the container's contents with a host directory.
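As a rough sketch of that broader use (the app service and the port numbers here are hypothetical, just to show the shape of such an override):

# docker-compose.override.yml -- development-only settings (illustrative)
version: '3.8'
services:
  my-service-db:
    ports:
      - '3306:3306'    # publish the DB port only in development (hypothetical value)
  app:
    build: .           # hypothetical application service, built locally in development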
If you want the file to be named something else, you can, but you need to consistently provide a docker-compose -f option or set the COMPOSE_FILE environment variable every time you run Compose.
docker-compose -f docker-compose.yml -f docker-compose.persistence.yml up -d
docker-compose -f docker-compose.yml -f docker-compose.persistence.yml ps
docker-compose -f docker-compose.yml -f docker-compose.persistence.yml logs app
# Slightly easier (Linux syntax):
export COMPOSE_FILE=docker-compose.yml:docker-compose.persistence.yml
docker-compose up -d
Philosophically, your application's data needs to be persisted somewhere. For application containers, a good practice is for them to be totally stateless (they do not mount volumes:) and push all of their data into a database. That means the database needs to persist data, or else it will be lost when the database container is deleted and recreated.
IME it's a little bit unusual to actively want the database to lose data. This would be more interesting if it were straightforward to create a database image with seeded data, but the standard images are built in a way that makes this difficult. In a test environment, still, I could see wanting it.
It's actually possible, and reasonable, to build an application that runs in Docker but uses an external database. Perhaps you're running in a cloud environment, and your cloud provider has a slightly pricey managed database service that provides automatic snapshots and failover, for example; you could configure your production application to use this managed database and keep no data in containers at all.

Related

Proper way to build a CICD pipeline with Docker images and docker-compose

I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances and my end goal is to deploy the docker-compose.yml that my repo on GitHub has:
version: "3"
services:
db:
image: postgres
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- ./tmp/db:/var/lib/postgresql/data
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub but what is the point of it?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with its different services you would still need to run the containers using docker-compose, and you wouldn't have the compose file unless you pulled it from the GitHub repo again or created it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
For this setup it's important to delete the volumes: that overwrite the image's content with a copy of the application code. You also shouldn't need an override command:. For the deployment you'd need to replace build: with image:.
version: "3.8"
services:
db: *from-the-question
web:
image: registry.example.com/me/web:${WEB_TAG:-latest}
ports:
- "3000:3000"
depends_on:
- db
environment: *web-environment-from-the-question
# no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
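A minimal sketch of that override, assuming the web service from the fragment above:

# docker-compose.override.yml -- used in development, not copied to the server
version: "3.8"
services:
  web:
    build: .

With both files present, docker-compose build should build the image and tag it with the image: name from the main file.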
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each a date-stamped tag: registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
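For illustration only, such a script might look roughly like this (the network name is made up; the image tag, data path, and environment values come from the fragments above):

#!/bin/sh
# Rough manual equivalent of the Compose setup above (illustrative sketch)
docker network create myapp

docker run -d --name db --net myapp \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  -v "$PWD/tmp/db:/var/lib/postgresql/data" \
  postgres

docker run -d --name web --net myapp \
  -p 3000:3000 \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  -e POSTGRES_HOST=db \
  registry.example.com/me/web:${WEB_TAG:-latest}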
Finally remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby on to the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.

Running application within Docker containers

If someone may know: does there need to be a separate Dockerfile for the database and for the service itself when you want to run an application within Docker containers?
It's not quite clear where to specify the external database and server name, is it in the .env file?
https://github.com/gurock/testrail-docker/blob/master/README.md
http://docs.gurock.com/testrail-admin/installation-docker/migrating-upgrading-testrail
Yes, you should run the application and the database in separate containers.
It's not quite clear where to specify the external database and server name, is it in the .env file?
You have two options to specify environment variables:
.env file
Environment variables
Place the .env file in the root of your docker-compose project and reference it in your docker-compose file:
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - .env
Using Environment
environment:
  MYSQL_USER: "${DB_USER:-testrail}"
  MYSQL_PASSWORD: "${DB_PWD:-testrail}"
  MYSQL_DATABASE: "${DB_NAME:-testrail}"
  MYSQL_ROOT_PASSWORD: "${DB_ROOT_PWD:-my-secret-password}"
  MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
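Either way, the .env file itself is just a list of KEY=value lines; for the variables above it might contain something like this (values are placeholders):

# .env -- placeholder values
DB_USER=testrail
DB_PWD=testrail
DB_NAME=testrail
DB_ROOT_PWD=my-secret-password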
does there need to be a separate Dockerfile for the database and the service
It's better to use the official database image; for the service you can customize the image, but the link you provided is the better choice for you to start from, together with its docker-compose.yml.
Also, the docker-compose documentation is already referenced in that link.
Theoretically you can have an application and the database running in the same container, but this can have all kinds of unintended consequences: for example, if the database falls over, the application might still be running, but Docker won't notice that the database fell over if it is not aware of it.
Something to wrap your mind around when running the database in a container is data persistence: data should survive even when the container is killed or deleted, so that once you create the container again it can still access the databases and other data.
Here is a good article explaining volumes in docker in the context of running mysql in its own container with a volume to hold the data:
https://severalnines.com/database-blog/mysql-docker-containers-understanding-basics
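As a minimal sketch of the pattern that article describes (the volume name and password here are placeholders):

version: "3"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: change-me       # placeholder value
    volumes:
      - mysql_data:/var/lib/mysql          # data survives container removal
volumes:
  mysql_data: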
In the context of the repo that you linked, it seems there is a separate Dockerfile for the database, and you have the option to use either MariaDB or MySQL; see here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mariadb
and here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mysql

How to apply changes made to docker-compose.yml to detached running containers

The project is currently running in the background from this command:
docker-compose up -d
I need to make two changes to their docker-compose.yml:
Add a new container
Update a previous container to have a link to the new container
After changes are made:
NOTE the "<--" arrows for my changes
web:
  build: .
  restart: always
  command: ['tini', '--', 'rails', 's']
  environment:
    RAILS_ENV: production
    HOST: example.com
    EMAIL: admin@example.com
  links:
    - db:mongo
    - exim4:exim4.docker        # <-- Add link
  ports:
    - 3000:3000
  volumes:
    - .:/usr/src/app
db:
  image: mongo
  restart: always
exim4:                          # <-- Add new container
  image: exim4
  restart: always
  ports:
    - 25:25
  environment:
    EMAIL_USER: user@example.com
    EMAIL_PASSWORD: abcdabcdabcdabcd
After making the changes, how do I apply them? (without destroying anything)
I tried docker-compose down && docker-compose up -d but this destroyed the Mongo DB container... I cannot do that... again... :sob:
docker-compose restart says it won't recognize any changes made to docker-compose.yml
(Source: https://docs.docker.com/compose/reference/restart/)
docker-compose stop && docker-compose start sounds like it'll just start up the old containers without my changes?
Test server:
Docker version: 1.11.2, build b9f10c9/1.11.2
docker-compose version: 1.8.0, build f3628c7
Production server is likely using older versions, unsure if that will be an issue?
If you just run docker-compose up -d again, it will notice the new container and the changed configuration and apply them.
But:
(without destroying anything)
There are a number of settings that can only be set at container startup time. If you change these, Docker Compose will delete and recreate the affected container. For example, links are a startup-only option, so re-running docker-compose up -d will delete and recreate the web container.
this destroyed the Mongo DB container... I cannot do that... again...
db:
image: mongo
restart: always
Add a volumes: option to this so that data is stored outside the container. You can keep it in a named volume, possibly managed by Docker Compose, which has some advantages, but a host-system directory is probably harder to accidentally destroy. You will have to delete and restart the container to change this option. But note that you will also have to delete and restart the container if, for example, there is a security update in MongoDB and you need a new image.
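A sketch of that change using a host directory (the ./mongo-data path is illustrative; /data/db is where the mongo image stores its data):

db:
  image: mongo
  restart: always
  volumes:
    - ./mongo-data:/data/db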
Your ideal state here is:
Actual databases (like your MongoDB container) store data in named volumes or host directories
Applications (like your Rails container) store nothing locally, and can be freely destroyed and recreated
All code is in Docker images, which can always be rebuilt from source control
Use volumes as necessary to inject config files and extract logs
If you lose your entire /var/lib/docker directory (which happens!) you shouldn't actually lose any state, though you will probably wind up with some application downtime.
Just docker-compose up -d will do the job.
Output should be like
> docker-compose up -d
Starting container1 ... done
> docker-compose up -d
container1 is up-to-date
Creating container2 ... done
As a side note, docker-compose is not really for production. You may want to consider docker swarm.
The key here is that up is idempotent.
If you update the configuration in docker-compose.yaml:
docker compose up -d
If Compose is building images before running them and you want to rebuild them:
docker compose up -d --build

docker rabbitmq how to expose port and reuse container with a docker file

Hi, I am finding it very confusing how I can create a docker file that would run a RabbitMQ container, where I can expose the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I'm unsure how to run it:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got RabbitMQ working locally fine, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a RabbitMQ container? Where can I find a full, understandable example?
Do I need a docker file or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
Do I need a docker file or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built from the RabbitMQ Dockerfile (which you don't need); the image will be downloaded the first time you run docker-compose up.
How can I specify the port? In the example above what are the first numbers 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue.
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside the container. So, if you want to work with different ports, change the ones on the left; the ones on the right are defined internally by the image.
This means you can access the management plugin at http://localhost:15672 (or more generically at http://<host-ip>:<host port mapped to 15672>).
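For example, to reach the management UI on host port 8080 instead, you would change only the left-hand number; the container still listens on 15672 internally:

ports:
  - "8080:15672"

and then browse to http://localhost:8080.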
You can see more info on the RabbitMQ Image on the Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart that I get the same container?
I assume you want the same container because you want to persist the data. You can use docker-compose stop, restart your machine, then run docker-compose start; the same container is used. However, if the container is ever deleted you lose the data inside it.
That is why you are using volumes. The data collected in your container also gets stored on your host machine. So, if you remove your container and start a new one, the data is still there because it was stored on the host machine.
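You can see this for yourself (the exact volume name will be prefixed with your Compose project name, so the name you see may differ):

docker-compose down    # removes the containers, but keeps named volumes
docker volume ls       # the rabbitmq_data volume is still listed
docker-compose up -d   # the new container reattaches to the same volume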

File in docker-entrypoint-initdb.d never get executed when using docker compose

I'm using Docker Toolbox on Windows 10
I can access the PHP part successfully via http://192.168.99.100:8000. I have been working on the MariaDB part but am still having several problems.
I have an SQL file at /mariadb/initdb/abc.sql, so it should be copied into /docker-entrypoint-initdb.d. After the container is created I use docker-compose exec mariadb to access the container, and the file is there as /docker-entrypoint-initdb.d/abc.sql, but it never gets executed. I have also tested importing the SQL file into the container manually; that was successful, so the SQL file is valid.
I don't quite understand the data folder mapping, and what to do to keep the folder in sync with the container. I always get this warning when recreating the container using docker-compose up -d:
WARNING: Service "mariadb" is using volume "/var/lib/mysql" from the previous container. Host mapping "/.../mariadb/data" has no effect. Remove the existing containers (with docker-compose rm mariadb) to use the Recreating db ... done
Questions
How do I get the SQL file in /docker-entrypoint-initdb.d to be executed?
What is the right way to map the data folder into the mariadb container?
Please guide
Thanks
This is my docker-compose.yml
version: "3.2"
services:
php:
image: php:7.1-apache
container_name: web
restart: always
volumes:
- /.../php:/var/www/html
ports:
- "8000:80"
mariadb:
image: mariadb:latest
container_name: db
restart: always
environment:
- MYSQL_ROOT_PASSWORD=12345
volumes:
- /.../mariadb/initdb:/docker-entrypoint-initdb.d
- /.../mariadb/data:/var/lib/mysql
ports:
- "3306:3306"
For me the issue was the fact that Docker didn't clean up my mounted volumes from previous runs.
Doing a:
docker volume ls
will list any volumes; if volumes from previous runs exist, run the rm command on the volume to remove it.
As stated in the Docker MySQL docs, scripts in the /docker-entrypoint-initdb.d folder are only evaluated the first time the container runs, and if a previous volume remains, the scripts won't run.
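A sketch of that cleanup (the volume name here is hypothetical; use whatever docker volume ls actually reports):

docker volume ls
docker volume rm myproject_mariadb_data   # hypothetical volume name
# or remove the containers and their named volumes together:
docker-compose down -v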
As for the mapping, you simply need to mount your script folder to the '/docker-entrypoint-initdb.d' folder in the image:
volumes:
  - ./db/:/docker-entrypoint-initdb.d
I have a single script file in a folder named db, relative to my docker-compose file.
In your Dockerfile for creating MariaDB, at the end add the abc.sql file to your Docker entrypoint like so:
COPY abc.sql /docker-entrypoint-initdb.d/
Remove the - /.../mariadb/initdb:/docker-entrypoint-initdb.d mapping, as any file copied into the entrypoint directory will be executed.
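Put together, that Dockerfile might look like this (a sketch; the base image matches the compose file above):

FROM mariadb:latest
COPY abc.sql /docker-entrypoint-initdb.d/

and the mariadb service would then use build: pointing at that Dockerfile instead of image: mariadb:latest.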
Note: Windows containers do not execute anything in docker-entrypoint-initdb.d/
