Running application within Docker containers - docker

Does there need to be a separate Dockerfile for the database and for the service itself if you want to run an application within Docker containers?
It's also not quite clear where to specify the external database and server name; is it in the .env file?
https://github.com/gurock/testrail-docker/blob/master/README.md
http://docs.gurock.com/testrail-admin/installation-docker/migrating-upgrading-testrail

Yes, you should run the application and the database in separate containers.
It's not quite clear where to specify the external database and server name, is it in the .env file?
You have two options for specifying environment variables:
.env file
environment variables
Place the .env file in the root of your docker-compose project and reference it in your docker-compose file:
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - .env
Using environment variables directly:
environment:
  MYSQL_USER: "${DB_USER:-testrail}"
  MYSQL_PASSWORD: "${DB_PWD:-testrail}"
  MYSQL_DATABASE: "${DB_NAME:-testrail}"
  MYSQL_ROOT_PASSWORD: "${DB_ROOT_PWD:-my-secret-password}"
  MYSQL_ALLOW_EMPTY_PASSWORD: 'false'
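For reference, the ${DB_USER:-testrail} syntax reads DB_USER from the shell environment or from a .env file next to the docker-compose.yml, and falls back to testrail when it is unset. A minimal .env sketch using the variable names from the snippet above:

# .env (placed next to docker-compose.yml)
DB_USER=testrail
DB_PWD=testrail
DB_NAME=testrail
DB_ROOT_PWD=my-secret-password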
does it need to be a separate Dockerfile for a database and service
It's better to use the official database image; for the service you can customize the image, but the link you provided is a good place to start, with its docker-compose.yml.
Also, the docker-compose documentation is already referenced in that link.
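As a rough sketch of that layout (the ./app build context and the credentials here are made up for illustration), a docker-compose.yml could combine an official database image with your own service image:

version: '3'
services:
  db:
    image: mysql:5.7                  # official database image
    environment:
      MYSQL_DATABASE: testrail
      MYSQL_USER: testrail
      MYSQL_PASSWORD: testrail
      MYSQL_ROOT_PASSWORD: my-secret-password
    volumes:
      - db-data:/var/lib/mysql        # named volume so the data survives container recreation
  app:
    build: ./app                      # your customized service image, built from its own Dockerfile
    depends_on:
      - db
    environment:
      DB_HOST: db                     # the service name doubles as the hostname on the Compose network
volumes:
  db-data: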

Theoretically you can have the application and the database running in the same container, but this has all kinds of unintended consequences. For example, if the database falls over, the application might still be running, but Docker won't notice that the database fell over if it is not aware of it.
Something to wrap your mind around when running the database in a container is data persistence: the data should survive even when the container is killed or deleted, so that once you create the container again it can still access the databases and other data.
Here is a good article explaining volumes in docker in the context of running mysql in its own container with a volume to hold the data:
https://severalnines.com/database-blog/mysql-docker-containers-understanding-basics
In the context of the repo that you linked, it seems there is a separate Dockerfile for the database, and you have the option to use either MariaDB or MySQL; see here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mariadb
and here:
https://github.com/gurock/testrail-docker/tree/master/Dockerfiles/testrail_mysql

Related

Is it necessary to rebuild the container to change ports, or is stop/start enough?

I have a compose file with four services. I need to OPEN one of them to the outside by setting ports.
After changing the .yml file, do I need to 'rebuild the container' (docker-compose down/up) or do I just need to stop/start (docker-compose stop/start)?
Specifically, what I need to make accessible to the outside is a Postgres server. This is my current postgres service definition in the .yml:
mydb:
  image: postgres:9.4
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I think I just need to change it to:
mydb:
  image: postgres:9.4
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I'm worried about losing data on the 'db-data' volume, or the connection to the other services, if I use down/up.
Also, there are 3 other services specified in the .yml file. If it is necessary to REBUILD (without losing data in db-data, of course), I don't want to touch those other containers. In this case, what would the steps be?
First, rebuild 'mydb' container with 'docker run' (Could you provide me the right command, please?)
Modify the .yml as stated before, just adding the ports
Perform a simple docker-compose stop/start
Could you help me, please?
If you're only changing settings like ports:, it is enough to re-run docker-compose up -d again. Compose will figure out which things are different from the existing containers, and destroy and recreate only those specific containers.
If you're changing a Dockerfile or your application code you may specifically need to docker-compose build your application or use docker-compose up -d --build. But you don't specifically need to rebuild the images if you're only changing runtime settings like ports:.
docker-compose down tears down your entire container stack. You don't need it for routine rebuilds or container updates. You may want to intentionally shut down the container system (and free up host ports, memory, and other resources) and it's useful then.
docker-compose stop leaves the containers in an unusual state of existing but without a running process. You almost never need this. docker-compose start restarts containers in this unusual state, and you also almost never need it.
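A short sketch of that workflow, using the service name from the question:

# edit docker-compose.yml to add the ports: block, then:
docker-compose up -d            # recreates only the containers whose settings changed
docker-compose up -d --build    # additionally rebuilds images; only needed for Dockerfile/code changes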
You have to rebuild it.
For that reason the best practice is to map all the mount points and resources externally, so you can recreate the container (with changed parameters) without any loss of data.
In your scenario I see that you put all the data in an external docker volume, so I think you could recreate it with changed ports in a safe way.
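For example, after adding the ports: block you could limit the recreation to that one service; a sketch, assuming the service is named mydb as in the question:

docker-compose up -d --no-deps mydb   # recreate only mydb; the db-data volume is left untouched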

Docker container with Mariadb - Can the container get bricked and lose data

I thought I could use bound volumes, as suggested in my other post:
Docker-compose mariadb external volume mapping issue
But I read that containers should be stateless, so it seems my thinking is wrong?
I do not know what catastrophic failures can occur, so is there a possibility that I may lose all data if the container is bricked? Or is there a way to use external storage and recover?
How do I manage this situation? I have a schema table which manages migrations, so I don't want that table to be reset and start from square one.
Question: should I let the mariadb container in the cloud write to wherever it likes, or write to a host folder?
My docker-compose snippet:
mariadb:
  image: mariadb:10.4
  ...
  environment:
    ..
  logging:
    ...
  networks:
    - backend
  restart: on-failure
  volumes:
    - maria_volume:/var/lib/mysql
  command: --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci

# Volumes
volumes:
  maria_volume:
Another version uses ./mariadb instead of maria_volume in the volumes section:
  networks:
    - backend
  restart: on-failure
  volumes:
    - ./mariadb:/var/lib/mysql
  command: --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci
Your application at large needs to keep data somewhere. Having a relational-database container with storage mounted is fine. In a production environment you could choose to run a non-container database or use a cloud-hosted database if that met your needs better.
I feel like the actual storage mechanisms are pretty robust, both for named volumes and bind-mounted host directories. You probably will not have data-corruption problems in either case. As always, make sure you have backups of your data if it's at all important.
There's not a clear choice between using named volumes and host directories here. Host directories are probably easier to back up and restore; on some platforms named volumes will be faster. In both cases, in normal operation, the data will survive destroying and recreating the container. It'll be a little easier to destroy a named volume's state using docker commands, which depending on your specific use case could point in either direction.
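If you do use a named volume, one common way to back it up is to tar its contents from a throwaway container; a sketch, assuming the volume is named maria_volume as above (Compose usually prefixes it with the project name, so check docker volume ls for the exact name):

docker-compose stop mariadb
docker run --rm -v myproject_maria_volume:/source -v "$(pwd)":/backup alpine \
  tar czf /backup/maria_volume.tar.gz -C /source .
docker-compose start mariadb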
It has occasionally happened to me that Docker's internal state gets corrupted, and when this happens the easiest workaround is to delete the entire /var/lib/docker tree and start over (there is an equivalent "reset" button in the Docker Desktop application). This path would lose named volumes too. On native Linux it's been widely observed that the actual named-volume storage is just a directory, so you might be able to preserve this.

Conditionalizing bind mounted volumes for Docker Compose

Please note: my question mentions MySQL, but it is a Docker/Docker Compose volume management question at heart, and as such, should be answerable by anyone with decent experience in that area, regardless of their familiarity with MySQL.
My understanding is that Dockerized MySQL containers, when defined from inside a Docker Compose file like below, will be ephemeral, meaning they store all data on the container itself (no bind mounts, etc.) and so when the container dies, the data is gone as well:
version: "3.7"
services:
my-service-db:
image: mysql:8
container_name: $MY_SERVICE_DB_HOST
command: --default-authentication-plugin=mysql_native_password
restart: always
ports:
- $MY_SERVICE_DB_PORT:$MY_SERVICE_DB_PORT
environment:
MYSQL_ROOT_PASSWORD: $MY_SERVICE_DB_ROOT_PASSWORD
MYSQL_DATABASE: my_service_db_$MY_ENV
MYSQL_USER: $MY_SERVICE_DB_APP_USER
MYSQL_PASSWORD: $MY_SERVICE_DB_APP_PASSWORD
other-service-definitions-omitted-for-brevity:
- etc.
To begin with, if that understanding is incorrect, please begin by correcting me! Assuming it's more or less correct...
Let's call this Ephemeral Mode.
But by providing a bind mount volume to that service definition, we can specify an external location for where data should be stored, and so the data will persist across service runs (compose ups/downs):
version: "3.7"
services:
my-service-db:
image: mysql:8
container_name: $MY_SERVICE_DB_HOST
command: --default-authentication-plugin=mysql_native_password
restart: always
ports:
- $MY_SERVICE_DB_PORT:$MY_SERVICE_DB_PORT
environment:
MYSQL_ROOT_PASSWORD: $MY_SERVICE_DB_ROOT_PASSWORD
MYSQL_DATABASE: my_service_db_$MY_ENV
MYSQL_USER: $MY_SERVICE_DB_APP_USER
MYSQL_PASSWORD: $MY_SERVICE_DB_APP_PASSWORD
volumes:
- ./my-service-db-data:/var/lib/mysql
other-service-definitions-omitted-for-brevity:
- etc.
Let's call this Persistent Mode.
There are times when I will want to run my Docker Compose file in Ephemeral Mode, and other times, run it in Persistent Mode.
Is it possible to make the volumes definition (inside the Docker Compose file) conditional somehow? So that sometimes I can run docker-compose up -d <SPECIFY_EPHEMERAL_MODE_SOMEHOW>, and other times I can run docker-compose up -d <SPECIFY_PERSISTENT_MODE_SOMEHOW>?
You can have multiple Compose files that work together, where you have some base file and then other files that extend the definitions in the base file.
Without extra setup, Compose looks for docker-compose.override.yml alongside the main docker-compose.yml. Since the only difference between the "ephemeral" and "persistent" mode is the volumes: declaration, you can have an override file that only contains that:
# docker-compose.override.yml
version: '3.8'
services:
  my-service-db:      # matches main docker-compose.yml
    volumes:          # added to base definition
      - ./my-service-db-data:/var/lib/mysql
You could also use this technique to move the actual database credentials and port publishing out of the main file into deploy-specific configuration. It's also somewhat common to use it for setups that need to run a known Docker image in production but build it in development, and for setups that overwrite the container's contents with a host directory.
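For instance, a development-only override might build the image locally and overwrite its code with the host checkout, while the base file keeps a fixed image name; a sketch, with the app service name and ./app path made up:

# docker-compose.override.yml (development only)
version: '3.8'
services:
  app:
    build: ./app              # build locally instead of pulling the released image
    volumes:
      - ./app:/usr/src/app    # replace the image's code with the host directory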
If you want the file to be named something else, you can, but you need to consistently provide a docker-compose -f option or set the COMPOSE_FILE environment variable every time you run Compose.
docker-compose -f docker-compose.yml -f docker-compose.persistence.yml up -d
docker-compose -f docker-compose.yml -f docker-compose.persistence.yml ps
docker-compose -f docker-compose.yml -f docker-compose.persistence.yml logs app
# Slightly easier (Linux syntax):
export COMPOSE_FILE=docker-compose.yml:docker-compose.persistence.yml
docker-compose up -d
Philosophically, your application's data needs to be persisted somewhere. For application containers, a good practice is for them to be totally stateless (they do not mount volumes:) and push all of their data into a database. That means the database needs to persist data, or else it will get lost when the database restarts.
IME it's a little bit unusual to actively want the database to lose data. This would be more interesting if it were straightforward to create a database image with seeded data, but the standard images are built in a way that makes this difficult. In a test environment, still, I could see wanting it.
It's actually possible, and reasonable, to build an application that runs in Docker but uses an external database. Perhaps you're running in a cloud environment, and your cloud provider has a slightly pricey managed database service that provides automatic snapshots and failover, for example; you could configure your production application to use this managed database and keep no data in containers at all.
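A sketch of that last setup, where Compose only runs the application and points it at an externally managed database (the image name, host name, and variable names are placeholders):

version: '3.8'
services:
  app:
    image: registry.example.com/my-app:1.2.3
    environment:
      DB_HOST: mydb.example.com       # managed database endpoint, not a container
      DB_USER: app
      DB_PASSWORD: ${DB_PASSWORD}     # injected from the deploy environment or a .env file
    # no database service and no volumes: all persistent state lives in the managed database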

Docker share environment variables using volumes

How can I share environment variables since the --link feature was deprecated?
The Docker documentation (https://docs.docker.com/network/links/) states:
Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it, we recommend that you use user-defined networks to facilitate communication between two containers instead of using --link. One feature that user-defined networks do not support that you can do with --link is sharing environment variables between containers. However, you can use other mechanisms such as volumes to share environment variables between containers in a more controlled way.
But how do I share environment variables by using volumes? I did not find anything about environment variables in the volumes section.
The problem that I have is that I want to set a database password as environment variable when I start the container. Some other container loads data into the database and for that needs to connect to it and provide the credentials. So far the loading container discovered the password on its own by reading the environment variable. How do I do that now without --link?
Generally, you do it by explicitly providing the same environment variable to other containers. This is easy if you're using a docker-compose.yml to manage your containers, because then you can do this:
version: '3'
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
  frontend:
    image: webserver
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
Then if you set MYSQL_ROOT_PASSWORD in your .env file, the same value will be provided to both the database and frontend container. If you're not using docker-compose, you can still simplify things by using an environment file. Create a file named, e.g., database.env that contains:
MYSQL_ROOT_PASSWORD=secret
Then point your containers at that using docker run --env-file database.env ....
You can't share environment variables using volumes, but you can of course share files. So another option would be to have your database container write a file containing the password to a shared volume, and then read that in your other containers.
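A rough sketch of that file-based approach, assuming each image's entrypoint is adapted to write or read the shared file (the loader image and the /shared path are hypothetical):

version: '3'
services:
  database:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - creds:/shared     # assumption: a custom entrypoint writes /shared/db_password at startup
  loader:
    image: my-loader      # hypothetical image that reads /shared/db_password before connecting
    volumes:
      - creds:/shared
volumes:
  creds: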

How can I link an image created volume with a docker-compose specified named volume?

I have been trying to use docker-compose to spin up a postgres container, with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!) - one container dies or is killed, another takes it place without losing previously persisted data.
As I understand "named volumes" are supposed to replace "Data Volume Containers".
However, so far either one of two things happen:
The postgres container fails to start up, with error message "ERROR: Container command not found or does not exist."
I achieve persistence for only that specific container. If it is stopped and removed and another container started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. Which would be fine, if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up and my data is in the state where it was left with the down command).
In general, a few things:
Don't use the PGDATA environment option with the official postgres image
If you are using Spring Boot (like I was) and Docker Compose (as I was), and passing environment options to a service linked to your database container, do not wrap the profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things incorrectly configured initially, but I suspect the killer was point 2 above - it caused my app, when running in a container, to use an in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly - until container shutdown. And, when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
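To illustrate point 2 above, a hedged compose snippet (the service name and profile name are made up):

  myapp:
    environment:
      - SPRING_PROFILES_ACTIVE=docker       # correct: no quotes around the profile name
      # - SPRING_PROFILES_ACTIVE="docker"   # wrong: the quotes become part of the value, so the "docker" profile never matches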
You need to tell Compose that it should manage creation of the Volume, otherwise it assumes it should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
