In Docker Compose, is it possible for a container A to wait for a container B to finish (i.e., stop running) before starting?
I have 3 containers in my docker compose:
Container A is a MySQL database
Container B is a Flyway container that has some SQL migrations on a certain schema1. After running the migrations, it stops.
Container C is a Flyway container that has some SQL migrations on a certain schema2 and adds some data to both schema1 and schema2. It also stops after running the migrations.
I need container C to wait for B to finish, otherwise the migrations are going to fail.
Since this was urgent, I ended up ditching the whole idea of detecting whether a certain container had finished successfully, and instead added a restart policy that triggers on-failure, so that if a migration fails the container can simply retry.
Also, since container C's migrations were just repeatable data migrations, I added -baselineOnMigrate="false" to the Flyway command being executed there. That way, if the migrations container B had to run were not finished yet, C's migrations would fail without polluting Flyway's history, and they could be retried without issues.
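For reference, here is a minimal sketch of that setup expressed as plain docker run commands (the image, network and credential values below are made up; in the actual docker-compose.yml this corresponds to restart: on-failure on the two Flyway services):
# Hypothetical names/credentials; MySQL is assumed to be reachable as "mysql" on the "db-net" network.
# --restart on-failure re-runs a failed migration container until it exits with status 0.
docker run -d --name schema1-migrations --network db-net --restart on-failure \
  flyway/flyway migrate -url=jdbc:mysql://mysql/schema1 -user=root -password=secret
docker run -d --name schema2-migrations --network db-net --restart on-failure \
  flyway/flyway migrate -url=jdbc:mysql://mysql/schema2 -user=root -password=secret -baselineOnMigrate=false
Once a migration container exits successfully, the on-failure policy no longer restarts it, which gives the "retry until it works, then stop" behaviour described above.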
Related
I have a Docker container with an init script CMD ["init_server.sh"]
which is orchestrated by docker-compose.
Does running docker-compose restart re-run the init script,
or will only running docker-compose down followed by docker-compose up
trigger the script to be run again?
I imagine whatever the answer to this is will apply to docker restart as well.
Am I correct?
A Docker container only runs one process, defined by the "entrypoint" and "command" settings (typically set in the Dockerfile; you can override them in docker-compose.yml). Whatever that process does, it will do every time the container starts.
In terms of Docker commands, the Compose commands you show aren't different from their underlying plain-Docker variants. restart is just stop followed by start, so it will re-run the main container process in its existing container, with the existing (possibly modified) container filesystem. If you do a docker rm in between (or docker-compose down), the process starts in a clean container based on the image.
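Concretely, assuming a service called web, the difference looks roughly like this:
# Same container, same (possibly modified) filesystem; the main process just runs again.
docker-compose restart web

# New container created from the image; filesystem changes from the old container
# (outside of volumes) are gone, and the main process starts from a clean slate.
docker-compose down
docker-compose up -d web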
It's typical for an initialization script to check if the initialization it requires has already been done. For things like the standard Docker Hub database images, this works by checking if the data directory is totally empty; initialization only happens on the very first startup. An init script that runs something like database migrations will generally keep track of which migrations have already been done and won't repeat work.
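As an illustration, a minimal init_server.sh written in that spirit might guard its one-time work with a marker file; all paths and helper script names below are placeholders:
#!/bin/sh
set -e

# Only do the expensive one-time setup if it hasn't been done yet.
# /var/lib/myapp is assumed to be a mounted volume, so the marker survives
# container recreation; without a volume it survives "restart" but not "down"/"up".
if [ ! -f /var/lib/myapp/.initialized ]; then
    echo "First start: running initialization"
    ./run_migrations.sh            # placeholder for the real setup work
    touch /var/lib/myapp/.initialized
fi

# Hand control over to the long-running server process.
exec ./start_server.sh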
I have two Dockerfiles, one for a database and one for a web server. The web server's Dockerfile has a RUN statement which requires a connection to the database container. The web server is unable to resolve the database's IP and then errors out. But if I comment out the RUN line and then manually run the script inside the container, it successfully resolves the database. Should the web server be able to resolve the database during its build process?
# Web server
FROM tomcat:9.0.26-jdk13-openjdk-oracle
# The database container cannot be resolved when myscript runs. "Unable to connect to the database." is thrown.
RUN myscript
CMD catalina.sh run
# But if I comment out the RUN line, then connect to the web server container and run myscript, the database container is resolved
docker exec ... bash
# This works
./myscript
I ran into the same problem with database migrations and NuGet pushes. You may want to run something similar against your DB, such as migrations or loading initial/test data. It can be solved in two ways:
Move your DB operations to the ENTRYPOINT so that they're executed at runtime, when the DB container is up and reachable (a sketch of this follows at the end of this answer).
Build your image using docker build instead of something like docker-compose up --build, because docker build has a --network switch. You could create a network in your Compose file, bring the DB up with docker-compose up -d db-container and then access it during the build with docker build --network db-container-network -t your-image .
I'd prefer #1 over #2 if possible because
it's simpler: the network is only defined in the docker-compose file, not in multiple places
you can specify relations using depends_on and make sure they're respected properly without having to take care of it manually
But depending on the action you want to execute, you need to make sure it isn't executed multiple times, because it now runs on every container start and not just during the build (which is only re-run when the cache is invalidated by file changes).
However, I'd consider it best practice anyway for such automated DB operations to assume they may be executed more than once and to produce the expected result regardless (e.g. by checking whether the migration version or change is already present).
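A minimal sketch of option 1, reusing the Tomcat image from the question; the entrypoint script name is an assumption:
#!/bin/sh
# docker-entrypoint.sh: runs at container start, when the database container
# is up and reachable over the Compose network.
set -e

# myscript is assumed to be idempotent (safe to run on every start).
./myscript

# Then start the real server process as PID 1.
exec catalina.sh run
The Dockerfile would then copy this script into the image and set it as the ENTRYPOINT (or CMD) instead of running myscript in a RUN step at build time.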
I'm using the postgres:latest image and creating backups with the following command:
pg_dump postgres -U postgres > /docker-entrypoint-initdb.d/backups/redmine-$(date +%Y-%m-%d-%H-%M).sql
and it runs periodically via crontab:
*/30 * * * * /docker-entrypoint-initdb.d/backup.sh
However, on occasion I might need to run
docker-compose down/up
for whatever reason
The problem
I always need to manually run /etc/init.d/cron start whenever I restart the container. This is a bit of a problem because it's difficult to remember to do, and if I (or anyone else) forget it, backups won't be made.
According to the documentation, scripts ending in *.sql and *.sh inside /docker-entrypoint-initdb.d/ are run on container startup (and they are).
However, if I put /etc/init.d/cron start inside an executable .sh file, the other commands inside that file are executed (I've verified that), but the cron service does not start, probably because the /etc/init.d/cron start line inside the file does not execute successfully.
I would appreciate any suggestions for a solution.
You will want to keep your Docker containers as independent of other services as possible, so instead of running the cron job in the container, I would recommend running it on the host; that way it will run even if the container is restarted (whether automatically or manually).
If you really, really feel the need for it, I would build a new image with the postgres image as its base and add the cron job right there, so that it is in the container from the start, without any extra scripts needed. Or even create another image just to invoke the cron job and connect via the Docker network.
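For example, a host-side crontab entry could use docker exec to run the dump inside the running container; the container name and backup path below are assumptions, and note that % has to be escaped in crontab:
*/30 * * * * docker exec postgres-db pg_dump -U postgres postgres > /var/backups/redmine-$(date +\%Y-\%m-\%d-\%H-\%M).sql
The redirection happens in cron's shell on the host, so the dump file ends up on the host filesystem and survives docker-compose down.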
Expanding on @Jite's answer, you could run pg_dump remotely from a different container using the --host option.
This image, for example, provides a minimal environment with the psql client and dump/restore utilities.
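A sketch of that remote-dump approach, assuming the postgres service is reachable as db on the Compose network and the password is provided via PGPASSWORD:
# Run from any container (or host) that has the postgres client tools installed.
PGPASSWORD=secret pg_dump --host db --port 5432 -U postgres postgres \
  > redmine-$(date +%Y-%m-%d-%H-%M).sql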
Let's assume I have a Python program running inside a docker container
import time

# Increment and print a counter forever; the counter value lives only in this
# process's memory.
counter = 0
while True:
    counter += 1
    print(counter)
    time.sleep(1)
What happens if I do a commit on that running container, and then use that new image to run a new container?
The docs state that a running container is paused (cgroups freezer) and gets unpaused after committing. What state is the image in? SIGKILL? I assume the Python program won't be running anymore when I do a docker start on that image, correct?
I'm asking because I have a couple of Java servers (Atlassian) running in the container, so I wonder: if I take daily backups via commit on that container and then "restore" (docker run ... backup/20160118) one of the backups, what state will the servers be in?
docker commit only commits the filesystem changes of a container, i.e. any file that has been added, removed, or modified on the filesystem since the container was started.
Note that any volume (--volume on the command line or VOLUME inside the Dockerfile) is not part of the container's filesystem, so it won't be committed.
In-memory state: "Checkpoint and Restore"
Committing a container including its current (in-memory) state is a lot more complex. This process is called "checkpoint and restore"; you can find more information about it at https://criu.org. There's currently a pull request to add basic support for checkpoint and restore to Docker (https://github.com/docker/docker/pull/13602), but that feature does not yet support "migrating" such containers to a different machine.
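To make the filesystem-only behaviour concrete, a small sketch (container and tag names are made up): a container started from a committed image re-runs the image's CMD from scratch, so in the counter example above it starts counting at 1 again rather than resuming.
# Snapshot the running container's filesystem; the original keeps running afterwards.
docker commit counter-container counter-backup:20160118

# Start a new container from that snapshot: the CMD runs from the beginning,
# and none of the old process's in-memory state is preserved.
docker run --rm counter-backup:20160118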
Is it possible to create a docker container, run a command inside it, then revert the changes to the container filesystem resulting from that command and run another command?
The motivation is that I wish to run a large number of short-lived programs in a consistent environment, and I'm hoping to avoid the cost of creating/destroying a separate container for each one.
I'm aware that it is possible to use docker commit and docker history to create a new container from a previous snapshot of an existing container, but with this method I'd still have to create a new container each time I want to roll back. My goal is to avoid that step by rolling back the filesystem changes of an already-running container.
From what I understand about aufs it seems this should be possible in principle, but I'm not sure whether it's supported by the docker daemon.
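For reference, the commit-based workaround mentioned above looks roughly like this (names are placeholders); the question is precisely about avoiding the extra container per run:
# Snapshot the prepared environment once.
docker commit workenv workenv-snapshot

# Each short-lived program then runs in a fresh, throwaway container from that
# snapshot, so its filesystem changes never leak into the next run.
docker run --rm workenv-snapshot ./run_job.sh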
You should look at the 6 containers related to NixOS at https://hub.docker.com/search/?q=nixos&page=1&isAutomated=0&isOfficial=0&starCount=0&pullCount=0, as NixOS allows you to roll back production systems and do similar things.
Also have a look at the 22 containers related to Ubuntu Snappy:
https://hub.docker.com/search/?q=snappy&page=1&isAutomated=0&isOfficial=0&starCount=0&pullCount=0
I am not aware of a Docker-native way to do this.