There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, docker-compose will build the image before any dependencies are started, essentially ignoring depends_on (or rather, interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres to become available and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script has run.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
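For reference, a minimal sketch of what the wait helper could look like, assuming the postgres client tools (pg_isready) are available in the image and that the host name postgres matches the compose service name (both are assumptions, not part of the original setup):

#!/bin/sh
# wait: block until postgres accepts connections, give up after ~60 seconds
i=0
until pg_isready -h postgres -U "${POSTGRES_USER:-postgres}" >/dev/null 2>&1; do
  i=$((i + 1))
  if [ "$i" -ge 60 ]; then
    echo "postgres did not become ready in time" >&2
    exit 1
  fi
  sleep 1
done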
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. The up --build option will cause the image to be rebuilt, but the layer cache will let the build sequence skip all of the steps and produce the same image as originally, and whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
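If the shared-base-image case is what you actually need, the usual workaround is to build the base image outside of Compose before running the Compose build; a rough sketch (the Dockerfile.base name is made up, and undertest-base just echoes the tag used in the question's update):

# build the shared base image first, outside of Compose
docker build -t undertest-base:latest -f Dockerfile.base .
# then let Compose build the images whose Dockerfiles start with FROM undertest-base:latest
docker-compose build
docker-compose up -d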
I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances and my end goal is to deploy the docker-compose.yml that my repo on GitHub has:
version: "3"
services:
db:
image: postgres
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- ./tmp/db:/var/lib/postgresql/data
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub, but what is the point of it?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with its different services you would still need to run the containers using docker-compose, and you wouldn't have the compose file unless you pull it from the GitHub repo again or create it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
For this setup it's important to delete the volumes: that overwrite the image's content with a copy of the application code from the host. You also shouldn't need an override command:. For the deployment you'd replace build: with image:.
version: "3.8"
services:
db: *from-the-question
web:
image: registry.example.com/me/web:${WEB_TAG:-latest}
ports:
- "3000:3000"
depends_on:
- db
environment: *web-environment-from-the-question
# no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
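As a sketch, that override file could be as small as the following (Compose merges docker-compose.override.yml into docker-compose.yml automatically when both are present):

# docker-compose.override.yml - kept only on the development machine
version: "3.8"
services:
  web:
    build: .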
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and you give each a date-stamped tag; registry.example.com/me/web:20220220. But, something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
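For instance, a rough sketch of such a script, mirroring the Compose file above (the network and container names are made up for illustration, and the database storage volume is omitted for brevity):

#!/bin/sh
# hand-rolled equivalent of docker-compose up -d for the two services above
docker network create myapp-net 2>/dev/null || true
docker run -d --name db --net myapp-net \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password \
  postgres
docker run -d --name web --net myapp-net \
  -e POSTGRES_USER=user -e POSTGRES_PASSWORD=password -e POSTGRES_HOST=db \
  -p 3000:3000 \
  registry.example.com/me/web:"${WEB_TAG:-latest}"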
Finally, remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, SaltStack, or Chef to install Ruby onto the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.
I have a docker-compose based application which I am deploying to production server.
Two of its containers share a directory's contents using a data volume, like so:
...
services:
  service1:
    volumes:
      - server-files:/var/www
  service2:
    volumes:
      - server-files:/var/www
  db:
    volumes:
      - db-persistent:/var/lib/mysql
volumes:
  server-files:
  db-persistent:
service1's /var/www is populated when its image is built.
My understanding is that if I make changes to code stored in /var/www and then rebuild service1,
the updates will be hidden by the existing server-files volume.
What is the correct way to update this deployment so that changes propagate with minimal
downtime and without deleting other volumes?
Edit
Just to clarify my current deploy process works as follows:
Update code locally and commit/push changes to GitHub
Pull changes on server
Run docker-compose build to rebuild any changed containers
Run docker-compose up -d to reload any updated containers
The issue is that changed code within /var/www is hidden by the already existing named volume server-files. My question is what is the best way to handle this update?
I ended up handling this by managing the database's volume db-persistent outside of docker-compose. Before running docker-compose up I created the volume manually by running docker volume create db-persistent, and in docker-compose.yml I marked the volume as external with the following configuration:
volumes:
  db-persistent:
    external: true
My deploy process now looks as follows:
Pull changes from Github
Run docker-compose build to automatically build any changed containers.
Shutdown existing application and remove volumes by running docker-compose down -v
Run docker-compose up to start application again.
In this new setup running docker-compose down -v only removes the server-files volume leaving the db-persistent volume untouched.
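Put together as a sketch, the whole cycle looks like this (the volume only has to be created once, before the first deploy):

# one-time setup on the server
docker volume create db-persistent

# each deploy
git pull
docker-compose build
docker-compose down -v   # removes server-files, leaves the external db-persistent untouched
docker-compose up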
First of all, docker-compose isn't meant for production deployment. This issue illustrates one of the reasons why: there are no automatic rolling upgrades. Creating a single-node swarm would make your life easier. To deploy, all you would have to do is run docker stack deploy -c docker-compose.yml <stack-name>. However, you might have to tweak your compose file and do some initial setup.
Second of all, you are misunderstanding how docker is meant to be used. Creating a volume binding for your application code is only a shortcut that you do in development so that you don't have to rebuild your image every time you change your code. When you deploy your application however, you build a production image of your application that contains all the code needed to run.
Once this production image is built, you push it up to an image repository (probably docker hub). Your production server pulls the image from that repository, and uses it to create a container that runs your application.
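As a sketch of that flow (the image name, tag, and registry are placeholders, not something prescribed by Docker):

# on the build machine or CI server
docker build -t registry.example.com/me/myapp:1.0.0 .
docker push registry.example.com/me/myapp:1.0.0

# on the production server
docker pull registry.example.com/me/myapp:1.0.0
docker-compose up -d   # compose file references image: registry.example.com/me/myapp:1.0.0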
If you're pulling your application code onto your production server anyway, then why use Docker at all? In that scenario, it's just making your life harder and adding extra steps, when you could run everything directly on your host VM and write a simple script to stop your apps, pull your code, and restart your apps.
What's the best practice for using a Dockerfile together with docker-compose.yml? And how do I do CI/CD with Jenkins?
I have 2 microservices and one Postgres database. I created a docker-compose.yml file:
version: '3.1'
services:
  myflashcards-service-dictionary:
    image: myflashcards-service-dictionary
  db:
    image: postgres
    restart: always
    ports:
      - 5434:5432
The question is what to write in the "image:" section. Should I first run
mvn clean install -DskipTests dockerfile:build? But then what about the image name?
I'd like to know how to automate the whole CI/CD process.
I have Dockerfile:
FROM openjdk:8-jdk-alpine
ADD target/myflashcards-service-dictionary.jar myflashcards-service-dictionary.jar
ENTRYPOINT exec java -Djava.security.egd=file:/dev/./urandom -Dspring.profiles.active=$profile -jar /myflashcards-service-dictionary.jar
EXPOSE 8092
I also have a docker-compose.yml, but how does docker-compose.yml know which image should be used?
Could you briefly outline the main process of deploying my microservices app to a server?
How are Dockerfile and docker-compose meant to be used together? When are these files necessary?
Do we need a Dockerfile only to create an image on Docker Hub?
Your Dockerfile is similar to the Maven POM file; it's a set of instructions for Docker to build an image with (docker build -t image-name .). A Dockerfile is a must; you cannot build your own images without one. It's like trying to use Maven without a POM file.
The name of the image is what you give to the Maven plugin (<repository>spotify/foobar</repository>) or to docker build -t <image-name> ., and it can be anything you like.
Docker Compose is a tool that can be used to manage a service that is composed of multiple micro-services. It allows users to create an orchestration plan that can be run later, and to script the more complex parts of the Docker environment such as volumes, networking, restart policies and more.
The Docker Compose file is optional and can be replaced with an alternative like HashiCorp Nomad, but Docker Compose is one of the easiest to use; stick to it if you're new to Docker.
Docker Compose is able to build and use an image at runtime (useful for development) or run an image that already exists in a repository (recommended for production). The full Docker Compose documentation explains how to write one.
Build at runtime
version: '3.1'
services:
  myflashcards-service-dictionary:
    build: path/to/folder/of/Dockerfile
  db:
    image: postgres
    restart: always
    ports:
      - 5434:5432
Run a pre-existing image
version: '3.1'
services:
  myflashcards-service-dictionary:
    image: myflashcards-service-dictionary
  db:
    image: postgres
    restart: always
    ports:
      - 5434:5432
A Dockerfile can be used without Docker Compose; the only difference is that it's not practical in production, since it amounts to a single-service deployment. As far as I'm aware, it cannot be used with Docker Swarm.
As far as CI/CD goes, you can use a Maven plugin like the Dockerfile Maven Plugin; you can find the docs here. The resulting image can then be pushed to a repository like Docker Hub, AWS ECR, or even a self-hosted one (I wouldn't recommend the last unless you're comfortable with setting up highly secure networks, especially if it's not an internal network).
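As a rough sketch of what a Jenkins build stage could run (the registry, image name, and tag are placeholders; the dockerfile:build goal assumes the Dockerfile Maven Plugin is configured in the POM, as in the question):

# build the jar and the local image
mvn clean install -DskipTests dockerfile:build

# tag and push it to the registry your deployment environment pulls from
docker tag myflashcards-service-dictionary registry.example.com/me/myflashcards-service-dictionary:${BUILD_NUMBER}
docker push registry.example.com/me/myflashcards-service-dictionary:${BUILD_NUMBER}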
Dockerfile is a spec to build a container image and is used by Docker:
docker build --tag=${REPO}/${IMAGE}:${TAG} --file=./Dockerfile .
The default ${REPO} is docker.io aka DockerHub and is assumed if null|omitted.
You only need Dockerfile for images that you wish to build. For existing images, these are docker pull ... from container image registries (e.g. DockerHUb, Google Container Registry, Quay). Pulls are often performed implicitly by e.g. docker-compose up.
Once built, you may reference this image from a docker-compose.yaml file.
Docker Compose looks in your local image cache (docker image ls) for the images. If it doesn't find them, then (with your file) it will try to pull myflashcards-service-dictionary:latest and postgres:latest from the default repo (aka DockerHub).
It's possible to include a build spec in docker-compose.yaml too, in which case, if the images aren't found locally, Docker Compose will try to docker build ... them for you.
Docker Compose is one tool that permits multiple containers to be configured, run, networked etc. Another, increasingly popular tool for orchestrating containers is Kubernetes.
There's lots of good documentation online for Docker, Docker-Compose and developing CI/CD pipelines.
I'm trying to deploy an app that's built with docker-compose, but it feels like I'm going in completely the wrong direction.
I have everything working locally—docker-compose up brings up my app with the appropriate networks and hosts in place.
I want to be able to run the same configuration of containers and networks on a production machine, just using a different .env file.
My current workflow looks something like this:
docker save [web image] [db image] > containers.tar
zip deploy.zip containers.tar docker-compose.yml
rsync deploy.zip user#server
ssh user#server
unzip deploy.zip ./
docker load -i containers.tar
docker-compose up
At this point, I was hoping to be able to run docker-compose up again on the server, but that tries to rebuild the images as per the docker-compose.yml file.
I'm getting the distinct feeling that I'm missing something. Should I be shipping over my full application then building the images at the server instead? How would you start composed containers if you were storing/loading the images from a registry?
The problem was that I was using the same docker-compose.yml file in development and production.
The app service didn't specify a repository name or tag, so when I ran docker-compose up on the server, it just tried to build the Dockerfile in my app's source code directory (which doesn't exist on the server).
I ended up solving the problem by adding an explicit image field to my local docker-compose.yml.
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    build: ./app
Then created an alternative compose file for production:
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    # no build field!
After running docker-compose build locally, the web service image is built with the repository name my-private-docker-registry and the tag latest.
Then it's just a case of pushing the image up to the repository.
docker push 'my-private-docker-registry:latest'
After running docker pull on the server, it's safe to stop and recreate the running containers with the new images.
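On the server side, the update then becomes a sketch like this (assuming the production compose file without the build field is already there):

docker-compose pull     # fetches my-private-docker-registry:latest
docker-compose up -d    # recreates only the containers whose image changed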
Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources, as they are needed to run the server into a folder named build. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources, run docker-compose up --build, and have everything running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the docker container.
My first attempt that almost work was the following:
version: "2"
services:
nginx:
volumes_from:
- node
node:
volumes:
- /code/build
When I first ran docker-compose build and docker-compose up, everything seemed to work fine: container A is created with the build files inside it, and container B can access them as expected.
However, the issue happens when the sources are updated. When that happens, the new build files do not replace the old ones inside the container, because the existing volume seems to take priority. So after the first time I always have old files in both container A and B.
I looked for a way to force the volume to be recreated from scratch every time I run docker-compose build but did not find anything. The only thing I found would be to use docker-compose stop && docker-compose rm, but doing that every time seems a bit hacky, and in addition it leads to quite a long downtime compared to just replacing the existing container with the new version via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?
I'd redo the workflow, use a named volume that's mounted in multiple containers, and one of those containers is an updater that has the application build environment. Then on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
build:
driver: local
services:
nginx:
volumes:
- build:/code/build
updater:
volumes:
- build:/code/build
Then on any changes, you can run docker-compose run updater and it will push the latest changes to your volume, where nginx can use them without ever stopping your other containers. Since the updater is a batch job that exits, even a docker-compose up would launch it again.
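A minimal sketch of what the updater image could look like, assuming a Node-based build whose output lands in ./build (the repo URL and npm scripts are placeholders, not part of the original setup):

# Dockerfile.updater - build environment that refreshes the shared volume on each run
FROM node:18
WORKDIR /src
RUN git clone https://example.com/me/myapp.git .
# on every container start: pull the latest sources, rebuild, and copy the output
# into the named volume mounted at /code/build
CMD git pull && npm ci && npm run build && cp -r build/. /code/build/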