docker-compose: (Re)Build Dockerfile from inside docker-compose file?

Had a hard time googling this question, as most suggestions cover how to do it through the command line, which I sadly do not have access to in this environment. Is it possible to do the equivalent of
docker-compose up --build --force-recreate
From inside a docker-compose file?

The environment you describe sounds similar to Kubernetes in a couple of ways, except that it's driven by a Docker Compose YAML file. The strategies that work for Kubernetes will work here too. In Compose there's no way to put "actions" in a YAML file, or flag that a service always needs to be rebuilt or recreated. It sounds like the only thing it's possible to do in your environment is run docker-compose up -d.
The trick that I'd use here is to change the image: for a container whenever you have a change you need to deploy. That means the image tag needs to be something unique; it could be a date stamp or source control ID.
version: '3.8'
services:
  myapp:
    image: registry.example.com/myapp:20220209
Now when you have a change to your application, you (or your CI system) need to build a new copy of it, offline, and docker push it to a registry. Then change this image: value, and push the updated file to the deployment system. Compose will see that it's only running version 20220208 from yesterday and recreate that specific container.
If you have the ability to specify environment variables, you can use that in the Compose setup
image: registry.example.com/myapp:${MYAPP_TAG:-latest}
to avoid having to physically modify the file.
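The release side of that workflow might then look like this (a minimal sketch; the registry name and date-stamp tag scheme are illustrative):
docker build -t registry.example.com/myapp:20220209 .
docker push registry.example.com/myapp:20220209
MYAPP_TAG=20220209 docker-compose up -d
The last command relies on the ${MYAPP_TAG:-latest} substitution above, so no file edit is needed for a new deploy.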

Related

docker-compose wait on other service before build

There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first, basically ignoring depends_on (or interpreting it as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written to handle the unavailability of services gracefully (at least for some time), are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script runs.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
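A minimal sketch of that bootstrap pattern, reusing the service and file names from the question:
version: "3.5"
services:
  postgres:
    image: postgres:10
  bootstrap:
    build:
      context: .
      dockerfile: Dockerfile.setup # image whose CMD runs ./wait && ./bootstrap.sh
    depends_on:
      - postgres
Note that depends_on here only orders container startup; the actual waiting for postgres to accept connections happens at runtime, in the wait script.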
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, even if you could access the database from a Dockerfile, if you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step would remove the storage the database uses. And while the up --build option causes the image to be rebuilt, the layer cache would skip all of the steps and produce the same image as before, so whatever changes you had made to the database wouldn't happen again.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
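For the shared-base-image case, the usual workaround is to build the base image explicitly before invoking Compose (a sketch; the image and file names are illustrative):
docker build -t myapp-base:latest -f Dockerfile.base .
docker-compose build
docker-compose up -d
The application Dockerfiles in the Compose setup can then start with FROM myapp-base:latest.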

docker-compose: start service from same docker-compose file using env vars to alter container name

We are using docker in a team of developers. We have one project all devs work on. Since we do not want to have one docker-compose.yml for each developer, we use environment variables to pass the username to docker-compose. Inside docker-compose.yml we have something like this:
services:
  myservice:
    image: myimage
    container_name: ${user}_myservice
This used to work very well for us but has stopped working lately. Assume there are two users. The first user runs docker-compose up myservice, launching ${user1}_myservice. When the second user issues the same command, docker-compose kills the container running as ${user1}_myservice and starts ${user2}_myservice.
Somehow it seems that docker services are now linked directly and not only through the container_name variable as before.
We recently upgraded docker to Docker version 17.09.0-ce, build afdb6d4. I attribute the change to the "new" docker version. I have tried downgrading docker-compose to previous versions and it seems this is not related to docker-compose.
UPDATE
Inspired by the answer below we found the following workaround:
We set the env variable COMPOSE_PROJECT_NAME to be the username on login of the user on the host. Then we extend the service name in our docker-compose.yml files to be <proj>_<service>, thereby avoiding any conflicts between identical service names across projects.
Rather than mucking about with variables in docker-compose.yml, it's probably easier just to make use of the --project-name (-p) option to docker-compose.
Normally, docker-compose derives the project name from the name of the directory that contains your docker-compose.yaml file. So if two people try to start an application from a directory named myapp, they will end up with a conflict because both instances will attempt to use the same name.
However, if they were to run instead:
docker-compose --project-name ${USER}_myapp ...
Then docker-compose for each user would use different project names (like alice_myapp and bob_myapp) and there would be no conflict.
If people get tired of using the -p option, they could create a .env file like this:
COMPOSE_PROJECT_NAME=alice_myapp
And this would have the same effect as specifying -p alice_myapp on the command line.
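For example, each user could set the variable once in their shell profile, as the question's update describes (a sketch; the project-name scheme is illustrative):
export COMPOSE_PROJECT_NAME="${USER}_myapp"
docker-compose up -d myservice
With that in place, Compose prefixes all container, network, and volume names with the per-user project name, so the two users no longer collide.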

Docker-compose specify config file

I'm new to Docker and now I want to use docker-compose. I would like to provide the docker-compose.yml script with a config file containing host/port, maybe credentials, and other Cassandra configuration.
DockerHub link.
How to specify it in the compose script:
cassandra:
  image: bitnami/cassandra:latest
You can do it using Docker Compose environment variables (https://docs.docker.com/compose/environment-variables/#substituting-environment-variables-in-compose-files). You can also specify a separate environment file with environment variables.
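For example (a minimal sketch; CASSANDRA_PASSWORD is one of the variables the Bitnami image documents, and the env file name is illustrative):
cassandra:
  image: bitnami/cassandra:latest
  env_file:
    - ./cassandra.env
  environment:
    - CASSANDRA_PASSWORD=${CASSANDRA_PASSWORD}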
Apparently, including a file outside of the scope of the image build, such as a config file, has been a major point of discussion (and perhaps some tension) between the community and the folks at Docker. A lot of the developers in that linked issue recommended adding config files in a folder as a volume (unfortunately, you cannot add an individual file).
It would look something like:
volumes:
  - ./config_folder:/config_folder # the container-side path must be absolute
Where config_folder would be at the root of your project, at the same level as the Dockerfile.
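Applied to the Cassandra service above, that might look like this (a sketch; the container-side path depends on where the image reads its configuration, so check the image documentation):
cassandra:
  image: bitnami/cassandra:latest
  volumes:
    - ./config_folder:/bitnami/cassandra/conf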
If Docker Compose environment variables cannot solve your problem, you can create a new image from the original image, using COPY to override the config file.
Dockerfile:
FROM original_image:xxx
COPY local_conffile /in_image_dir/conf_dir/
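Then build the derived image (the tag name is illustrative):
docker build -t my-cassandra:custom .
and point image: in the compose file at my-cassandra:custom instead of the original image.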

Deploying docker-compose containers

I'm trying to deploy an app that's built with docker-compose, but it feels like I'm going in completely the wrong direction.
I have everything working locally: docker-compose up brings up my app with the appropriate networks and hosts in place.
I want to be able to run the same configuration of containers and networks on a production machine, just using a different .env file.
My current workflow looks something like this:
docker save [web image] [db image] > containers.tar
zip deploy.zip containers.tar docker-compose.yml
rsync deploy.zip user@server
ssh user@server
unzip deploy.zip ./
docker load -i containers.tar
docker-compose up
At this point, I was hoping to be able to run docker-compose up again once everything is on the server, but that tries to rebuild the containers as per the docker-compose.yml file.
I'm getting the distinct feeling that I'm missing something. Should I be shipping over my full application then building the images at the server instead? How would you start composed containers if you were storing/loading the images from a registry?
The problem was that I was using the same docker-compose.yml file in development and production.
The app service didn't specify a repository name or tag, so when I ran docker-compose up on the server, it just tried to build the Dockerfile in my app's source code directory (which doesn't exist on the server).
I ended up solving the problem by adding an explicit image field to my local docker-compose.yml.
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    build: ./app
Then created an alternative compose file for production:
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    # no build field!
After running docker-compose build locally, the web service image is built with the repository name my-private-docker-registry and the tag latest.
Then it's just a case of pushing the image up to the repository.
docker push 'my-private-docker-registry:latest'
And after running docker pull on the server, it's safe to stop and recreate the running containers with the new images.
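On the server, the deploy then reduces to pulling and recreating (no build context needed):
docker-compose pull
docker-compose up -d
Compose notices the image has changed and recreates only the affected containers.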

Docker compose - access data from container A in container B

Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources into a folder named build, as they are needed to run the server. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources and run docker-compose up --build, and everything is running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the docker container.
My first attempt that almost work was the following:
version: "2"
services:
nginx:
volumes_from:
- node
node:
volumes:
- /code/build
When I first run docker-compose build && docker-compose up, everything seems to work fine: the volume is created from container A with the build files inside it, and container B can access them as expected.
However, the issue happens when the sources are updated. When that happens, the new build files do not replace the old ones inside the container, because the data in the existing volume seems to take priority. So after the first time, I always have old files for both container A and B.
I investigated a way to force the volume to be recreated from scratch every time I run docker-compose build but did not find anything. The only thing I found would be to use docker-compose stop && docker-compose rm, but it seems a bit hacky to do that every time, and in addition it leads to quite a long downtime compared to just replacing the existing container with a new version via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?
I'd redo the workflow, use a named volume that's mounted in multiple containers, and one of those containers is an updater that has the application build environment. Then on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
build:
driver: local
services:
nginx:
volumes:
- build:/code/build
updater:
volumes:
- build:/code/build
Then on any changes, you can run docker-compose run updater and it will push the latest changes to your volume, where nginx can use them without ever stopping your other containers. Since it's a batch job that exits, even a docker-compose up would launch the updater again.
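A sketch of what the updater's CMD might do, assuming a Node.js app as in the question (the repo layout and build commands are placeholders):
#!/bin/sh
cd /code            # sources baked into the updater image
git pull            # fetch the latest sources
npm install
npm run build       # outputs into /code/build, the shared named volume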
