Deploying docker-compose containers

I'm trying to deploy an app that's built with docker-compose, but it feels like I'm going in completely the wrong direction.
I have everything working locally—docker-compose up brings up my app with the appropriate networks and hosts in place.
I want to be able to run the same configuration of containers and networks on a production machine, just using a different .env file.
My current workflow looks something like this:
docker save [web image] [db image] > containers.tar
zip deploy.zip containers.tar docker-compose.yml
rsync deploy.zip user@server:
ssh user@server
unzip deploy.zip
docker load -i containers.tar
docker-compose up
At this point, I was hoping to be able to run docker-compose up again once the images were on the server, but that tries to rebuild the images as per the docker-compose.yml file.
I'm getting the distinct feeling that I'm missing something. Should I be shipping over my full application and building the images on the server instead? How would you start composed containers if you were storing/loading the images from a registry?

The problem was that I was using the same docker-compose.yml file in development and production.
The app service didn't specify a repository name or tag, so when I ran docker-compose up on the server, it just tried to build the Dockerfile in my app's source code directory (which doesn't exist on the server).
I ended up solving the problem by adding an explicit image field to my local docker-compose.yml.
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    build: ./app
Then I created an alternative compose file for production:
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    # no build field!
After running docker-compose build locally, the web service image is built with the repository name my-private-docker-registry and the tag latest.
Then it's just a case of pushing the image up to the repository.
docker push 'my-private-docker-registry:latest'
And after running docker pull on the server, it's safe to stop and recreate the running containers with the new images.
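For completeness, a minimal sketch of the production-side commands under that workflow. The filename docker-compose.prod.yml is an assumption for the build-less compose file above, not something from the original setup:
# On the production machine, with the production compose file and .env in place
# (docker-compose.prod.yml is an assumed name for the registry-only compose file)
docker-compose -f docker-compose.prod.yml pull    # fetch the pushed image from the registry
docker-compose -f docker-compose.prod.yml up -d   # recreate containers from the pulled image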

Related

docker-compose wait on other service before build

There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first, basically ignoring depends_on (or rather interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where ./wait waits for postgres and ./bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
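For context, a minimal sketch of how such a bootstrap container could be wired into the compose file; the service name bootstrap and the filename Dockerfile.bootstrap are illustrative, not taken from the original setup:
version: "3.5"
services:
  postgres:
    image: postgres:10
  bootstrap:
    build:
      context: .
      dockerfile: Dockerfile.bootstrap   # image containing ./wait and ./bootstrap.sh
    depends_on:
      - postgres                         # ordering applies at run time, not at build time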
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. The up --build option will cause the image to be rebuilt, but the build sequence will skip all of the steps (because of the layer cache) and produce the same image as originally, so whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
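If the goal is a shared base image rather than a build-time service dependency, one common workaround (a sketch, not something Compose provides natively; the image tag and path are placeholders) is to build and tag the base image in a separate step before running docker-compose build:
# Build the shared base image first, outside of Compose
docker build -t my-base-image:latest ./base   # tag and path are illustrative
# The other services' Dockerfiles can then start with: FROM my-base-image:latest
docker-compose build
docker-compose up -d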

Why does docker-compose launch different image when run with the '-p' flag

We have a setup with our Jenkins server running within a Docker container which I have taken over from a colleague who has left.
I am seeing some behaviour which I do not understand and have not been able to work out what is going on from the documentation.
My folder structure looks like this:
└── Master
    ├── docker-compose.yml
    └── jenkins-master
        └── Dockerfile
My docker-compose.yaml file looks like this (this is just a snippet of the relevant part):
version: '3'
services:
  master:
    build: ./jenkins-master
I have updated the version of the base Jenkins image in jenkins-master/Dockerfile and then rebuilt using docker-compose build.
This succeeds and results in an image called master_master.
If I run docker images I see this new image as well as a previous image:
REPOSITORY       TAG      IMAGE ID   CREATED         SIZE
master_master    latest   <id1>      16 hours ago    704MB
jenkins_master   latest   <id2>      10 months ago   707MB
As I understand it, the name master_master is a result of the base folder name (i.e. Master) and the service name of master in the docker-compose.yaml file.
I don't know how the existing image ended up with the name jenkins_master. Would the folder name have had to be Jenkins rather than Master, or is there another way that would have resulted in this name?
When I run docker-compose up -d it uses the master_master image to launch a container (called master_master_1).
When I run docker-compose -p jenkins up -d it uses the jenkins_master image to launch a container (called jenkins_master_1).
Apart from the different container names, the resultant running containers are different as I can see that the Jenkins versions are different (as per the change I made in the Dockerfile).
I do not change the docker-compose file at all between running these 2 commands and yet different images are run.
The documentation that I have found for specifying the -p (--project-name) flag states:
Sets the project name. This value is prepended along with the service name to the container on start up. For example, if your project name is myapp and it includes two services db and web, then Compose starts containers named myapp_db_1 and myapp_web_1 respectively. Setting this is optional. If you do not set this, the COMPOSE_PROJECT_NAME defaults to the basename of the project directory.
There is nothing that leads me to believe that the -p flag will result in a different image being run.
So what is going on here?
How does docker-compose choose which image to run?
Is this happening due to the names of the images master_master vs jenkins_master?
If you're going to use the docker-compose -p option, you need to use it with every docker-compose command, including docker-compose build.
If your docker-compose.yml file doesn't specify an image:, Compose constructs an image name from the current project name and the Compose service name. The project name and Docker object metadata are the only way it has to remember anything. So what's happening here is that the plain docker-compose build builds the image for the master service in the master project, but then docker-compose -p jenkins up looks for the master service in the jenkins project, and finds the other image.
docker-compose -p jenkins build
docker-compose -p jenkins up -d
It may or may not be easier to set the COMPOSE_PROJECT_NAME environment variable, possibly putting this in a .env file. In a Jenkins context, I also might consider using Jenkins's Docker integration to build (and push) the image, and only referring to image: in the docker-compose.yml file.
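As a sketch, pinning the project name via a .env file next to the docker-compose.yml looks like this (Compose reads this file automatically from the project directory):
# .env
COMPOSE_PROJECT_NAME=jenkins
With that in place, plain docker-compose build and docker-compose up -d both operate on the jenkins project, so they build and run the same jenkins_master image.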
Add an image option in the docker-compose.yml file. It will create the container from the specified docker image.
build: ./jenkins-master
image: dockerimage_name:tag

Mount files in read-only volume (where source is in .dockerignore)

My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc...). I don't want these files built into the docker image; however, they need to be visible to the docker container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? e.g. I could just pass the .env file with docker run --env-file .env IMAGE_NAME
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this seems not to be the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image, and that image does not include the source of any volumes. Only the Dockerfile and your build context are used as input to the build. The rest of the compose file consists of run-time settings that apply to containers. Many projects do not even use the compose file for building the image; for those projects, the compose file is simply a way to define the default settings for the containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
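To illustrate the difference, here is roughly what the docker run invocation would have to look like to reproduce the compose file's volume setting by hand (IMAGE_NAME stands in for whatever docker-compose build tagged the image as):
# docker run does not read docker-compose.yaml, so the bind mount has to be repeated manually
docker run -v "$(pwd)/.credentials:/app/.credentials:ro" IMAGE_NAME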

Can I deploy a service to a docker swarm via a docker-compose.yml file that references the image by its ID?

I create a docker-compose.yml file that references docker images by ID. When I try to deploy the compose file to a swarm, I get the error "No such image: 2e69080faee3:latest" repeatedly.
The image does exist locally. In my scenario, it was loaded from a tar file that was provided to me by an upstream process.
Here is the docker-compose.yml:
version: '3.4'
services:
  registry:
    image: 2e69080faee3
The output from docker images confirms that this image is present on the local node.
I'm not sure whether I should expect this situation to work or not. I am struggling to find documentation that says this should NOT work, but there's nothing that explicitly says that it SHOULD work, either.
According to the documentation, yes, in compose file version 3.2.

Docker compose - access data from container A in container B

Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources into a folder named build, as they are needed to run the server. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources, run docker-compose up --build, and have everything running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the docker container.
My first attempt, which almost worked, was the following:
version: "2"
services:
nginx:
volumes_from:
- node
node:
volumes:
- /code/build
When I first ran docker-compose build && docker-compose up, everything seemed to work fine: the volume is created from container A with the build files inside it, and container B can access them as expected.
However, the issue happens when the sources are updated: the new build files do not replace the old ones inside the volume, because the existing volume seems to take priority. So after the first time I always have old files in both container A and container B.
I investigated a way to force the volume to be recreated from scratch every time I run docker-compose build, but did not find anything. The only thing I found would be to use docker-compose stop && docker-compose rm, but it seems a bit hacky to do that every time, and in addition it leads to quite a long downtime compared to just replacing the existing containers with new versions via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?
I'd redo the workflow, use a named volume that's mounted in multiple containers, and one of those containers is an updater that has the application build environment. Then on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
build:
driver: local
services:
nginx:
volumes:
- build:/code/build
updater:
volumes:
- build:/code/build
Then on any changes, you can run docker-compose run updater and it will push the latest changes to your volume, where nginx can use them without ever stopping your other containers. Since it's a batch job that exits, even a docker-compose up would launch the updater again.
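A rough sketch of what the updater image's Dockerfile might look like; the base image, repository URL, build commands, and output folder below are placeholders, not part of the original answer:
# Dockerfile for the updater service; every name here is a placeholder
FROM node:latest
WORKDIR /tmp/src
# On every run: fetch the latest sources, build them, and copy the result
# into the shared volume that nginx serves from (/code/build)
CMD git clone https://example.com/your/repo.git . || git pull; \
    npm install && npm run build && \
    cp -r build/. /code/build/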
