Docker compose - access data from container A in container B - docker

Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources into a folder named build, as they are needed to run the server. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources, run docker-compose up --build, and have everything running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the docker container.
My first attempt that almost worked was the following:
version: "2"
services:
nginx:
volumes_from:
- node
node:
volumes:
- /code/build
When I first ran docker-compose build and docker-compose up, everything seemed to work fine: the volume is created from container A with the build files inside it, and container B can access them as expected.
However, the issue appears when the sources are updated. The new build files do not replace the old ones inside the volume, because the existing volume seems to take priority. So after the first time, both container A and container B keep seeing the old files.
I looked for a way to force the volume to be recreated from scratch every time I run docker-compose build but did not find anything. The only thing I found was to use docker-compose stop && docker-compose rm, but it seems a bit hacky to do that every time, and it also leads to quite a long downtime compared to just replacing the existing container with a new version via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?

I'd redo the workflow, use a named volume that's mounted in multiple containers, and one of those containers is an updater that has the application build environment. Then on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
build:
driver: local
services:
nginx:
volumes:
- build:/code/build
updater:
volumes:
- build:/code/build
Then on any change, you can run docker-compose run updater and it will push the latest code to your volume, where nginx can use it without ever stopping your other containers. Since the updater is a batch job that exits, even a plain docker-compose up would launch it again.
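The answer doesn't show the updater itself; a minimal sketch of what its entrypoint could look like, assuming a Node.js build and the shared volume mounted at /code/build (repository layout, paths, and build commands are illustrative, not from the original answer):

#!/bin/sh
# updater entrypoint (sketch): refresh sources, rebuild, copy into the shared volume
set -e
cd /src
git pull                          # fetch the latest sources
npm install && npm run build      # rebuild into /src/build
cp -a /src/build/. /code/build/   # replace the contents of the shared volume

Because the script exits when it is done, the updater behaves like the batch job described above.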

Related

docker-compose wait on other service before build

There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first (ignoring depends_on basically - or interpreting depends_on as start dependency, not build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written in a way that handles the unavailability of services (at least for some time) gracefully, are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: Solved the underlying problem by pushing all the (database) setup required to the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so the overall system becomes fully testable after that script.
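The wait helper isn't shown here; it is presumably something like the docker-compose-wait tool mentioned earlier. A hand-rolled shell equivalent might look like this, assuming the postgres client tools are available in the image and the database is reachable under the hostname postgres:

#!/bin/sh
# wait (sketch): block until postgres accepts connections
until pg_isready -h postgres -p 5432 >/dev/null 2>&1; do
  echo "waiting for postgres..."
  sleep 1
done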
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. While the up --build option will cause the image to be rebuilt, the layer cache will let the build sequence skip all of the steps and produce the same image as originally, and whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.

How to handle updating docker-compose based application in production

I have a docker-compose based application which I am deploying to production server.
Two of its containers share a directory's contents using a data volume, like so:
...
services:
  service1:
    volumes:
      - server-files:/var/www
  service2:
    volumes:
      - server-files:/var/www
  db:
    volumes:
      - db-persistent:/var/lib/mysql
volumes:
  server-files:
  db-persistent:
service1's /var/www is populated when its image is built from its Dockerfile.
My understanding is that if I make changes to the code stored in /var/www, the updates will be hidden by the existing server-files volume when I rebuild service1.
What is the correct way to update this deployment so that changes propagate with minimal downtime and without deleting other volumes?
Edit
Just to clarify, my current deploy process works as follows:
Update code locally and commit/push changes to Github
Pull changes on server
Run docker-compose build to rebuild any changed containers
Run docker-compose up -d to reload any updated containers
The issue is that changed code within /var/www is hidden by the already existing named volume server-files. My question is what is the best way to handle this update?
I ended up handling this by managing the database volume db-persistent outside of docker-compose. Before running docker-compose up, I created the volume manually by running docker volume create db-persistent, and in docker-compose.yml I marked the volume as external with the following configuration:
volumes:
  db-persistent:
    external: true
My deploy process now looks as follows:
Pull changes from Github
Run docker-compose build to automatically build any changed containers.
Shut down the existing application and remove its volumes by running docker-compose down -v
Run docker-compose up to start the application again.
In this new setup, running docker-compose down -v only removes the server-files volume, leaving the db-persistent volume untouched.
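Collected into a single deploy script, that process might look like this (a sketch, assuming it is run from the directory containing docker-compose.yml):

#!/bin/sh
set -e
git pull                 # 1. pull changes from Github
docker-compose build     # 2. rebuild any changed images
docker-compose down -v   # 3. stop the app and remove non-external volumes (server-files)
docker-compose up -d     # 4. start the application again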
First of all, docker-compose isn't meant for production deployment. This issue illustrates one of the reasons why: there are no automatic rolling upgrades. Creating a single-node swarm would make your life easier; to deploy, all you would have to do is run docker stack deploy -c docker-compose.yml <stack-name>. However, you might have to tweak your compose file and do some initial setup.
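A minimal sketch of that flow, with an illustrative stack name:

# one-time setup on the server
docker swarm init
# deploy or update the stack; Swarm rolls out changed services without a full stop
docker stack deploy -c docker-compose.yml myapp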
Second of all, you are misunderstanding how docker is meant to be used. Creating a volume binding for your application code is only a shortcut that you do in development so that you don't have to rebuild your image every time you change your code. When you deploy your application however, you build a production image of your application that contains all the code needed to run.
Once this production image is built, you push it up to an image repository (probably docker hub). Your production server pulls the image from that repository, and uses it to create a container that runs your application.
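In other words, the deploy pipeline looks roughly like this (registry name and tag are illustrative):

# on the build machine / CI
docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3
# on the production server (the compose file references the image by that tag)
docker pull registry.example.com/myapp:1.2.3
docker-compose up -d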
If you're pulling your application code onto your production server, then why use Docker at all? In that scenario, it's just making your life harder and adding extra steps, when you could just run everything directly on your host VM and write a simple script to stop your apps, pull your code, and restart your apps.

Docker compose command is failing with conflict

I am bringing up my project dependencies using docker-compose. So far this used to work
docker-compose up -d --no-recreate;
However, today I tried running the project again after a couple of weeks and I was greeted with this error message:
Creating my-postgres ... error
ERROR: for my-postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: for postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: Encountered errors while bringing up the project.
My docker-compose.yml file is
postgres:
  container_name: my-postgres
  image: postgres:latest
  ports:
    - "15432:5432"
Docker version is
Docker version 19.03.1, build 74b1e89
Docker compose version is
docker-compose version 1.24.1, build 4667896b
Intended behavior of this call is to:
make the container if it does not exist
start the container if it exists
just chill and do nothing if the container is already started
Docker Compose normally assigns a container name based on the current project name and the name of the services: block. Specifying container_name: explicitly overrides this; but it means you can't launch multiple copies of the same Compose file with different project names (from different directories), because both copies would try to use the same explicitly chosen container name.
You almost never care what the container name is explicitly. It only really matters if you’re trying to use plain docker commands to manipulate Compose-managed containers; it has no impact on inter-service communication. Just delete the container_name: line.
(For similar reasons you can almost always delete hostname: and links: sections if you have them with no practical impact on your overall system.)
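With that change, the service definition from the question would simply be the following, and Compose will pick a name like <project>_postgres_1 on its own:

postgres:
  image: postgres:latest
  ports:
    - "15432:5432"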
In my case I had moved the project to another directory.
When I tried to run docker-compose up, it failed because of some conflicts.
Running docker system prune resolved them.
It's caused by being in a different directory than when you last ran docker-compose up. One option is to change back to the original directory. Or if you've configured it as a systemd service you can use systemctl.
Well...the error message seems pretty straightforward to me...
The container name "/my-postgres" is already in use by container
If you just want to restart where you left, you should use docker-compose start.
Otherwise, just clean up your workspace before running it:
docker-compose down
docker-compose up -d
Remove the --no-recreate flag from your docker-compose command and execute the command again:
$ docker-compose up -d
--no-recreate is used to prevent accidental updates.
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers. To prevent Compose from picking up changes, use the --no-recreate flag.
(From the official Docker docs.)
I had a similar issue;
dcdown --remove-orphans
worked for me.

Docker - Mount a volume from a container to another (equivalent of volumes_from) in docker-compose 3

I have two containers: nginx & angular. The angular container contains the code and is automatically pulled from the registry when there is a new version (with watchtower).
I set up a shared volume between angular & nginx to share the code from angular to nginx.
### Angular #########################################
angular:
  image: registry.gitlab.com/***/***:staging
  networks:
    - frontend
    - backend
  volumes:
    - client:/var/www/client

### NGINX Server #########################################
nginx:
  image: registry.gitlab.com/***/***/***:staging
  volumes:
    - client:/var/www/client
  depends_on:
    - angular
  networks:
    - frontend
    - backend

volumes:
  client:

networks:
  backend:
  frontend:
When I build and run the environment for the first time, everything works.
The problem is that when there is a new version of the client, the image is pulled and the container is re-built, so the new code version is inside the angular container, but the nginx container still has the old version of the client code.
The shared volume does not let me do what I want because we cannot specify which container is the source. Is it possible to mount a volume from one container to another?
Thanks in advance.
EDIT
The angular container is only here to serve the files. We could rsync the built application to the server on the host machine and then mount the volume into the container (host -> guest), but that would go against our CI process: build app -> build image -> push to registry -> watchtower pulls the new image.
Docker volumes are not intended to share code, and I'd suggest reconsidering this workflow.
The first time you launch a container with a volume, and only the first time and only if the volume is empty, Docker will populate it with content from the container image. Since volumes are intended to hold data, and the application is likely to change the data that gets persisted, Docker doesn't overwrite the volume contents when the container is restarted; whatever was in the volume directory remains unchanged.
In your setup that means this happens:
You start the angular container for the first time, and since the client named volume is empty, Docker copies content into it.
You start the nginx container.
You delete and restart the angular container; but since the client named volume is no longer empty, Docker leaves the old content there.
The nginx container still sees the old content.
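You can see this copy-on-first-use behaviour with a throwaway volume (an illustrative experiment, not part of the original setup):

docker volume create demo
docker run --rm -v demo:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html
# first use: the image's files are copied into the empty volume
docker run --rm -v demo:/data alpine touch /data/marker
docker run --rm -v demo:/usr/share/nginx/html nginx:alpine ls /usr/share/nginx/html
# later uses: the volume keeps whatever it already contains
docker volume rm demo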
For a typical browser application, you don't actually need a "program" running: once you've run through a Typescript/Webpack/... sequence, the output is a collection of totally static files. In the case of Angular, there is an Ahead-of-Time compiler that produces these static files. The sequence I'd recommend here is:
Check out your application source tree locally.
Develop your browser application in isolation, using developer-oriented tools like ng serve or npm start. Since this is all running locally, you don't need to fight with anything Docker-specific (filesystem mappings, permissions, port mappings, ...); it is a totally normal Javascript development sequence. The system components you need for this are just Node; it is strictly easier than installing and configuring Docker.
Compile your application to static files with the Angular AOT compiler, Webpack, or npm run build.
Publish those static files to a CDN; or bind-mount them into an nginx container; or maybe build them into a custom image.
In the last case you wouldn't use a named Docker volume. Instead you'd mount the local filesystem into the container. A complete docker-compose.yml file for this case could look like:
version: '3'
services:
  nginx:
    image: registry.gitlab.com/***/***/***:staging
    volumes:
      - ./client:/var/www/client
    ports:
      - '8000:80'
From your comment:
There is no program running for the client; the CI compiles the app and builds the custom image which COPYs the application files into /var/www/client. Then watchtower pulls this new image and restarts the container. The container only runs as a daemon with (tail -f /dev/null & wait).
Looking at this from a high level, I don't see any need to have two containers or volumes at all. Simply build your application with a multi-stage build that generates an nginx image with the needed content:
FROM your_angular_base AS build
COPY src /src
RUN ...   # steps to compile your code

FROM nginx_base AS release
...
COPY --from=build /var/www/client/ /var/www/client/
...
Then your compose file is stripped down to just:
...
### NGINX Server #########################################
nginx:
image: registry.gitlab.com/***/***/***:staging
networks:
- frontend
- backend
networks:
backend:
frontend:
If you do find yourself in a situation where a volume needs to be shared between two running containers, and the volume needs to be updated with each deploy of one of the images, then the best place for that update is an entrypoint script that copies files from the image into the volume. I have an example of this in my docker-base repo with the save-volume and load-volume scripts.
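Those scripts aren't reproduced here, but the general shape of such an entrypoint, with illustrative paths, would be:

#!/bin/sh
# entrypoint (sketch): refresh the shared volume from the image on every start,
# then hand off to the container's real command
rm -rf /shared/client/*
cp -a /var/www/client/. /shared/client/
exec "$@"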

Recompiling VueJS app before docker-compose up

I want to deploy a Vue.js app inside a Docker nginx container, but before that container runs, the Vue.js source has to be compiled via npm run build. I want the compilation to run in a container and then exit, leaving only the compiled result for the nginx container.
Every time docker-compose up is run, the Vue.js app has to be recompiled, because there is a .env file on the host OS that has to be volume mounted and the variables in it could be updated.
The ideal way, I think, would be some way of creating stages for docker-compose, like in GitLab CI, so there would be a build stage and the nginx container would start once that's finished. But when I looked this up I couldn't see a way to do it.
What would be the best way to compile my vueJS app every time docker-compose up is run?
If you're already building your Vue.js app into an image (with a Dockerfile), you can make use of the build directive in your docker-compose.yml file. That way, you can use docker-compose build to build the images manually, or use up --build to build them just before the containers launch.
For example, this Compose file defines a service built from a Dockerfile instead of a prebuilt image:
version: '3'
services:
  vueapp:
    build: ./my_app # There should be a Dockerfile in this directory
That means I can both build containers and run services separately:
docker-compose build
docker-compose up
Or, I can use the build-before-run option:
# Build containers, and recreate if necessary (build cache will be used)
docker-compose up --build
If your .env file changes (and the containers don't pick up changes on restart), you might consider defining the variables in the container build file instead. Otherwise, consider putting the .env file into a directory and mounting the directory, not the file, because some editors write via a swap file and change the inode, and this breaks the mount. If you mount a directory and change files within that directory, the changes will be reflected in the container, because the parent directory's inode didn't change.
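As a sketch of the directory-mount approach (service name and paths are illustrative):

version: '3'
services:
  vueapp:
    build: ./my_app
    volumes:
      # mount the directory that contains .env, not the file itself, so an editor
      # replacing the file (new inode) does not break the mount
      - ./config:/app/config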
I ended up having an nginx container that reads the files from a volume mount, and a container that builds the app and places the files in the same volume mount. While the app is compiling, nginx serves the old version, and when the compilation is finished the files get replaced with the new ones.
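That arrangement could be expressed in a compose file along these lines (service names, images, and paths are assumptions, not the poster's actual file):

version: '3'
services:
  builder:
    build: ./app            # its CMD runs npm run build and copies the output into the volume, then exits
    volumes:
      - built:/output
  nginx:
    image: nginx:alpine
    volumes:
      - built:/usr/share/nginx/html:ro
    ports:
      - '8080:80'
volumes:
  built: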
