I have a docker-compose based application which I am deploying to a production server.
Two of its containers share a directory's contents using a data volume, like so:
...
services:
  service1:
    volumes:
      - server-files:/var/www
  service2:
    volumes:
      - server-files:/var/www
  db:
    volumes:
      - db-persistent:/var/lib/mysql

volumes:
  server-files:
  db-persistent:
service1's /var/www is populated when its image is built.
My understanding is that if I make changes to code stored in /var/www, then when I rebuild service1
its updates will be hidden by the existing server-files volume.
What is the correct way to update this deployment so that changes propagate with minimal
downtime and without deleting other volumes?
Edit
Just to clarify my current deploy process works as follows:
Update code locally and commit/push changes to GitHub
Pull changes on server
Run docker-compose build to rebuild any changed images
Run docker-compose up -d to recreate any updated containers
The issue is that changed code within /var/www is hidden by the already existing named volume server-files. My question is what is the best way to handle this update?
I ended up handling this by managing the database's volume db-persistent outside of docker-compose. Before running docker-compose up I created the volume manually by running docker volume create db-persistent, and in docker-compose.yml I marked the volume as external with the following configuration:
volumes:
  db-persistent:
    external: true
My deploy process now looks as follows:
Pull changes from Github
Run docker-compose build to automatically build any changed images.
Shutdown existing application and remove volumes by running docker-compose down -v
Run docker-compose up to start the application again.
In this new setup running docker-compose down -v only removes the server-files volume leaving the db-persistent volume untouched.
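Put together as commands, the new process looks roughly like this (the one-time volume creation happens outside of Compose, as described above):
docker volume create db-persistent   # one-time, managed outside of docker-compose
git pull
docker-compose build
docker-compose down -v               # removes only server-files; db-persistent is external
docker-compose up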
First of all, docker-compose isn't meant for production deployment. This issue illustrates one of the reasons why: no automatic rolling upgrades. Creating a single-node swarm would make your life easier. To deploy, all you would have to do is run docker stack deploy -c docker-compose.yml <stack-name>. However, you might have to tweak your compose file and do some initial setup.
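A rough sketch of that initial setup and a deploy, assuming a hypothetical stack name of myapp:
docker swarm init                                 # one-time: make this host a single-node swarm
docker stack deploy -c docker-compose.yml myapp   # deploy the stack
Re-running the same deploy command after building and pushing a new image tag picks up the new image, and Swarm handles updating the affected services.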
Second of all, you are misunderstanding how docker is meant to be used. Creating a volume binding for your application code is only a shortcut that you use in development so that you don't have to rebuild your image every time you change your code. When you deploy your application, however, you build a production image of your application that contains all the code needed to run.
Once this production image is built, you push it up to an image repository (probably docker hub). Your production server pulls the image from that repository, and uses it to create a container that runs your application.
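As a sketch (the image name and tag are placeholders, and the Dockerfile is assumed to COPY your code into /var/www at build time):
# on the build machine
docker build -t myuser/service1:1.2.0 .
docker push myuser/service1:1.2.0
# on the production server
docker pull myuser/service1:1.2.0
docker-compose up -d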
If you're pulling your application code onto your production server and building there anyway, then why use Docker at all? In that scenario, it's just making your life harder and adding extra steps when you could run everything directly on your host VM and write a simple script to stop your apps, pull your code, and restart your apps.
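For illustration, such a script could be as small as the following (the service name, path, and use of systemd are all assumptions):
#!/bin/sh
# hypothetical no-Docker deploy script
systemctl stop myapp
git -C /srv/myapp pull
systemctl start myapp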
Related
There are a few approaches to fix container startup order in docker-compose, e.g.
depends_on
docker-compose-wait
Docker Compose wait for container X before starting Y
...
However, if one of the services in a docker-compose file includes a build directive, it seems docker-compose will try to build the image first (basically ignoring depends_on, or rather interpreting depends_on as a start dependency, not a build dependency).
Is it possible for a build directive to specify that it needs another service to be up, before starting the build process?
Minimal Example:
version: "3.5"
services:
web:
build: # this will run before postgres is up
context: .
dockerfile: Dockerfile.setup # needs postgres to be up
depends_on:
- postgres
...
postgres:
image: postgres:10
...
Notwithstanding the general advice that programs should be written to handle the unavailability of services gracefully (at least for some time), are there any ways to allow builds to start only when other containers are up?
Some other related questions:
multi-stage build in docker compose?
Update/Solution: I solved the underlying problem by pushing all the required (database) setup into the CMD directive of a bootstrap container:
FROM undertest-base:latest
...
CMD ./wait && ./bootstrap.sh
where wait waits for postgres and bootstrap.sh contains the code for setting up the postgres database with fixtures, so that the overall system becomes fully testable after that script.
With that, setting up an ephemeral test environment with database setup becomes a simple docker-compose up again.
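The wait helper itself isn't shown here; a minimal sketch of such a script, assuming the postgres client tools (pg_isready) are available in the image and the database is reachable at the hostname postgres, could look like this:
#!/bin/sh
# wait: block until postgres accepts connections
until pg_isready -h postgres -p 5432 -q; do
  echo "waiting for postgres..."
  sleep 1
done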
There is no option for this in Compose, and also it won't really work.
The output of an image build is a self-contained immutable image. You can do things like docker push an image to a registry, and Docker's layer cache will avoid rebuilding an image that it's already built. So in this hypothetical setup, if you could access the database in a Dockerfile, but you ran
docker-compose build
docker-compose down -v
docker-compose up -d --build
the down -v step will remove the storage the database uses. While the up --build option will cause the image to be rebuilt, the layer cache will let the build sequence skip all of the steps and produce the same image as originally, and whatever changes you might have made to the database won't have happened.
At a more mechanical layer, the build sequence doesn't use the Compose-provided network, so you also wouldn't be able to connect to the database container.
There are occasional use cases where a dependency in build: would be handy, in particular if you're trying to build a base image that other images in your Compose setup share. But neither the stable Compose file v3 build: block nor the less-widely-supported Compose specification build: supports any notion of an image build depending on anything else.
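For the shared-base-image case, a common workaround is simply to build the base image in a separate step before invoking Compose (the image and file names here are hypothetical):
docker build -t myorg/app-base:latest -f Dockerfile.base .
docker-compose build   # service Dockerfiles can now start with FROM myorg/app-base:latest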
I'm building a NodeJS application on Docker in Swarm mode (single node). I'm using a bind mount volume for the NodeJS source code. Everything runs perfectly and I can see the output from NodeJS and Express on localhost, but when I change something in the NodeJS code (which is in a bind-mounted volume), nothing changes. I have to restart my service to observe the changes. Earlier, when I was working with Docker Compose only, this never happened, but now that I have switched to Swarm, I'm experiencing problems.
I'm using Docker 18 with Visual Studio Code 1.39 on macOS 10.14.6
Dockerfile
FROM node:12-alpine
WORKDIR /node-dir
COPY package*.json ./
RUN npm install
docker-compose.yml file
# Docker-compose.yml
version: '3.7'
services:
  node-service:
    image: node-img:1.0
    ports:
      - 80:3000
    working_dir: "/node-dir"
    volumes:
      - ./node-dir/source:/node-dir/source
    networks:
      - ness-net
    command: npm start

networks:
  ness-net:
I also read that it could be due to inodes: most editors break the inode link when saving a file. But it was working correctly under docker-compose with Visual Studio Code; its behaviour changed only under Docker Swarm.
Update: I served a static HTML file using Nginx with a bind mount, and I can easily change that file using VS Code and see it reflected. It's only NodeJS that is not detecting changes in a file.
If your volume mapping is correct, the source code changes should reach your node.js app container.
You can verify this by inspecting the source code inside the container after you make a change on the docker host.
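For example (the service-name filter and path are taken from the compose file above; the container ID is a placeholder):
docker ps --filter name=node-service     # find the running task container
docker exec -it <container-id> ls -l /node-dir/source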
I'm currently in development mode, and I have to test the source code
repeatedly so I want to use bind mounts to make development and
testing easier.
However, your source code change won't take effect until the node process inside the container reloads and picks up the changes.
In order to achieve this you have to use nodemon. Nodemon will pick up changes in the source code and reload the node process along with the changes.
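A minimal sketch of running the app under nodemon in the service definition (assuming nodemon is installed in the image and the entry point is server.js; the --legacy-watch polling flag often helps when file-change events don't make it across the mount):
# docker-compose.yml (excerpt)
node-service:
  command: npx nodemon --legacy-watch server.js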
Another, longer alternative would be building a new docker image and then updating your app using: docker service update --image=...
You can also use tilt to automate all of the above actions.
I am bringing up my project dependencies using docker-compose. So far this used to work
docker-compose up -d --no-recreate;
However, today I tried running the project again after a couple of weeks and I was greeted with this error message:
Creating my-postgres ... error
ERROR: for my-postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: for postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: Encountered errors while bringing up the project.
My docker-compose.yml file is
postgres:
  container_name: my-postgres
  image: postgres:latest
  ports:
    - "15432:5432"
Docker version is
Docker version 19.03.1, build 74b1e89
Docker compose version is
docker-compose version 1.24.1, build 4667896b
Intended behavior of this call is to:
make the container if it does not exist
start the container if it exists
just chill and do nothing if the container is already started
Docker Compose normally assigns a container name based on its current project name and the name of the services: block. Specifying container_name: explicitly overrides this, but it means you can’t launch multiple copies of the same Compose file with different project names (from different directories), because the container name you’ve explicitly chosen can only be used once.
You almost never care what the container name is explicitly. It only really matters if you’re trying to use plain docker commands to manipulate Compose-managed containers; it has no impact on inter-service communication. Just delete the container_name: line.
(For similar reasons you can almost always delete hostname: and links: sections if you have them with no practical impact on your overall system.)
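With that change, the snippet from the question would simply become the following, and Compose will generate a name like <project>_postgres_1 on its own; other services can still reach it at the hostname postgres:
postgres:
  image: postgres:latest
  ports:
    - "15432:5432"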
In my case I had moved the project to another directory.
When I tried to run docker-compose up it failed because of some conflicts.
I resolved them with the command docker system prune.
It's caused by being in a different directory than the one you last ran docker-compose up from. One option is to change back to the original directory. Or, if you've configured it as a systemd service, you can use systemctl.
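If changing directories back isn't convenient, pinning the Compose project name so it matches the existing containers is another option (the project name myproject is a placeholder for whatever the original directory was called):
docker-compose -p myproject up -d --no-recreate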
Well...the error message seems pretty straightforward to me...
The container name "/my-postgres" is already in use by container
If you just want to restart where you left, you should use docker-compose start.
Otherwise, just clean up your workspace before running it :
docker-compose down
docker-compose up -d
Remove the --no-recreate flag from your docker-compose command and execute the command again:
$ docker-compose up -d
--no-recreate is used to prevent accidental updates.
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers. To prevent Compose from picking up changes, use the --no-recreate flag.
Official Docker docs: link.
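In other words, per the behaviour described above:
docker-compose up -d                 # recreates containers whose image or config changed
docker-compose up -d --no-recreate   # reuses existing containers even if they have changed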
I had a similar issue.
docker-compose down --remove-orphans
That worked for me.
I had Docker for Windows, switched to Docker Toolbox, and am now back to Docker for Windows, and I ran into issues with volumes.
Before, volumes were working perfectly fine and my containers running nodemon/ts-node/CLI file watchers were restarting properly on source code changes, but now they don't at all, so it looks like file changes from the host are not propagated into the container.
This is docker-compose for one service:
api:
  build:
    context: ./api
    dockerfile: Dockerfile-dev
  volumes:
    - ./api:/srv
  working_dir: /srv
  links:
    - mongo
  depends_on:
    - mongo
  ports:
    - 3030:3030
  environment:
    MONGODB: mongodb://mongo:27017/api_test
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:api.mydomain.localhost
This is Dockerfile-dev:
FROM node:10-alpine
ENV NODE_ENV development
WORKDIR /srv
EXPOSE 3030
CMD yarn dev # simply nodemon, works when run from the host
Can anyone help with that?
The C drive is shared and verified with docker run --rm -v c:/Users:/data alpine ls /data, which shows the list of files properly.
I will really appreciate any help.
We experienced the exact same problems in our team while developing nodejs/typescript applications with Docker on top of Windows and it has always been a big pain. To be honest, though, Windows does the right thing by not propagating the change event to the containers (Linux hosts also do not propagate the fsnotify events to containers unless the change is made from within the container). So bottom line: I do not think this issue will be avoidable unless you actually change the files within the container instead of changing them on the docker host. You can achieve this with a code sync tool like docker-sync, see this page for a list of available options: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync
Because we struggled with such issues for a long time, a colleague and I started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast 2-way code sync between your local folders and folders within your dev containers (works with any Kubernetes cluster, any volume and even with ephemeral / non-persistent folders) and it is designed to work perfectly with hot reloading tools such as nodemon. Set up minikube or a cluster with a one-click installer on some public cloud, run devspace up inside your project and you will be ready to program within your DevSpace without ever having to worry about local Docker issues and hot reloading problems. Let me know if it works for you or if there is anything you are missing.
I got stuck on this recently (Feb 2020, Docker Desktop 2.2) and none of the basic solutions really helped.
However, when I tried WSL 2 and ran my docker-compose from inside the Ubuntu shell, it began to pick up file changes instantly. So if someone is observing this: try bringing Docker up from WSL 2.
Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources into a folder named build, as they are needed to run the server. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources, run docker-compose up --build, and have everything running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the docker container.
My first attempt that almost worked was the following:
version: "2"
services:
nginx:
volumes_from:
- node
node:
volumes:
- /code/build
When I first ran docker-compose build && docker-compose up, everything seemed to work fine: the volume was created from container A with the build files inside it, and container B could access them as expected.
However, the issue happens when the sources are updated. When that happens, the new build files do not replace the old ones inside the volume, because the existing volume seems to take priority. So after the first time, I always have old files for both container A and B.
I investigated a way to force the volume to be recreated from scratch every time I run docker-compose build but did not find anything. The only thing I found would be to use docker-compose stop && docker-compose rm, but it seems a bit hacky to do that every time, and in addition it leads to quite a long downtime compared to just replacing the existing container with a new version via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?
I'd redo the workflow, use a named volume that's mounted in multiple containers, and one of those containers is an updater that has the application build environment. Then on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
build:
driver: local
services:
nginx:
volumes:
- build:/code/build
updater:
volumes:
- build:/code/build
Then on any changes, you can run docker-compose run updater and it will push the latest changes to your volume, where nginx can use them without you ever stopping your other containers. Since it's a batch job that exits, even a docker-compose up would launch the updater again.
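The updater image itself isn't shown in the answer; a rough sketch of what it might look like, with the repo URL, build command, and output directory all placeholders:
# Dockerfile for the hypothetical updater service
FROM node:12-alpine
RUN apk add --no-cache git
# clone the sources at image build time; refresh and rebuild them on every container run
RUN git clone https://github.com/example/app.git /src
WORKDIR /src
CMD git pull && npm ci && npm run build && cp -r dist/. /code/build/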