Docker Compose: Change COMPOSE_PROJECT_NAME without rebuilding the application - docker

Summary:
I have an application X and I want to deploy multiple instances of it (port numbers will be handled by an .env file) on the same OS without starting a build for each instance.
What I tried:
So I managed to change the container_name of a container dynamically (by having the user edit the .env file). But then I cannot run 5 instances at the same time: even if the ports are different, Docker just stops the first container and re-creates it for the second.
Next I came across COMPOSE_PROJECT_NAME, which seems to work BUT starts a new build:
COMPOSE_PROJECT_NAME=hello-01
docker-compose up
Creating network "hello-01_default" with the default driver
Building test
Step 1/2 : FROM ubuntu:latest
---> 113a43faa138
Step 2/2 : RUN echo Hello
---> Using cache
---> ba846acc19e5
Successfully built ba846acc19e5
Successfully tagged hello-01_test:latest
WARNING: Image for service test was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating hello-01_test ... done
Attaching to hello-01_test
hello-01_test exited with code 0
COMPOSE_PROJECT_NAME=hello-02
docker-compose up
Creating network "hello-02_default" with the default driver
Building test
Step 1/2 : FROM ubuntu:latest
---> 113a43faa138
Step 2/2 : RUN echo Hello
---> Using cache
---> ba846acc19e5
Successfully built ba846acc19e5
Successfully tagged hello-02_test:latest
WARNING: Image for service test was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating hello-02_test ... done
Attaching to hello-02_test
hello-02_test exited with code 0
Source files
docker-compose.yml
version: '3'
services:
  test:
    container_name: "${COMPOSE_PROJECT_NAME}_test"
    build: .
.env
COMPOSE_PROJECT_NAME=hello-02
Dockerfile
FROM ubuntu:latest
RUN echo Hello
Ubuntu 18.04.1 LTS
Docker version 18.06.0-ce, build 0ffa825
docker-compose version 1.21.2, build a133471

By changing only the container name without providing an image: reference, the compose file has no idea that you've already built that image. So if you build that Docker image as some local image, e.g. example/image/local, you can add image: example/image/local to your docker-compose file and then spawn docker-compose up -d many times, changing the name with an environment variable as in your example.
However, you might want to look into using replicas instead of turning this into a manual effort that goes beyond the one-line docker-compose up you would otherwise get.
https://docs.docker.com/compose/compose-file/#short-syntax
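A minimal sketch of that idea, assuming the image is built and tagged once under a hypothetical name example/image/local:
docker-compose.yml
version: '3'
services:
  test:
    image: example/image/local
    container_name: "${COMPOSE_PROJECT_NAME}_test"
Build the image once, then start as many instances as needed without triggering a rebuild:
docker build -t example/image/local .
COMPOSE_PROJECT_NAME=hello-01 docker-compose up -d
COMPOSE_PROJECT_NAME=hello-02 docker-compose up -d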

Related

Manual build works, while build via docker-compose fails

Installed Versions
Docker: 19.03.6, build 369ce74a3c
Docker Compose: v2.13.0
OS: Ubuntu 18.04.2 LTS
Docker definitions
docker-compose.yml
version: "2.3"
services:
builder:
build:
context: ./
dockerfile: Dockerfile
args:
- NODE_VERSION=${NODE_VERSION:-12.22.7}
image: redacted/node:${NODE_VERSION}
volumes:
- ./:/code
environment:
- BUILDKIT_PROGRESS=plain
.env
CLIENT=${CLIENT_PREFIX:-xx}
PUBLIC_URL=/${CLIENT}-dashboard
REACT_APP_NODEJS_SERVER=${CLIENT}-server
NODE_VERSION=12.22.7
Dockerfile
ARG NODE_VERSION="12.22.7"
FROM node:${NODE_VERSION}
VOLUME ["/code"]
WORKDIR /code
CMD "/code/build_ui.sh"
Issue description
Our project requires multiple versions of node & npm installed. To avoid compatibility issues, we are trying to use docker to stabilize the versions we need.
We use the below command to run the build for our application:
docker-compose run --rm builder
This works on some of our servers, but on some servers, I get either of the below errors:
failed to solve: failed to solve with frontend dockerfile.v0: failed to build LLB: failed to load cache key: rpc error: code = Unknown desc = error getting credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
I fixed this by following the guide here. However, given that I was trying to pull node, which is a public repo, I don't understand why Docker was attempting a login. And why didn't this happen on all servers?
docker run: Error response from daemon: No command specified
I fixed this temporarily by manually building the image using the docker build command. But I really want docker-compose to be able to build the image if it doesn't exist.
I was expecting docker-compose to build the image on the first run without issues and run build_ui.sh on container execution.
When I get the above errors, if I manually build the Docker image (instead of waiting for docker-compose to build it) using the below command, and then use the docker-compose run command, it works.
docker build -t redacted/node:12.22.7 .
I am trying to figure out why docker-compose is not building the image correctly when the image doesn't exist.
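For reference, a minimal sketch of the workaround described above, combining the two commands so the image exists before docker-compose tries to use it (the tag matches the image: line in the compose file):
# Build the image explicitly first, then run the builder service against it
docker build -t redacted/node:12.22.7 .
docker-compose run --rm builder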

Docker compose couldn't run static html pages

I have developed some static web pages using jQuery & Bootstrap. Here follows the folder structure.
Using the below commands I am able to build and run the Docker image.
Build the image
docker build -t weather-ui:1.0.0 .
Run the docker image
docker run -it -p 9080:80 weather-ui:1.0.0
This works fine and I am able to see the pages at http://docker-host:9080.
But I would like to create a docker-compose setup for it. I have created a docker-compose file like below:
version: '2'
services:
  weather-ui:
    build: .
    image: weather-ui:1.0.0
    volumes:
      - .:/app
    ports:
      - "9080:9080"
The above compose file does not work; it gets stuck:
$docker-compose up
Building weather-ui
Step 1 : FROM nginx:1.11-alpine
---> bedece1f06cc
Step 2 : MAINTAINER ***
---> Using cache
---> ef75a70d43e8
Step 3 : COPY . /usr/share/nginx/html
---> 6fbc3a1d4aff
Removing intermediate container 2dc46f1f751d
Successfully built 6fbc3a1d4aff
WARNING: Image for service weather-ui was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Recreating weatherui_weather-ui_1 ...
Recreating weatherui_weather-ui_1 ... done
Attaching to weatherui_weather-ui_1
It gets stuck on the above line and I really don't know why.
Any pointers or hints to resolve this issue would be great.
As per Antonio's edit:
I can see the running container,
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69ea4ff1a3ea weather-ui:1.0.2 "nginx -g 'daemon ..." 6 seconds ago Up 5 seconds 80/tcp, 443/tcp, 0.0.0.0:9080->9080/tcp weatherui_weather-ui_1
But when launching the page I cannot see anything. It says the site can't be reached.
docker-compose up builds your Docker container (if not already built) and attaches the container to your console.
If you open your browser and go to http://localhost:9080, you should see your website.
You don't need to map a volume (volumes: - .:/app) in docker-compose.yml, because you already copy the static files in the Dockerfile:
COPY . /usr/share/nginx/html
If you want to launch your container in the background (in "detached" mode), add the -d option: docker-compose up -d.
By default docker-compose does not rebuild the container if it already exists; to rebuild the container each time, add the --build option: docker-compose up -d --build.
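One more thing worth checking, based on the docker ps output above: the nginx image listens on port 80 inside the container, while the compose file maps 9080:9080, so nothing answers on the published port. A sketch of the ports mapping matching the working docker run -it -p 9080:80 command:
version: '2'
services:
  weather-ui:
    build: .
    image: weather-ui:1.0.0
    ports:
      - "9080:80"   # host port 9080 -> container port 80, where nginx listens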

Docker COPY not updating files when rebuilding container

I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives, and is shared with other containers using volumes_from.
docker-compose-staging.yml:
version: '2'
services:
  nginx:
    build:
      context: ./
      dockerfile: docker/staging/nginx/Dockerfile
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - app
  php:
    build:
      context: ./
      dockerfile: docker/staging/php/Dockerfile
    expose:
      - 9000
    volumes_from:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/staging/app/Dockerfile
    volumes:
      - /var/www/html
    entrypoint: /bin/bash
This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.
docker/staging/app/Dockerfile:
FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code
This works when I first build and deploy the application. The code in my public and code directories are present and correct on the remote server. I deploy using the following command:
docker-compose -f docker-compose-staging.yml up -d
However, next I try adding a file to my local public directory, then run the following command to rebuild the updated code:
docker-compose -f docker-compose-staging.yml build app
The output from this rebuild suggests that the COPY commands were successful:
Building app
Step 1 : FROM php:7.1-fpm
---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479
I then deploy using:
docker-compose -f docker-compose-staging.yml up -d
With the following output:
Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1
However when I log into the remote containers, the file changes are not present.
I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.
This is because of the cache.
Run:
docker-compose build --no-cache
This will rebuild images without using any cache.
And then:
docker-compose -f docker-compose-staging.yml up -d
I was struggling with migrations not being detected or applied. I found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested above solved the problem for me, but I find it cumbersome to have to remember when to do it and when not; e.g. Vue-related files seem to work just fine, but Django-related files don't.
So I figured why not try adjusting the Docker file to clean up the previous files before the copy:
RUN rm -rf path/to/your/app
COPY . path/to/your/app
Worked like a charm. Now it's part of the build, and all you need to do is run docker-compose up -d --build again. Files are up to date and you can run makemigrations and migrate against your containers.
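A short sketch of how that pattern could look in the staging Dockerfile from the question (paths taken from the COPY lines above; an illustration, not the author's exact file):
FROM php:7.1-fpm
# Remove any stale copies before copying in the current code
RUN rm -rf /var/www/html/public /var/www/html/code
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code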
I had a similar, if not the same, issue while working on a .NET Core application.
What I was trying to do was rebuild my application and get it to update my Docker image so that I could see my changes reflected in the containerized copy.
So I got going by removing the underlying image generated by docker-compose up, using this command to get my changes reflected:
docker rmi [imageId]
I believe there should be support for this in docker-compose, but this was enough for my needs at the moment.
Just leaving this here for when I come back to this page in two weeks.
You may not want to use docker system prune -f in this block.
docker-compose down --rmi all -v \
&& docker-compose build --no-cache \
&& docker-compose -f docker-compose-staging.yml up -d --force-recreate
I had the same issue because of shared volumes. For me the solution was to remove shared container using this command:
docker volume rm [VOLUME_ID]
Volume id or name you can find in "Mount" section using this command:
docker inspect [CONTAINER_ID]
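If it helps, the Mounts section can be printed on its own using Docker's built-in Go-template formatting:
docker inspect -f '{{ json .Mounts }}' [CONTAINER_ID]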
None of the above solutions worked for me, but what did finally work was the following steps:
Copy/move the file outside of the docker app folder
Delete the file you want to update
Rebuild the docker image without the updated file
Move the copied file back into the docker app folder
Rebuild the docker image again
Now the image will contain the updates to the file.
I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file not seeming to be copied into a rebuilt container, despite having turned off caching.
My build process differs slightly as I use Docker Hub's GitHub integration for automating image builds when new commits to the master branch are made. The build happens on Docker's servers rather than the locally built and pushed container image workflow.
What ended up working for me was to do a docker-compose pull to bring down into my local environment the most up-to-date versions of the containers defined in my .env file. Not sure if the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.
I'd also note that this process allowed me to turn auto-caching back on because the edited file was actually being detected by the Docker build process. I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
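A sketch of that workflow for images built remotely (for example by Docker Hub's automated builds) rather than locally:
# Fetch the latest images referenced by the compose file, then recreate the containers from them
docker-compose pull
docker-compose up -d --force-recreate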
I am not sure it is caching, because (a) whether the cache was used or not is usually noted in the build output, and (b) build should sense the changed content in your directory and invalidate the cache.
I would try to bring up the container on the same machine used to build it, to see whether it is updated or not. If it is, the changed image is not being propagated. I do not see any version tag used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image so the staging server can pull it from somewhere?
You are trying to update an existing volume with the contents from a new image; that does not work.
https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes
States:
Changes to a data volume will not be included when you update an image.
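Given that explanation, one hedged way to get the rebuilt image's contents into the stack is to take it down together with its volumes before bringing it back up; note this discards whatever is currently stored in the /var/www/html volume:
docker-compose -f docker-compose-staging.yml down -v
docker-compose -f docker-compose-staging.yml build app
docker-compose -f docker-compose-staging.yml up -d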

docker compose build single container

Using Compose, if I run docker-compose build, it will rebuild all the containers:
> docker-compose build
Building elasticsearch
Step 1 : FROM elasticsearch:2.1
---> a05cc7ed3f32
Step 2 : RUN /usr/share/elasticsearch/bin/plugin install analysis-phonetic
---> Using cache
---> ec07bbdb8a18
Successfully built ec07bbdb8a18
Building search
Step 1 : FROM php:5.5.28-fpm
---> fcd24d1058c0
...
Even when rebuilding using cache, this takes time. So my question is:
Is there a way to rebuild only one specific container?
Yes, use the name of the service:
docker-compose build elasticsearch
docker-compose up -d --no-deps --build <service_name>
Source
If you want to run and recreate a specific service inside your docker-compose file, you can do it the same way as #dnephin proposed, like:
$ docker-compose up -d --force-recreate --no-deps --build service_name
Suppose your docker-compose.yml file is like:
version: '3'
services:
  service_1:
    .....
  service_2:
    .....
You could add the --no-start flag to docker-compose up and start the service later, since you only want to build one service.
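A small sketch of that variant, using the service names from the example above: build one service without starting it or its dependencies, then start it when ready.
docker-compose up --no-start --no-deps --build service_1
docker-compose start service_1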

ordered build of nested docker images with compose

I am building a lamp with docker-compose.
In my docker-compose.yml I have the following:
ubuntu-base:
  build: ./ubuntu-base
webserver-base:
  build: ./webserver-base
webserver-base is derived from the ubuntu-base image.
In the webserver-base Dockerfile:
FROM docker_ubuntu-base
And ubuntu-base is built from:
FROM ubuntu:14.04
Now, if I run docker-compose up, it does not build the ubuntu-base image; it tries to build the webserver-base image and fails because it cannot find the ubuntu-base image.
Output:
$ docker-compose up -d
Building webserver-base
Step 1 : FROM docker_ubuntu-base
Pulling repository docker.io/library/docker_ubuntu-base
ERROR: Service 'webserver-base' failed to build: Error: image library/docker_ubuntu-base:latest not found
It all works if I build the ubuntu-base image manually first.
Why does it not build the ubuntu-base image first?
Sadly, build ordering is a missing feature in docker-compose that has been requested for many months now.
As a workaround you can link the containers like this:
ubuntu-base:
  build: ./ubuntu-base
webserver-base:
  build: ./webserver-base
  links:
    - ubuntu-base
This way ubuntu-base gets built before webserver-base.
First do a
docker-compose build ubuntu-base
But this will not create the image docker_ubuntu-base locally, because you do not have any build steps; only docker.io/ubuntu:14.04 will be downloaded.
If you add a build step like:
FROM ubuntu:14.04
RUN date
A docker_ubuntu-base image will be created.
So first do a:
docker-compose build ubuntu-base
This will create the image docker_ubuntu-base. Then you can do a docker-compose build.
But I would advise against this nested Docker image construction. It is cumbersome because, as #kev indicated, you have no control over the order of the builds. Why don't you create two independent Dockerfiles? Let Docker derive webserver-base from ubuntu-base by keeping the Dockerfile instructions as identical as possible and reusing the layers.
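A rough sketch of that suggestion, with two standalone Dockerfiles whose identical leading instructions let Docker reuse cached layers (the RUN lines are placeholders, not taken from the question):
# ubuntu-base/Dockerfile
FROM ubuntu:14.04
RUN apt-get update

# webserver-base/Dockerfile
FROM ubuntu:14.04
RUN apt-get update
RUN apt-get install -y apache2   # placeholder for webserver-specific setup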
