I want to have two docker-compose files, where one overrides another.
(The motivation comes from Docker Compose Docs)
The use case comes from the buildbot environment. The first docker-compose file should define a simple service, the service that is going to be tested. Let's take:
version: '2'
services:
  service-node:
    build:
      context: ./res
      dockerfile: Dockerfile
    image: my/server
    env_file: .env
The second docker-compose file (let's name it docker-compose.test.yml) overrides service-node to add a buildbot worker feature, and creates a second container, the buildbot master node, that is going to control the testing machinery. Let's take:
version: '2'
services:
  service-node:
    build:
      context: ./res
      dockerfile: buildbot.worker.Dockerfile
    image: my/buildbot-worker
    container_name: bb-worker
    env_file: ./res/buildbot.worker.env
    environment:
      - BB_RES_DIR=/var/lib/buildbot
    networks:
      testlab:
        aliases:
          - bb-worker
    volumes:
      - ./vol/bldbot/worker:/home/bldbotworker
    depends_on:
      - bb-master
  bb-master:
    build:
      context: ./res
      dockerfile: buildbot.master.Dockerfile
    image: my/buildbot-master
    container_name: bb-master
    env_file: ./res/buildbot.master.env
    environment:
      - BB_RES_DIR=/var/lib/buildbot
    networks:
      - testlab
    expose:
      - "9989"
    volumes:
      - ./vol/bldbot/master:/var/lib/buildbot
networks:
  testlab:
    driver: bridge
Generally this configuration works: the command
docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
builds both images and runs both containers, but there is one shortcoming: the command
docker-compose ps
shows only one service, bb-worker. At the same time
docker ps
shows both.
Furthermore, the command
docker-compose down
stops only one service, and outputs the message/warning Found orphan containers. Of course, the message refers to bb-master.
How can I override the basic docker-compose.yml file so that I can add an additional, non-orphan service?
You need to run all docker-compose commands with the same -f flags, e.g.:
docker-compose -f docker-compose.yml -f docker-compose.test.yml down
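The same applies to the other commands from the question, e.g.:
docker-compose -f docker-compose.yml -f docker-compose.test.yml ps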
Alternatively, you can make this the default by writing the following to a .env file in the same folder:
COMPOSE_FILE=docker-compose.yml:docker-compose.test.yml
NOTE:
On Windows you need to use ";" as the separator (#louisvno).
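For example, the equivalent .env entry on Windows would look like this (same file names as above; the separator character can also be changed via the COMPOSE_PATH_SEPARATOR variable):
COMPOSE_FILE=docker-compose.yml;docker-compose.test.yml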
Related
I'm running docker compose as follows:
docker-compose -f docker-compose.dev.yml up --build -d
the contents of docker-compose.dev.yml are:
version: '3'
services:
  client:
    container_name: client
    build:
      context: frontend
    environment:
      - CADDY_SUBDOMAIN=xxx
      - PRIVATE_IP=xxx
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - express
    volumes:
      - /home/ec2-user/.caddy:/root/.caddy
  express:
    container_name: express
    build: express
    environment:
      - NODE_ENV=development
    restart: always
Then I want to create images from these containers to use them on a testing server, by pushing them to AWS ECR and pulling them on the test server, to avoid the time of building the images all over again. Simply using docker commit did not work.
What is the correct approach to creating images from the output of docker-compose?
Thanks.
You should basically never use docker commit. The standard approach is to describe how to build your images using a Dockerfile, and check that file into source control. You can push the built image to a registry like Docker Hub, and you can check out the original source code and rebuild the image.
The good news is that you basically have this setup already. Each of your Compose services has a build: block that describes how to build the image. So it's enough to run
docker-compose build
and you'll get a separate Docker image for each component.
Often if you're doing this you'll also want to push the images to some Docker registry. In the Compose setup, you can specify an image: for each service as well. If you have both build: and image:, that specifies the image name to use for the built image (otherwise Compose will pick one based on the project name).
version: '3.8'
services:
  client:
    build:
      context: frontend
    image: registry.example.com/project/frontend
    et: cetera
  express:
    build: express
    image: registry.example.com/project/express
    et: cetera
Then you can have Compose both build and push the images:
docker-compose build
docker-compose push
One final technique that can be useful is to split the Compose setup into two files. The main docker-compose.yml file has the setup you'd need to run the set of containers, on any system, with access to the container registry. A separate docker-compose.override.yml file would support developer use where you have a copy of the source code as well. If you're using Compose for deployment, you only need to copy the main docker-compose.yml file to the target system.
# docker-compose.yml
version: '3.8'
services:
  client:
    image: registry.example.com/project/frontend
    ports: [...]
    environment: [...]
    restart: always
    # volumes: [...]
  express:
    image: registry.example.com/project/express
    ports: [...]
    environment: [...]
    restart: always
# docker-compose.override.yml
version: '3.8'
services:
  client:
    build: frontend
    # all other settings come from main docker-compose.yml
  express:
    build: express
    # all other settings come from main docker-compose.yml
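As a sketch of how that split is typically used (plain docker-compose commands, nothing project-specific assumed): during development both files are picked up automatically, while on the deployment target only the main file is copied, so the containers run from the pushed images.
# development machine: docker-compose.yml + docker-compose.override.yml are read together
docker-compose build
docker-compose push
docker-compose up -d

# target system: only docker-compose.yml is present
docker-compose pull
docker-compose up -d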
For some reason, I need to create two containers from the same image. But when I started the second one, it just restarted the first one's container.
The first yml file:
version: "3.1"
services:
php:
image:php:php73-fpm
restart: always
ports:
- "9000:9000"
- "9501:9501"
volumes:
- $PWD/../:/var/www/html/
networks:
- app_net
container_name: php
networks:
app_net:
driver: bridge
The second yml file:
version: "3.1"
services:
php:
image:php:php73-fpm
restart: always
ports:
- "19000:19000"
- "19501:19501"
volumes:
- $PWD/../:/var/www/html/
networks:
- app_net2
container_name: php73
networks:
app_net2:
driver: bridge
When I run docker-compose up -d to start the first one:
$ cd ~/Document/php/work/docker/
$ docker-compose up -d
Creating network "docker_app_net" with driver "bridge"
Creating php ... done
Then I switch to the directory of the second yml file:
$ cd ../../private/docker/
$ docker-compose up -d
Recreating php ... done
Compose has a notion of a project name. By default the project name is the basename of the directory containing the docker-compose.yml file. In your example both directories are named docker (even if they're in different parent directories) so Compose looks for a project named docker and a container named php, and finds a match.
There are four ways to override this:
Rename one of the directories.
Set the COMPOSE_PROJECT_NAME environment variable.
Create a .env file in the current directory, and set COMPOSE_PROJECT_NAME there.
Use the docker-compose -p option (on every docker-compose command).
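For example, a minimal sketch of the third option (the project name php73 here is just an illustrative choice), placed in a .env file next to the second docker-compose.yml:
COMPOSE_PROJECT_NAME=php73
The fourth option is equivalent to running docker-compose -p php73 up -d, and the -p flag has to be repeated on every later docker-compose command for that stack.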
Within your docker-compose.yml file, the second part of ports: needs to match the port the container is actually listening on; this is allowed to be different from the first part. So use the same container ports 9000/9501 in both files.
Another consequence of the Compose project naming is that the standard names of containers, volumes, and networks that Compose creates will be prefixed with the project name. If the project name (current directory name) is docker2, and you reduce the Compose file to
version: "3.1"
services:
php:
build: .
restart: always
ports:
- "19000:9000"
- "19501:9501"
# no manual container_name: or networks:
The container will be named docker2_php_1, and it will be attached to a network named docker2_default; these will be different from the container/network created in the docker1 project/directory.
You can't have two containers with the same name. Since both names are just php, Docker thought they were settings that were supposed to be merged for the same container. Rename one of them.
Docker doesn't use the latest code after running git checkout <non_master_branch>, even though I can see it in VS Code.
I am using the following docker-compose file:
version: '2'
volumes:
  pgdata:
  backend_app:
services:
  nginx:
    container_name: nginx-angular-dev
    image: nginx-angular-dev
    build:
      context: ./frontend
      dockerfile: /.docker/nginx.dockerfile
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web
  web:
    container_name: django-app-dev
    image: django-app-dev
    build:
      context: ./backend
      dockerfile: /django.dockerfile
    command: ["./wait-for-postgres.sh", "db", "./django-entrypoint.sh"]
    volumes:
      - backend_app:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file: .env
    environment:
      FRONTEND_BASE_URL: http://192.168.99.100/
      BACKEND_BASE_URL: http://192.168.99.100/api/
      MODE_ENV: DOCKER_DEV
  db:
    container_name: django-db
    image: postgres:10
    env_file: .env
    volumes:
      - pgdata:/var/lib/postgresql/data
I have tried docker-compose build --no-cache, followed by docker-compose up --force-recreate, but it didn't solve the problem.
What is the root of my problem?
Your volumes: are causing problems. Docker volumes aren't intended to hold code, and you should delete the volume declarations that mention backend_app:.
Your docker-compose.yml file says in part:
volumes:
  backend_app:
services:
  web:
    volumes:
      - backend_app:/code
backend_app is a named volume: it keeps data that must be persisted across container runs. If the volume doesn't exist yet, then the first time it is used data will be copied into it from the image; after that, Docker considers it to contain critical user data that must not be updated.
If you keep code or libraries in a Docker volume, Docker will never update it, even if the underlying image changes. This is a common problem in JavaScript applications that mount an anonymous volume on their node_modules directory.
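For illustration only, that JavaScript anti-pattern typically looks something like this (a hypothetical service, not taken from your file):
services:
  app:
    build: .
    volumes:
      - .:/app              # bind-mounts the host source tree over the image's code
      - /app/node_modules   # anonymous volume: populated once from the image, never updated by later builds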
As a temporary workaround, docker-compose down -v will delete all of the volumes, including the one with your code in it, and the next time you start the stack the volume will be recreated from the image.
The best solution is to simply not use a volume here at all. Delete the lines above from your docker-compose.yml file. Develop and test your application in a non-Docker environment, and when you're ready to do integration testing, run docker-compose up --build. Your code will live in the image, and an ordinary docker build will produce a new image with new code.
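A minimal sketch of the corrected web service, assuming your backend Dockerfile already copies the code into /code (only the volume-related lines change; everything else stays as in your file):
services:
  web:
    container_name: django-app-dev
    image: django-app-dev
    build:
      context: ./backend
      dockerfile: /django.dockerfile
    command: ["./wait-for-postgres.sh", "db", "./django-entrypoint.sh"]
    # no volumes: block here; the code now lives in the image
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file: .env
# and remove backend_app: from the top-level volumes: section (keep pgdata: for the database)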
I am trying to allow nginx to proxy between multiple containers while also accessing the static files from those containers.
To share volumes between containers created using docker compose, the following works correctly:
version: '3.6'
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: webtest
    command: ./start.sh
    volumes:
      - .:/code
      - static-files:/static/teststaticfiles
  nginx:
    image: nginx:1.15.8-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx-config:/etc/nginx/conf.d
      - static-files:/static/teststaticfiles
    depends_on:
      - web
volumes:
  static-files:
However, what I actually require is for the nginx service to live in a separate compose file, in a completely different folder. In other words, the docker-compose up commands would be run separately. I have tried the following:
First compose file:
version: '3.6'
services:
  web:
    build:
      context: .
      dockerfile: ./Dockerfile
    image: webtest
    command: ./start.sh
    volumes:
      - .:/code
      - static-files:/static/teststaticfiles
    networks:
      - directorylocation-nginx_mynetwork
volumes:
  static-files:
networks:
  directorylocation-nginx_mynetwork:
    external: true
Second compose file (ie: nginx):
version: '3.6'
services:
  nginx:
    image: nginx:1.15.8-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx-config:/etc/nginx/conf.d
      - static-files:/static/teststaticfiles
    networks:
      - mynetwork
volumes:
  static-files:
networks:
  mynetwork:
The above two files work correctly in the sense that the site can be viewed. The problem is that the static files are not available in the nginx container. The site therefore displays without any images etc.
One workaround which works correctly, found here, is to change the nginx container's static files volume to instead be as follows:
- /var/lib/docker/volumes/directory_static-files/_data:/static/teststaticfiles
The above works correctly, but it seems 'hacky' and brittle. Is there another way to share volumes between containers which are housed in different compose files, without needing to map the /var/lib/docker/volumes directory?
By separating the 2 docker-compose.yml files as you did in your question, 2 different volumes are actually created; that's the reason you don't see the data from the web service inside the nginx service's volume: they are simply 2 different volumes.
Example : let's say you have the following structure :
example/
|- web/
|  |- docker-compose.yml   # your first docker compose file
|- nginx/
|  |- docker-compose.yml   # your second docker compose file
Running docker-compose up from the web folder (or docker-compose -f web/docker-compose.yml up from the example directory) will actually create a volume named web_static-files (the name of the volume defined in the docker-compose.yml file, prefixed by the name of the folder where this file is located).
So, running docker-compose up from the nginx folder will actually create nginx_static-files instead of re-using web_static-files as you want.
You can use the volume created by web/docker-compose.yml by specifying in the 2nd docker compose file (nginx/docker-compose.yml) that this is an external volume, and its name :
volumes:
  static-files:
    external:
      name: web_static-files
Note that if you don't want the volume (and all other resources) to be prefixed by the folder name (the default), but by something else, you can add the -p option to the docker-compose command:
docker-compose \
-f web/docker-compose.yml \
-p abcd \
up
This command will now create a volume named abcd_static-files (that you can use in the 2nd docker compose file).
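With that -p abcd project name, the 2nd compose file would then reference the renamed volume; a sketch mirroring the external block above:
volumes:
  static-files:
    external:
      name: abcd_static-files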
You can also define the volume creation in its own docker-compose file (like volumes/docker-compose.yml):
version: '3.6'
volumes:
  static-files:
And reference this volume as external, with the name volumes_static-files, in the web and nginx docker-compose.yml files:
volumes:
  volumes_static-files:
    external: true
Unfortunately, you cannot set the volume name in docker-compose; it will be automatically prefixed. If this is really a problem, you can also create the volume manually (docker volume create static-files) before running any docker-compose up command (I do not recommend this solution, though, because it adds a manual step that can be forgotten if you reproduce your deployment in another environment).
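If you did go the manual route anyway, a minimal sketch would be to create the volume once and then mark it as external (Compose then uses the exact, unprefixed name) in both compose files:
docker volume create static-files

# in both web/docker-compose.yml and nginx/docker-compose.yml
volumes:
  static-files:
    external: true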
I'm trying to migrate working Docker config files (Dockerfile and docker-compose.yml) so that they deploy my working local Docker configuration to Docker Hub.
I've tried multiple config file settings.
I have the following Dockerfile and, below, the docker-compose.yml that uses it. When I run "docker-compose up", I successfully get two containers running that can either be accessed independently or talk to each other via the "db" hostname and the database's "container_name". So far so good.
What I cannot figure out is how to take this configuration (the files below) and modify it so I get the same behavior using Docker Hub. Being able to have working local containers is necessary for development, but others need to use these containers from Docker Hub, so I need to deploy there.
--
Dockerfile:
FROM tomcat:8.0.20-jre8
COPY ./services.war /usr/local/tomcat/webapps/
--
docker-compose.yml:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8089:8080"
    volumes:
      - /Users/user/Library/apache-tomcat-9.0.7/conf/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml
    depends_on:
      - db
  db:
    image: mysql:5.7
    container_name: test-mysql-docker
    ports:
      - 3307:3306
    volumes:
      - ./ZipCodeLookup.sql:/docker-entrypoint-initdb.d/ZipCodeLookup.sql
    environment:
      MYSQL_ROOT_PASSWORD: "thepass"
I expect to be able to run these containers from Docker Hub images, but I cannot see how these files need to be modified to get that. Thanks.
Add an image attribute.
app:
  build:
    context: .
    dockerfile: Dockerfile
  ports: [...]
  image: docker-hub-username/app
Replace "docker-hub-username" with your username. Then run docker-compose push app