Docker - Reuse docker-compose configurations for different projects

I have a large Docker project with Dockerfiles for nginx, apache2, varnish, redis configured and working well after weeks of changes and testing.
I am now at the point where I have set up the project to use docker-compose and override.yml files for easy setup:
I am trying to use the same docker-compose setup for multiple projects (websites).
Normal startup (using docker-compose.yml and optional docker-compose.override.yml)
docker-compose up -d
Custom startup (using specific docker-compose files)
docker-compose -f docker-compose.yml -f custom/docker-compose.website1.yml up -d
Both of these methods start up fine:
docker-compose ps
Ignore the fact that they show Exit 0 - I stopped them using docker-compose stop; the containers work fine
nginx-proxy /usr/bin/supervisord Exit 0
redis-cache /usr/bin/supervisord Exit 0
varnish-cache /usr/bin/supervisord Exit 0
web-server-apache2 /usr/bin/supervisord Exit 0
Now I want a second project (website) to use the same docker/docker-compose configuration setup:
docker-compose -f docker-compose.yml -f anothercustomfolder/docker-compose.website2.yml up -d
To my surprise, docker-compose recreated the existing containers and did not create a new set of containers:
See 'current setup' section for how I setup things.
Creating network "delete-network-frontend" with the default driver
Recreating nginx-proxy ... done
Recreating varnish-cache ... done
Recreating web-server ... done
Recreating redis-cache ... done
When running docker-compose ps in the second setup folder:
Note the names are not the same as above (this is the second test setup)
Name                        Command                State   Ports
---------------------------------------------------------------------------------------------
nginx-proxy-delete          /usr/bin/supervisord   Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
redis-cache-delete          /usr/bin/supervisord   Up      0.0.0.0:6379->6379/tcp
varnish-cache-delete        /usr/bin/supervisord   Up      0.0.0.0:6081->6081/tcp, 0.0.0.0:6082->6082/tcp
web-server-apache2-delete   /usr/bin/supervisord   Up      0.0.0.0:8080->8080/tcp
It appears docker-compose did two things: 1. recreated (replaced) the project 1 containers, reporting them as 'recreated' under their project 1 names, and 2. removed the project 1 containers and renamed them to the project 2 container names.
Current setup
I created a full Dockerfile project configured with docker-compose.yml and two override docker-compose files (docker-compose.website1.yml and docker-compose.website2.yml).
I made a complete copy of the working Dockerfile / docker-compose.yml project into a new folder; in other words, both of these use the same Docker setup but different docker-compose.yml override files:
/var/www/docker/site1
/var/www/docker/site2
Question
TLDR: How do I use a working docker-compose project on the same host operating system for multiple projects... without it replacing another project's containers?
I want to be able to use both at the same time, and for instance be able to see this:
Ignore the fact that the ports are the same here - I am aware they won't run at the same time; I will update the projects' custom docker-compose.yml files once this works
docker-compose ps
Name                        Command                State   Ports
---------------------------------------------------------------------------------------------
nginx-proxy                 /usr/bin/supervisord   Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
redis-cache                 /usr/bin/supervisord   Up      0.0.0.0:6379->6379/tcp
varnish-cache               /usr/bin/supervisord   Up      0.0.0.0:6081->6081/tcp, 0.0.0.0:6082->6082/tcp
web-server-apache2          /usr/bin/supervisord   Up      0.0.0.0:8080->8080/tcp
nginx-proxy-delete          /usr/bin/supervisord   Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
redis-cache-delete          /usr/bin/supervisord   Up      0.0.0.0:6379->6379/tcp
varnish-cache-delete        /usr/bin/supervisord   Up      0.0.0.0:6081->6081/tcp, 0.0.0.0:6082->6082/tcp
web-server-apache2-delete   /usr/bin/supervisord   Up      0.0.0.0:8080->8080/tcp
If anyone asks: why not just put the websites into the same (one) container?
In case someone asks this: I know I can add multiple websites under /etc/apache2/sites-enabled (or nginx) and add custom configuration files using ADD in the Dockerfile for each site, but with that method I cannot test slightly different setups.
Different setups can be selected by referencing a different image in the override docker-compose files, as sketched below.
For instance I can create a Dockerfile that installs all the php7.3 libraries required to run Magento 2.3, have another Dockerfile to test php7.4, another to run an older Magento 1 site on a PHP 5.6 installation, and so on.
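For illustration, a minimal override sketch (the php7.4 image tag and container name here are hypothetical, following the naming used in the solution below) is all it takes to point the same service at a different PHP build:
anothercustomfolder/docker-compose.website2.yml (sketch):
services:
  web_server:
    image: current_timezone/full-supervisord-web-server/php7.4:1.00
    container_name: web-server-apache2-php7.4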

Thanks to advice from David Maze, I struggled further with configuring the docker-compose setup to work with multiple projects.
Information based on docker-compose v1.25.0 (July 2020)
This discussion is especially important when you want to re-use (persist) your containers (start/stop instead of just up/down - deleting)
As initially pointed out in my question - if you try to create containers using docker-compose up -d, there are some pitfalls that the tool simply does not handle at the moment.
Pitfalls
PITFALLS OF CURRENT DOCKER-COMPOSE IMPLEMENTATION:
If you just use override docker-compose*.yml files with different container_name values (one per 'project'), with all files in the same folder:
docker-compose up will simply replace the existing containers, as explained in my question.
You can do the following: docker-compose -p CUSTOM_PROJECT_NAME -f file1.yml -f file2.yml up -d, but:
This on its own is useless - these containers will only work until you stop them. As soon as you want to do docker-compose start (to restart the existing container set) it simply fails with Error: No containers to start (a workaround using a .env file is sketched after this list).
If you use two different folders with the same docker-compose project (i.e. a cloned project), for instance ./dc-project1 and ./dc-project2, but with container_name fields inside the docker-compose.*.yml files:
When you try to run docker-compose -f f1.yml -f f2.yml up -d inside ./dc-project1 and then the same inside ./dc-project2, you will get the following error: You have to remove (or rename) that container to be able to reuse that name.
Similar issues will occur with your Docker network when you use override files:
I removed most of the custom settings to make the network setting clearer.
The network will be attached correctly from your override file on docker-compose up, but as soon as you docker-compose start it looks for the default network name from the default docker-compose.yml, or the docker-compose.override.yml file if it exists. In other words, it ignores your custom docker-compose override files (see example below):
docker-compose.yml:
networks:
  network_frontend:
    name: stage6-network-frontend
customfolder/docker-compose.custom.yml:
networks:
  network_frontend:
    name: magento2.3-network-frontend
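Regarding the -p pitfall above, there is a partial workaround that Compose itself supports: pin the project name in a .env file next to docker-compose.yml, so that every command - including start/stop - resolves the same project without remembering -p. A sketch (docker-compose v1.25 reads the .env file from the working directory; the value is an example):
# /var/www/docker/site1/.env
COMPOSE_PROJECT_NAME=website1
This does not fix the network-name issue just described, though - that still needs the base-file change shown in the solution below.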
SOLUTION
Example
Objective: to get docker-compose start/stop to work correctly with multiple setups (aka projects/websites/tools) using the same docker-compose project.
Suppose you have the following docker-compose files:
Main file: docker-compose.yml:
services:
  web_server:
    image: current_timezone/full-supervisord-web-server/php7.3:1.00
    container_name: web-server-apache2
    networks:
      - network_frontend
    build:
      context: "./all-services/"
      dockerfile: ./web-server/Dockerfile.webserver.apache2
      args:
    volumes:
      - website_data:/var/www/html
    ports:
      - "8080:8080"

networks:
  network_frontend:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.100.0.0/16
    name: stage6-network-frontend
    driver_opts:
      # Custom name for host-side network : for instance on Ubuntu : ip addr | ifconfig
      com.docker.network.bridge.name: docker-custom  # Seems limit of 15 characters only
and then an override file: customfolder/magento2.override.yml:
services:
  web_server:
    container_name: web-server-apache2-magento2.3.5
    networks:
      - network_frontend
    build:
      args:
    volumes:
      - website_data:/var/www/html
    ports:
      - "8080:8080"

networks:
  network_frontend:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.100.0.0/16
    driver_opts:
      # Custom name for host-side network : for instance on Ubuntu : ip addr | ifconfig
      com.docker.network.bridge.name: d-glo-femag2_35  # Seems limit of 15 characters only
    name: glo-magento2.3-network-frontend
Do the following:
Copy the full Docker project (Dockerfiles / ADDs / docker-compose.yml files etc.) into new separate folders:
/var/docker/project1
/var/docker/project2
Make sure the container_name entries in your override docker-compose.yml files are unique between the two projects.
In project1 run docker-compose -f docker-compose.yml -f customfolder/magento2.override.yml up -d && docker-compose stop, then navigate to project2 and do the same (see the combined sketch below).
Using the -p flag as David Maze suggested does not work on its own: on docker-compose start/stop the container config JSON files are still resolved via the ./foldername project name.
Since networks have similar issues on start/stop, before your custom name from the override file can be used correctly... you unfortunately need to copy it back into the main base docker-compose.yml!
Extended explanation: there is no way to pass the correct custom network name to docker-compose start, and since docker-compose ignores the override files on start, you need to make sure the base docker-compose.yml (or docker-compose.override.yml) contains your custom network name.
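Putting the steps together, a sketch of the per-project flow (paths and file names as in the example above, and assuming the container_name and network fixes are already in place):
cd /var/docker/project1
docker-compose -f docker-compose.yml -f customfolder/magento2.override.yml up -d
docker-compose stop

cd /var/docker/project2
docker-compose -f docker-compose.yml -f customfolder/magento2.override.yml up -d
docker-compose stop

# afterwards, from either folder:
docker-compose start   # restarts only that folder's containers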
In case you have not updated the names before running up -d, you will need to fix the network name inside each /var/lib/docker/containers/*/config.v2.json.
For example you could do the following (you have to stop Docker first):
sudo service docker stop
find /var/lib/docker/containers/ -type f -name "config.v2.json" -exec sed -i "s|wrong-network-name|overridden-network-name|g" '{}' \;
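# optional sanity check (my suggestion, not part of the original answer):
# before starting Docker again, the following should print nothing once all files are fixed
sudo find /var/lib/docker/containers/ -type f -name "config.v2.json" -exec grep -l "wrong-network-name" '{}' \;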
sudo service docker start
If done correctly, you should have unique container names, and each folder can now be used separately without breaking the other project's containers: docker-compose start, docker-compose stop, docker-compose ps.
NOTE: You still need to navigate to the separate folder to run those commands.

Related

Run two docker containers from subdirectory does not find config file

I would like to run two docker containers via docker-compose.
The following project structure is present:
|- service
|  |- docker-compose
|  |- DockerFile
|  |- config.yaml
|- client
|  |- docker-compose
|  |- DockerFile
I try to run two containers via the following command:
$ docker-compose -f ./client/docker-compose.yml -f ./service/docker-compose.yml up
Everything seems to be working just fine, except for one error message:
{"action":"startup","error":"config file 'config.yaml' not found.","level":"error","msg":"could not load config","time":"2020-03-17T15:17:19Z"}
But when I navigate into each directory and run them separately everything works just fine. So it seems that the config file which is stated in the volume is somehow not found.
The docker-compose file is:
./service/docker-compose.yml
version: '3.4'
services:
  thing:
    command:
      - --host
      - 0.0.0.0
      - --port
      - '8080'
      - --scheme
      - http
      - --config-file
      - config.yaml
    ports:
      - 8080:8080
    volumes:
      - ./config.yaml:/config.yaml
  web:
    build: .
    environment:
      - web_host=http://my-site.com:8080
    depends_on:
      - anotherThing
    links:
      - thing:thing.com
The configuration file itself has some arbitrary info:
---
authentication:
  my_arbitrary_key:
    enabled: true
Any idea how to make sure the config file is found when running docker-compose from a sub-directory? Or am I misusing the docker-compose command...
Update
Interestingly enough, if I swap the files in the docker-compose command, it doesn't run.
So when I use:
$ docker-compose -f ./service/docker-compose.yml -f ./client/docker-compose.yml up
Docker doesn't run and the error is:
Cannot create container for service thing: invalid volume specification: '/Users/user/Site/service/config.yaml:config.yaml:rw': invalid mount config for type "bind": invalid mount path: 'config.yaml' mount path must be absolute
It's looking for the config file relative to the outer directory; you could use an absolute path or put the path in a .env variable.
Best practice would be to use a single docker-compose file that references the other folder via relative paths.
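A sketch of that suggestion - the variable name CONFIG_DIR is hypothetical, and Compose reads the .env file from the directory you invoke it in:
# .env (in the directory where you run docker-compose)
CONFIG_DIR=/Users/user/Site/service

# ./service/docker-compose.yml excerpt
    volumes:
      - ${CONFIG_DIR}/config.yaml:/config.yaml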
Brief background: docker-compose is a utility to run multiple Docker containers, called services. Compose is a tool for defining and running multi-container Docker applications: you use a YAML file to configure your application's services, and then create and start all the services from your configuration with a single command. To learn more about all the features of Compose, see the list of features.
Hence it's not necessary to have two docker-compose files here; we should have one that starts both services. You can also use docker-compose to run just one service if required.
Find more details at - https://docs.docker.com/compose/

Why does docker-compose up not seem to sync volumes

Here is a simplified version of my docker-compose.yml (it's the volume in buggy-service that does not behave as I expect):
version: '3.4'
services:
  local-db:
    image: postgres:9.6
    environment:
      - DB_NAME=${DB_NAME}
      # other env vars (not important)
    ports:
      - 5432:5432
    volumes:
      - ~/.docker-volumes/${DB_NAME}/postgresql/data:/var/lib/postgresql/data
      - postgresql:/docker-entrypoint-initdb.d
  buggy-service:
    build:
      context: .
      dockerfile: Dockerfile.test
      target: buggy-image
      args:
        # bunch of args (not important)
    volumes:
      - /Users/me/temp:/temp

volumes:
  postgresql:
    driver_opts:
      type: none
      device: /Users/me/postgresql
      o: bind
If I do docker-compose -f docker-compose.yml up -d local-db, a container for it starts up automatically and I find that /Users/me/postgresql on the host machine (Mac OSX) binds correctly to /docker-entrypoint-initdb.d with content synced.
However, if I do docker-compose -f docker-compose.yml up --build -d buggy-service, a container does not start up automatically.
Question: How do I get buggy-service to behave like local-db, i.e., start up automatically with the required volume mounted?
Here's the stripped down version of Dockerfile.test referenced by buggy-service:
FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
# Bunch of ARG definitions (not important)
VOLUME /temp
# other stuff (not important)
ENTRYPOINT ["/bin/bash"]
# Other FROMs
Edit 1
A bit more info about what I’m trying to achieve...
The buggy-container I’m trying to get working runs .NET Core as the base image. Its purpose is to run dotnet test and generate coverage reports, which can then be consumed on the host, which may be either a local dev machine or a build server (in this case, Bitbucket Pipelines).
... followed by docker run -dit --name buggy-container buggy-image
This command creates a new container, not based on anything in the compose yml file. Without a volume specification, it will only get an anonymous volume since you've defined the volume in the Dockerfile (I tend to recommend against defining a volume there). You can see the anonymous volumes with a docker volume ls command, they'll be the ones with a long unique id and no reference to what they belong to.
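For instance (dangling volumes are the ones not referenced by any container; anonymous volumes from removed containers typically end up here):
docker volume ls -f dangling=true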
To define a host volume from docker run, you need the -v flag:
docker run -dit -v /Users/me/temp:/temp --name buggy-container buggy-image
From your now changed question, you have a new issue. Your container specifies a single command to run in the entrypoint:
ENTRYPOINT ["/bin/bash"]
When bash runs, it reads input from stdin. When that input ends, like when you run a container with no input attached, bash will exit. When the process your container runs exits, the container exits. From the details available, I can't tell you what that command should be, but a good starting point is to look at other images on docker hub that perform a similar task that you're trying to run, and look at the Dockerfile they use (many hub images point back to a GitHub repo with the full source).
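Given the stated goal of running dotnet test and collecting the results on the host, one possible sketch - the WORKDIR/COPY layout and the results directory are assumptions, not the asker's actual setup:
FROM microsoft/dotnet:2.1-sdk-alpine AS buggy-image
WORKDIR /app
# copy the test project into the image
COPY . .
# run the tests once and write the results to /temp,
# which docker-compose bind-mounts from the host
ENTRYPOINT ["dotnet", "test", "--results-directory", "/temp"]
Started with docker-compose up --build buggy-service (without -d), the container runs the tests to completion and the results land in /Users/me/temp on the host.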

Start particular service from docker-compose

I am new to Docker and have a docker-compose.yml which contains many services, and I need to start one particular service. I have a docker-compose.yml file with this information:
version: '2'
services:
  postgres:
    image: ${ARTIFACTORY_URL}/datahub/postgres:${BUILD_NUMBER}
    restart: "no"
    volumes:
      - /etc/passwd:/etc/passwd
    volumes_from:
      - libs
    depends_on:
      - libs
  setup:
    image: ${ARTIFACTORY_URL}/setup:${B_N}
    restart: "no"
    volumes:
      - ${HOME}:/usr/local/
I am able to bring everything up using the command:
docker-compose -f docker-compose.yml up -d --no-build
But I need to start only the "setup" service from the docker-compose file:
How can I do this?
It's very easy:
docker compose up <service-name>
In your case:
docker compose -f docker-compose.yml up -d setup
To stop the services again, you don't need to specify the service name:
docker compose down
will do.
Little side note: if you are in the directory where the docker-compose.yml file is located, docker-compose will use it implicitly; there's no need to add it as a parameter.
You need to provide it in the following situations:
the file is not in your current directory
the file name is different from the default one, eg. myconfig.yml
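One more flag worth knowing: if you don't want Compose to also start the services that setup depends on, there is --no-deps:
docker compose up -d --no-deps setup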
As far as I understand your question, you have multiple services in docker-compose but want to deploy only one.
docker-compose should be used for multi-container Docker applications. From the official docs:
Compose is a tool for defining and running multi-container Docker applications.
IMHO, you should run your service image separately with the docker run command.
PS: If you are asking about recreating only the container whose image is changed among the multiple services in your docker-compose file, then docker-compose handles that for you.

Why does docker-compose depend on the working directory?

When calling docker-compose in different directories, I get conflict errors and problems with networking:
Problem with conflicts
docker-compose.yml
version: '3'
services:
  redis:
    image: "redis:alpine"
    container_name: redis
I. create and start docker container by docker-compose => OK
$ docker-compose up --force-recreate -d
Creating redis ... done
II. recreate and start docker container by docker-compose => OK
$ docker-compose up --force-recreate -d
Recreating redis ... done
III. copy docker-compose.yml to another directory.
Then try to recreate from the other directory => ERROR
$ cp docker-compose.yml red2/
$ cd red2/
$ docker-compose up --force-recreate -d
Creating redis ... error
ERROR: for redis Cannot create container for service redis: Conflict. The container name "/redis" is already in use by container "1ba060b545f716731ac1c5992b680e4d4b3639fc0ffeb291899c712f0839d23a". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.
Different Networks
Containers created from docker-compose in different directories also do not share the same network.
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
4a4af52e89cd red2_default bridge local
57695428bd9d redis_default bridge local
Usecase
My use case for this scenario:
Call docker-compose from different deployment jobs.
Start containers for testing
Questions
Why is there the directory dependency? Is there an option to switch it off?
Does docker ps show which directory was used?
Answer for 1:
The directory name is used as the default project name.
It is better to specify the project name explicitly:
docker-compose -p myproject up --force-recreate -d
Question 2 still open
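Regarding question 2: docker ps does not show the directory directly, but Compose labels each container with its project name (which defaults to the directory name), so it can be recovered, for example:
docker inspect --format '{{ index .Config.Labels "com.docker.compose.project" }}' redis
docker ps --filter "label=com.docker.compose.project=red2"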

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would mean running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker page from the Docker documentation, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e. running one and only one app within a Docker container) will make your life easier in the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname, docker will take care of forwarding that connection to the IP it assigned to the neo4j container
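A minimal sketch of that legacy flow (the node.js image name is a placeholder):
docker run --detach --name myneo4j neo4j
docker run --detach --name mynodejs --link myneo4j:db <your nodejs image>
# the node app then reaches neo4j at http://db:7474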
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define an alias for the db; as a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory containing your docker-compose.yml file, the service name from the yml file, and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from the GitHub project aseemk/node-neo4j-template, which uses Node.js and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well, people have taken care of that for us already. We will use the official Docker image for neo4j, which can be found on Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. Setting this variable to none disables authentication altogether.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container with the environment variable NEO4J_URL set to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, Docker Compose will instruct the Docker engine to expose the web container's port 3000 on docker host port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is up and you can point your browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, Docker Compose will create a neo4j-data directory and mount it into the container at location /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.
