Docker Compose for multiple GitHub microservices

I'm trying to find a solution for managing my local environment with docker-compose across multiple microservices.
Each microservice has its own GitHub repository and can depend on another microservice; for example, the Order service communicates with the Product service.
All microservices together form one complete solution, so when working locally I need to run every microservice with docker-compose up. Maybe there is a way to automate this by creating just one docker-compose file that contains all the microservices' containers.
At the moment I have this directory structure:
Projects
  Project A
    - docker-compose.yml (contains 3 containers)
  Project B
    - docker-compose.yml (contains 3 containers)

You can create one docker-compose.yml that contains all those services, and set the Dockerfile path for each service.
Projects
  Project A
  Project B
  docker-compose.yml
Example docker-compose.yml:
version: '3'
services:
  project-a:
    build:
      context: ./Project-A
      dockerfile: Dockerfile
  project-b:
    build:
      context: ./Project-B
      dockerfile: Dockerfile
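Since the services need to talk to each other (Order calling Product, in your example), it helps that all services in one file share Compose's default network, where each service name resolves as a hostname. A sketch extending the example above; the order-service/product-service names are assumptions for illustration:
version: '3'
services:
  product-service:        # hypothetical name for the Product microservice (Project A)
    build:
      context: ./Project-A
      dockerfile: Dockerfile
  order-service:          # hypothetical name for the Order microservice (Project B)
    build:
      context: ./Project-B
      dockerfile: Dockerfile
    depends_on:
      - product-service   # started after product-service; reachable at the hostname product-service
A single docker-compose up --build from the Projects directory then builds and starts everything together.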

How to share files prepared at the build stage between containers with docker compose

I have 2 services: nginx and web.
When I build the web image, I build the frontend with npm install && npm run build.
But I need the prepared files in both containers: in web and in nginx.
How do I share files between containers (images)? I can't simply use volumes, because they are only mounted at runtime.
The Dockerfile COPY --from=... directive can copy files out of an arbitrary image. While it's most commonly used in multi-stage builds, you can name any image, even one you built yourself.
Say your docker-compose.yml file looks like:
version: '3.8'
services:
  web:
    build: .
    image: my/web
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
    ports: ['8000:80']
Note that we've explicitly given the web image a name; also notice that there are no volumes: in this setup.
In the proxy image, we can then copy files out of that image:
# Dockerfile.nginx
FROM nginx
COPY --from=my/web /app/static /usr/share/nginx/html
The only complication here is that Compose doesn't know that one image is built off of the other. You'll probably have to manually tell it to rebuild the application image so that it gets built before the proxy image.
docker-compose build web
docker-compose build
docker-compose up -d
You can use this in a more production-oriented setup to deploy the application without having the code directly available. Create a base docker-compose.yml that names an image: for both containers, and add a separate docker-compose.override.yml file that has the build: blocks. After running docker-compose build twice as above, you can docker-compose push the built images and then run this container stack on your production system, getting the images from the registry, without a local copy of the source tree and without volumes.
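A minimal sketch of that split; the my/nginx image name is an assumption for the proxy, everything else matches the example above:
# docker-compose.yml (base file, the only one copied to production)
version: '3.8'
services:
  web:
    image: my/web
  nginx:
    image: my/nginx        # hypothetical name so the proxy image can be pushed too
    ports: ['8000:80']

# docker-compose.override.yml (only on the machine that builds)
version: '3.8'
services:
  web:
    build: .
  nginx:
    build:
      context: .
      dockerfile: Dockerfile.nginx
On the build host, docker-compose automatically merges both files, so the build and push commands above work unchanged; on production only the base file is present and docker-compose up -d pulls the two images from the registry (in practice the image names would carry a registry prefix).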

Access container_name in Dockerfile (from docker-compose)

I have set up a docker-compose project which creates multiple images:
cache_server:
  image: current_timezone/full-supervisord-cache-server:1.00
  container_name: renamed-varnish-cache
  networks:
    - network_frontend
  build:
    context: "./all-services/"
    dockerfile: "./cache-server/Dockerfile.cacheserver.varnish"
    args:
      - DOCKER_CONTAINER_USERNAME=username
  ports:
    - "6081:6081"
    - "6082:6082"
When I run docker-compose -f file1.yml -f file2.override.yml up I then get the containers; in the case above, the one shown will be named renamed-varnish-cache.
In the corresponding Dockerfile (./nginx-proxy/Dockerfile.proxy.nginx) I want to be able to use the container_name property defined in the docker-compose.yml shown above.
When the containers are created, I want to update the Varnish configuration inline in the Dockerfile: RUN sed -i "s|webserver_container_name|renamed-varnish-cache|g" /etc/varnish/default.vcl
For instance:
backend webserver_container_name {
    .host = "webserver_container_name";
    .port = "8080";
}
To the following (I anticipate I will have to replace the - with _ in the backend name):
backend renamed_varnish_cache {
    .host = "renamed-varnish-cache";
    .port = "8080";
}
Is there a way to access the names defined in docker-compose as variables inside the Dockerfile?
In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
version: '3.8'
services:
  web:
    build: .
    image: my/django-app
  worker:
    image: my/django-app
    command: celery worker ...
Now with this stack you can docker-compose build to build the one image, and then run docker-compose up to launch both containers from that image. (During the build you can't know what the container names will be, and there will be two container names so you can't just use one in the Dockerfile.)
At a design level, this means that you often can't include configuration-type settings in the image itself (other containers' hostnames, user IDs for host-shared filesystems). If your application lets you specify these things as environment variables, that's the easiest option. You can use bind mounts (volumes:) to inject whole config files. If neither of these works for you, you can use an entrypoint script to rewrite the config file.
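For the Varnish case above, a minimal entrypoint sketch, assuming the backend hostname arrives in a hypothetical BACKEND_HOST environment variable set in docker-compose.yml (rather than being read from container_name):
#!/bin/sh
# docker-entrypoint.sh -- rewrite the VCL at container start, then run the normal command
set -e
: "${BACKEND_HOST:=webserver}"   # hypothetical variable; e.g. set to renamed-varnish-cache in Compose
sed -i "s|webserver_container_name|${BACKEND_HOST}|g" /etc/varnish/default.vcl
exec "$@"                        # hand off to the image's original command (e.g. varnishd ...)
The Dockerfile would COPY this script in and set it as the ENTRYPOINT, and the Compose service would set environment: - BACKEND_HOST=renamed-varnish-cache; the image itself stays generic.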

Best practice - having multiple docker-compose files in a repo

I'm currently working on a full-stack web project that consists of the following components:
Database (MariaDB)
Frontend (Angular)
Backend (NodeJS)
Every component should be deployable through Docker. For that I have a Dockerfile for each of them, and I also defined a docker-compose.yml in the repository root to deploy all of them together.
# current repo structure
|frontend/
  |src/
  |docker/
    - Dockerfile
    - docker-compose.yml
|backend/
  |src/
  |docker/
    - Dockerfile
    - docker-compose.yml
|database/
  |src/
  |docker/
    - Dockerfile
    - docker-compose.yml
- docker-compose.yml
Do you think this is good practice? I am unsure because my current structure seems kind of confusing. How do you handle this in similar projects?
docker-compose is designed to orchestrate multiple components of a project in one single place: the docker-compose file.
In your case, and as m303945 said, you don't need multiple docker-compose files. Instead, your main docker-compose.yml should reference the Dockerfile of each of your components. It could contain something like this:
services:
  frontend:
    build:
      context: frontend
      dockerfile: docker/Dockerfile
  backend:
    build:
      context: backend
      dockerfile: docker/Dockerfile
  database:
    build:
      context: database
      dockerfile: docker/Dockerfile
You don't need multiple docker-compose files. If you want to run only specific apps together, for example only the database and the backend, just run this command:
docker-compose -f docker-compose-file.yml up -d database backend
where database and backend are the service names in the docker-compose file.

Micro Services With Docker Compose: Same Container, Multiple Projects

Along with a few others, I am having issues using a microservices architecture and employing docker-compose the way I want to.
Summary:
I have X microservice projects (let's call these project A, project B and project C). Each microservice depends on the same containers (let's call these dependency D and dependency E).
The Problem:
Ideally, project A, B and C would ALL have both dependencies (D & E) in their docker-compose.yml files; however, this becomes an issue as docker compose sees these as duplicate containers when in reality, I would like to reuse them. Here is an error message that is commonly seen:
ERROR: for A  Cannot create container for service A: b'Conflict. The container name "/A" is already in use by container "sha". You have to remove (or rename) that container to be able to reuse that name.'
From what I have seen, people are recommending that you define the container in one project and reference it using networks and external links. Although this works, it introduces a dependency on a different docker-compose yml file (the file that defines the dependency!).
Another approach that I've read argues for isolating containers in their own docker-compose files and then referencing multiple files when you want to build. Again, although this works, it's certainly not as stunningly convenient as docker typically is to work with. If I am unable to work out a solution, I will go with this approach.
Have other people in the non-mono repo world (specifically with micro services) had any success with a different approach?
I've been asked to clarify with some examples:
Here is what the two compose files look like for project A and project B:
Project A:
version: '2'
services:
  dependencyD:
    image: dependencyD:latest
    container_name: dependencyD
  dependencyE:
    image: dependencyE:latest
    container_name: dependencyE
  projectA:
    image: projectA:latest
    container_name: projectA
    depends_on:
      - dependencyD
      - dependencyE
Project B:
version: '2'
services:
  dependencyD:
    image: dependencyD:latest
    container_name: dependencyD
  dependencyE:
    image: dependencyE:latest
    container_name: dependencyE
  projectB:
    image: projectB:latest
    container_name: projectB
    depends_on:
      - dependencyD
      - dependencyE
There is a feature called external links. From the docs:
Link to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services.
Having multiple docker-compose.yml files is also a common way to organize containers into meaningful groups. Maybe your scenario can combine multiple YAML files with external links.
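A sketch of that combination, assuming the shared dependencies get their own compose file and everything joins a pre-created Docker network; the file and network names here are hypothetical:
# dependencies/docker-compose.yml -- started once with: docker-compose -f dependencies/docker-compose.yml up -d
version: '2'
services:
  dependencyD:
    image: dependencyD:latest
    container_name: dependencyD
    networks: [shared]
  dependencyE:
    image: dependencyE:latest
    container_name: dependencyE
    networks: [shared]
networks:
  shared:
    external: true   # create it first: docker network create shared

# project-a/docker-compose.yml -- references the already running dependency containers
version: '2'
services:
  projectA:
    image: projectA:latest
    external_links:
      - dependencyD
      - dependencyE
    networks: [shared]
networks:
  shared:
    external: true
On a shared user-defined network the dependency containers are already resolvable by name, so the external_links entries mainly document the dependency; the important part is that only the dependencies file ever declares dependencyD and dependencyE.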

What is the difference between `docker-compose build` and `docker build`?

What is the difference between docker-compose build and docker build?
Suppose there is a docker-compose.yml file in a Dockerized project's path:
docker-compose build
And
docker build
docker-compose can be considered a wrapper around the docker CLI (in fact it is another implementation in Python, as said in the comments) that saves time and avoids 500-character-long command lines (and also starts multiple containers at the same time). It uses a file called docker-compose.yml to retrieve its parameters.
You can find the reference for the docker-compose file format here.
So basically docker-compose build will read your docker-compose.yml, look for all services containing the build: statement and run a docker build for each one.
Each build: can specify a Dockerfile, a context and args to pass to docker.
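For instance, a build: block with an args: entry maps onto docker build flags; the APP_VERSION argument below is a hypothetical example, consumed in the Dockerfile with a matching ARG instruction:
version: '3.2'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile-alpine
      args:
        - APP_VERSION=1.2.3   # hypothetical build argument (ARG APP_VERSION in the Dockerfile)
This is roughly equivalent to: docker build --build-arg APP_VERSION=1.2.3 -f ./web/Dockerfile-alpine ./web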
To conclude with an example docker-compose.yml file:
version: '3.2'
services:
  database:
    image: mariadb
    restart: always
    volumes:
      - ./.data/sql:/var/lib/mysql
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    depends_on:
      - database
When calling docker-compose build, only the web service needs an image to be built. The equivalent docker build command would look like:
docker build -t myproject_web -f ./web/Dockerfile-alpine ./web
docker-compose build will build the services in the docker-compose.yml file.
https://docs.docker.com/compose/reference/build/
docker build will build the image defined by Dockerfile.
https://docs.docker.com/engine/reference/commandline/build/
Basically, docker-compose is a better way to use docker than just a docker command.
If the question here is whether the docker-compose build command will build a single zip-like bundle containing multiple images, which otherwise would have been built separately with the usual Dockerfiles, then that thinking is wrong.
docker-compose build will build the individual images by going through the individual service entries in docker-compose.yml.
With the docker images command, we can see all the individual images being saved as well.
The real magic is docker-compose up.
This one basically creates a network of interconnected containers that can talk to each other using a container's name just like a hostname.
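For example, with a hypothetical web service and a database, the web container can reach the database simply by using the service name as a hostname on the network that docker-compose up creates:
version: '3.2'
services:
  database:
    image: mariadb
  web:
    build: ./web
    environment:
      - DB_HOST=database   # hypothetical app setting; "database" resolves inside the Compose network
    depends_on:
      - database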
Adding to the first answer...
You can give the image name and container name under the service definition.
e.g. for the service called 'web' in the below docker-compose example, you can give the image name and container name explicitly, so that docker does not have to use the defaults.
Otherwise the image name that docker will use will be the concatenation of the folder (Directory) and the service name. e.g. myprojectdir_web
So it is better to explicitly put the desired image name that will be generated when docker build command is executed.
e.g.
image: my-web-service-image
container_name: my-webServiceImage-Container
(Docker image names must be lowercase, which is why the image name here avoids capital letters.)
example docker-compose.yml file:
version: '3.2'
services:
  web:
    build:
      dockerfile: Dockerfile-alpine
      context: ./web
    ports:
      - 8099:80
    image: my-web-service-image
    container_name: my-webServiceImage-Container
    depends_on:
      - database
A few additional words about the difference between docker build and docker-compose build.
Both have an option for building images using an existing image as a cache of layers:
with docker build, the option is --cache-from <image>;
with docker-compose, there is a cache_from key in the build section.
Unfortunately, up until now, at this level, images made by one are not usable by the other as a layer cache (the IDs are not compatible).
However, docker-compose v1.25.0 (2019-11-18) introduces an experimental feature, COMPOSE_DOCKER_CLI_BUILD, so that docker-compose uses the native Docker builder (therefore, images made by docker build can be used as a layer cache for docker-compose build).
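A sketch of both cache options, with myapp/web as a hypothetical image name. With plain docker build:
docker build --cache-from myapp/web:latest -t myapp/web:latest ./web
The equivalent cache_from key in docker-compose.yml:
services:
  web:
    image: myapp/web:latest
    build:
      context: ./web
      cache_from:
        - myapp/web:latest
And to make docker-compose delegate to the native Docker builder (v1.25.0 or later):
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker-compose build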
