Best practice - having multiple docker-compose files in a repo - docker

I'm currently working on a fullstack web project that consists of the following components:
Database (MariaDB)
Frontend (Angular)
Backend (NodeJS)
Every component should be deployable through Docker. For that I have a Dockerfile for each of them. I also defined a docker-compose file in the repository root to deploy all of them together.
# current repo structure
| frontend/
|   src/
|   docker/
|     - Dockerfile
|     - docker-compose.yml
| backend/
|   src/
|   docker/
|     - Dockerfile
|     - docker-compose.yml
| database/
|   src/
|   docker/
|     - Dockerfile
|     - docker-compose.yml
- docker-compose.yml
Do you think this is good practice? I am unsure because I think my current structure is kind of confusing. How do you handle it in similar projects?

docker-compose is designed to orchestrate multiple components of a project in one single place: the docker-compose file.
In your case, and as m303945 said, you don't need multiple docker-compose files. Instead, your main docker-compose.yml should reference the Dockerfile of each of your components. This file could contain something like this:
services:
  frontend:
    build:
      context: frontend
      dockerfile: docker/Dockerfile
  backend:
    build:
      context: backend
      dockerfile: docker/Dockerfile
  database:
    build:
      context: database
      dockerfile: docker/Dockerfile
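With a root-level file like that, one command from the repository root should be enough to build all three images and start the stack (a minimal usage sketch; the service names are the ones defined above):
docker-compose up --build -d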

You don't need multiple docker-compose files. If you want to run only specific services together, for example only the database and the backend, just run this command:
docker-compose -f docker-compose-file.yml up -d database backend
where database and backend are the service names in the docker-compose file.
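If the backend should always be started together with its database, you could also declare that relationship in the compose file, so that bringing up the backend pulls the database in automatically (a sketch, assuming the service names used above):
services:
  backend:
    build:
      context: backend
      dockerfile: docker/Dockerfile
    depends_on:
      - database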

Related

Docker-Compose for multiple github microservices

I'm trying to find a solution for managing my local environment using docker-compose for multiple microservices.
Each microservice has its own GitHub repository and can depend on another microservice, for example the Order service communicates with the Product service.
All the microservices together form one complete solution, so when working locally I need to run every microservice with docker-compose up. Maybe there is a way to automate this by creating just one docker-compose file that contains all the microservice containers.
At the moment I have this directory structure:
Projects
  Project A
    - docker-compose.yml (contains 3 containers)
  Project B
    - docker-compose.yml (contains 3 containers)
You can create a docker-compose.yaml that contains all those services. Then you can set the Dockerfile path for each service.
Projects
  Project A
  Project B
  docker-compose.yml
docker-compose.yaml example:
version: '3'
services:
  project-a:
    build:
      context: ./Project-A
      dockerfile: Dockerfile
  project-b:
    build:
      context: ./Project-B
      dockerfile: Dockerfile
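Since the question mentions that the Order service calls the Product service, the same file can also express that dependency so the services start in a sensible order (a sketch; here project-a is assumed to play the role of the Order service and project-b the Product service):
services:
  project-a:
    build:
      context: ./Project-A
      dockerfile: Dockerfile
    depends_on:
      - project-b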

How to create a docker service without project name prefix in its name?

I have just started learning Docker to build microservices. I am trying to understand and follow the eShopOnContainers app as my reference application to understand all the concepts. For testing, I created two ASP.NET Web API services and a docker-compose.yml file to check that I can run them. I am able to run the services, but one thing I have noticed is that the service names are not very tidy: they contain the folder name as a prefix. For example, here is part of my docker-compose.yml file:
services:
  orders-api:
    image: orders-api
    build:
      context: .
      dockerfile: src/Services/Orders/Dockerfile
    ports:
      - "1122:80"
I am expecting that when this service runs it should be named orders-api, but instead its name becomes microservicestest_orders-api_1. MicroservicesTest is the folder name of my project. I was trying to find a way around it, but it seems like this is a limitation of Docker itself. The only thing I don't understand is that when I run the sample app of eShopOnContainers, their services have readable names without any prefixes. How are they able to generate more readable service names?
Can you please tell me what I am missing here?
docker-compose automatically adds a prefix to your containers' names.
By default, the prefix is the basename of the directory in which you executed docker-compose up.
You can change it with docker-compose --project-name MY_PROJECT_NAME, but this is not what you want.
You can specify a container's name by using container_name:.
services:
  orders-api:
    image: orders-api
    build:
      context: .
      dockerfile: src/Services/Orders/Dockerfile
    container_name: orders-api
    ports:
      - "1122:80"

Docker compose with name other than dockerfile

I have used docker to create CLI interfaces where I test my code. These are named reasonably as:
proj_root/.../docks/foo.dockerfile
proj_root/.../docks/bar.dockerfile
Because there is more than one dock involved, the top level "Dockerfile" at the project root is unreasonable. Although I can't copy ancestor directories when building in docker, I can clone my entire repo.
So my project architecture works for me.
Next, I look up docker-compose because I need to match my docker cards up against a postgres db and expose some ports.
However, from the command line, docker-compose seems to be anchored to the hard-coded convention of a file named "Dockerfile" in the current working directory.
But! I see the error message implies the tool is capable of looking for an arbitrarily named dockerfile:
ERROR: Cannot locate specified Dockerfile: Dockerfile
The question is: how do I set docker-compose off looking for foo.dockerfile rather than ./Dockerfile?
In your docker-compose, under the service:
services:
  serviceA:
    build:
      context: <folder of your project>
      dockerfile: <path and name to your Dockerfile>
As mentioned in the docker-compose.yml documentation, you can override the Dockerfile name within the build properties of your docker-compose services.
For example:
version: "3"
services:
  foo:
    image: user/foo
    build:
      context: .../docks
      dockerfile: foo.dockerfile
  bar:
    image: user/bar
    build:
      context: .../docks
      dockerfile: bar.dockerfile
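For a one-off build outside of compose, the same override is available via docker's -f flag (a usage sketch; the path to the docks directory is a placeholder, since it is abbreviated in the question):
docker build -f path/to/docks/foo.dockerfile -t user/foo path/to/docks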

Docker compose volumes_from not updating after rebuild

Imagine two containers: a webserver (1) hosting static HTML files that need to be built from templates inside a data volume container (2).
The docker-compose.yml file looks something like this:
version: "2"
services:
webserver:
build: ./web
ports:
- "80:80"
volumes_from:
- templates
templates:
build: ./templates
The Dockerfile for the templates service looks like this:
FROM ruby:2.3
# ... there is more but that should not be important
WORKDIR /tmp
COPY ./Gemfile /tmp/Gemfile
RUN bundle install
COPY ./source /tmp/source
RUN bundle exec middleman build --clean
VOLUME /tmp/build
When I run docker-compose up everything works as expected: the templates are built, the webserver hosts them, and you can view them in the browser.
The problem is, when I update ./source and restart/rebuild the setup, the files the webserver hosts are still the old ones, although the log shows that the container was rebuilt - at least the last three layers after COPY ./source /tmp/source. So the changes inside the source folder are picked up by the rebuild, but I'm not able to get the changes to show up in the browser.
What am I doing wrong?
Compose preserves volumes when containers are recreated, which is probably why you are seeing the old files.
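As a side note, if you wanted to keep the volume-based setup, the stale anonymous volume is what has to go: as far as I know, docker-compose down -v removes the anonymous volumes attached to the containers, so a subsequent rebuild picks up the fresh content:
docker-compose down -v
docker-compose up --build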
Generally it is not a good idea to use volumes for source code (or, in this case, static HTML files). Volumes are for data you want to persist, like data in a database. Source code changes with each version of the image, so it doesn't really belong in a volume.
Instead of using a data volume container for these files, you can use a builder container to compile them and a webserver service to host them. You'll need to add a COPY to the webserver Dockerfile to include the files.
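That webserver Dockerfile might look roughly like this (a sketch, assuming the builder's output from /tmp/build has been extracted to ./web/build on the host; the nginx base image is just an example):
# web/Dockerfile (hypothetical)
FROM nginx:alpine
# static files produced by the builder container, extracted to ./web/build
COPY build /usr/share/nginx/html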
To accomplish this you would change your docker-compose.yml to this:
version: "2"
services:
webserver:
image: myapp:latest
ports: ["80:80"]
Now you just need to build myapp:latest. You could write a script which:
- builds the builder container
- runs the builder container
- builds the myapp container
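A minimal sketch of such a script (image names are illustrative; instead of literally running the builder, it extracts the files that the builder image already produced during its build, using docker create and docker cp):
#!/bin/sh
set -e
# 1. build the builder image from the templates directory
docker build -t myapp-builder ./templates
# 2. create a throwaway container and copy the generated files out of it
id=$(docker create myapp-builder)
docker cp "$id:/tmp/build" ./web/build
docker rm "$id"
# 3. build the final webserver image, whose Dockerfile COPYs the files in
docker build -t myapp:latest ./web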
You can also use a tool like dobi instead of writing a script (disclaimer: I am the author of this tool). There is an example of building a minimal docker image which is very similar to what you're trying to do.
Your dobi.yaml might look something like this:
image=builder:
  image: myapp-dev
  context: ./templates

job=templates:
  use: builder

image=webserver:
  image: myapp
  tags: [latest]
  context: .
  depends: [templates]

compose=serve:
  files: [docker-compose.yml]
  depends: [webserver]
Now if you run dobi serve it will do all the steps for you. Each step will only be run if files have changed.

How to ADD sibling directory to Docker image

Is there a way to copy a sibling directory into my docker image, i.e., something like
ADD ../sibling_directory /usr/local/src/web/
This is not permitted - according to the Docker documentation, all resources accessible by my Dockerfile must be under the Dockerfile working directory.
In my scenario, I am in the process of splitting out worker services from web services from a common code base, and I'd like to do that logically first without having to actually physically separate the code.
Here's what a potential fig.yml might look like:
web:
  build: ./web/
  volumes:
    - /usr/local/src/web/
worker:
  build: ./worker/
  volumes_from:
    - web
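One approach consistent with the compose answers above is to move the build context up to the directory that contains web/, worker/, and the shared directory, and point each service at its own Dockerfile; the formerly-sibling directory then sits inside the context and can be ADDed with a plain relative path (a sketch in the newer compose syntax, assuming each service keeps its Dockerfile in its own folder):
services:
  web:
    build:
      context: .
      dockerfile: web/Dockerfile
  worker:
    build:
      context: .
      dockerfile: worker/Dockerfile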
