Multiple apps (microservices) and one proxy (nginx) docker-compose configuration/architecture - docker

Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose overriding. Have a docker-compose for each microservice and another docker-compose for the proxy. When the production deployment is done, all the docker-compose files would be merged into one (with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up). In this way, the proxy container, for example nginx, would have access to the microservices and be able to route to one or the other depending on the request.
Shared external network. Have a docker-compose for each microservice and another docker-compose for the proxy. First, an external network would have to be created to link the proxy container with the microservices: docker network create nginx_network. Then, in each docker-compose file, this network should be referenced in the necessary containers so that the proxy has visibility of the microservices and can thus use them in its configuration. An example is in the following link: https://stackoverflow.com/a/48081535/6112286.
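For illustration, a hedged sketch of what microservice1/docker-compose.yml might reference under this option (the service name app is an assumption):
services:
  app:
    image: ...
    networks:
      - default        # private network shared with this microservice's DB
      - nginx_network  # shared network the proxy is also attached to

networks:
  nginx_network:
    external: true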
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker, and doesn't require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.

There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'
services:
  proxy:
    build: ./
    networks:
      - microservice1
      - microservice2
    ports:
      - 80:80
      - 443:443
networks:
  microservice1:
    external:
      name: microservice1_default
  microservice2:
    external:
      name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
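For illustration, a minimal sketch of what the proxy's nginx configuration might look like in this layout; the listen port, upstream port 8000, and path prefixes are assumptions, while the hostnames follow the default compose container naming:
events {}

http {
    server {
        listen 80;

        # requests under /service1/ go to microservice1's app container
        location /service1/ {
            proxy_pass http://microservice1_app_1:8000/;
        }

        # requests under /service2/ go to microservice2's app container
        location /service2/ {
            proxy_pass http://microservice2_app_1:8000/;
        }
    }
}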

docker-compose is designed to orchestrate multiple containers in one single file. I do not know the content of your docker-compose files, but the right way is to write one single docker-compose.yml that could contain:
version: '3.7'
services:
  microservice1_app:
    image: ...
    volumes: ...
    networks:
      - service1_app
      - service1_db
  microservice1_db:
    image: ...
    volumes: ...
    networks:
      - service1_db
  microservice2_app:
    image: ...
    volumes: ...
    networks:
      - service2_app
      - service2_db
  microservice2_db:
    image: ...
    volumes: ...
    networks:
      - service2_db
  nginx:
    image: ...
    volumes: ...
    networks:
      - default
      - service1_app
      - service2_app
volumes:
  ...
networks:
  service1_app:
  service1_db:
  service2_app:
  service2_db:
  default:
    name: proxy_frontend
    driver: bridge
In this way the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with the aliases subsection within a service's networks section.
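For example, a sketch of that aliases subsection (the alias api.service1 is invented for illustration):
services:
  microservice1_app:
    image: ...
    networks:
      service1_app:
        aliases:
          # extra DNS name for this container, visible to any other
          # container on the service1_app network (e.g. nginx)
          - api.service1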
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), and nginx can only see microservice1_app and microservice2_app while being reachable from outside of Docker (bridge mode).

Related

Docker and Traefik different directories

I have 2 different directories each containing docker containers for different purposes and both spun up with docker compose.
Dir A has Traefik config and container (and other containers) as well as environment variables whereas Dir B is a bunch of containers.
I now want to include Traefik labels in Dir B containers, but when I run compose in Dir B, I'm facing:
WARN[0000] The "DOMAIN_NAME" variable is not set. Defaulting to a blank string.
service "[service name]" refers to undefined network traefik_proxy: invalid compose project
I'm guessing this is because services in Dir B can't see traefik_proxy since it's part of a different stack, and the same goes for the DOMAIN_NAME variable.
How can I have Dir B 'reach across' to Dir A? Is it even possible with my current config?
If you want to have multiple compose projects share a single Traefik frontend, that's certainly possible, but you need to place Traefik on a shared network. For this model, I would suggest starting with a docker-compose.yaml that only deploys Traefik, e.g.:
version: "3"
services:
traefik:
image: docker.io/traefik:latest
command:
- --api.insecure=true
- --providers.docker
- --accesslog=true
- --accesslog.filepath=/dev/stderr
- --providers.docker.exposedByDefault=false
ports:
- "80:80"
- "443:443"
- "127.0.0.2:8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
services:
external: true
Start by creating the shared network:
docker network create services
And then start the Traefik project:
pushd traefik; docker-compose up -d; popd
Now for every project you want to make available via Traefik, put your services on the services network. For example, let's say we have this in app1/docker-compose.yaml:
version: "3"
services:
app1:
image: docker.io/containous/whoami
networks:
- services
labels:
- "traefik.enable=true"
- "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
networks:
services:
external: true
Then I can run:
pushd app1; docker-compose up -d; popd
And now my app1 service is available at http://localhost/app1/.
We can add as many services as we want like this; the only requirement is that the containers are attached to the services network.
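For instance, a hypothetical app2/docker-compose.yaml would differ only in the router name and path prefix:
version: "3"
services:
  app2:
    image: docker.io/containous/whoami
    networks:
      - services
    labels:
      - "traefik.enable=true"
      # only the router name and rule change per service
      - "traefik.http.routers.app2.rule=PathPrefix(`/app2`)"
networks:
  services:
    external: true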

How to make a docker compose service use multiple networks

Everyone, I have a requirement to write a docker-compose.yml which needs to make one of the services use two networks: one is the default, for communication with the other services, and one is the external bridge network, for automatic self-discovery via nginx-proxy.
My docker-compose.yml looks like the one below.
version: '2'
services:
  dns-management-frontend:
    image: ......
    depends_on:
      - dns-management-backend
    ports:
      - 80
    restart: always
    networks:
      - default
      - bridge
  dns-management-backend:
    image: ......
    depends_on:
      - db
      - redis
    restart: always
    networks:
      - default
  db:
    image: ......
    volumes:
      - ./mysql-data:/var/lib/mysql
    restart: always
    networks:
      - default
  redis:
    image: redis
    ports:
      - 6379
    restart: always
    networks:
      - default
networks:
  default:
  bridge:
    external:
      name: bridge
When I start it, it gives me a network-scoped alias is supported only for containers in user defined networks error. I had to remove the networks section from the services and, after startup, manually run docker network connect <id_of_frontend_container> bridge to make it work.
Any advice on how to configure multiple networks in docker-compose? I have also read https://docs.docker.com/compose/networking/, but it is too simple.
The Docker network named bridge is special; most notably, it doesn't provide DNS-based service discovery.
For your proxy service, you should docker network create some other network, named anything other than bridge, and either docker network connect the existing container to it or restart the proxy with --net the_new_network_name. In the docker-compose.yml file, change the external: {name: ...} to the new network name.
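For example, a sketch of that sequence; the_new_network_name and the proxy container name nginx-proxy are placeholders:
# create a user-defined network, named anything other than "bridge"
docker network create the_new_network_name

# attach the already-running proxy container to it
docker network connect the_new_network_name nginx-proxy

# then point the external network in docker-compose.yml at the new name:
#   networks:
#     bridge:
#       external:
#         name: the_new_network_name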
Any advice on how to configure multiple networks in docker-compose?
As you note, Docker Compose (and for that matter Docker proper) doesn't support especially involved network topologies. At the half-dozen-containers scale where Compose works well, you don't really need an involved network topology. Use the default network that Docker Compose provides for you, and don't bother manually configuring networks: unless it's actually necessary (as the external proxy is in your question).
You cannot, for now, mix the default bridge with other networks in Compose.
The issue is still open ...

docker: split structure into useful networks

I'm not quite sure about the correct usage of docker networks.
I'm running a (single-hosted) reverse proxy and the containers for the application itself, but I would like to set up networks like proxy, frontend and backend, the last one per project, assuming there could eventually be multiple projects.
But I'm not even sure if this structure is the way it should be done. I think the backend should only be accessible to the frontend and the frontend should be accessible to the proxy.
So this is my current working structure with only one network (bridge) - which doesn't make sense:
Reverse proxy (network: reverse-proxy):
jwilder/nginx-proxy
jrcs/letsencrypt-nginx-proxy-companion
Database
mongo:3.6.2
Project 1
one/frontend
one/backend
two/frontend
two/backend
So my first docker-compose looks like this:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"
  mongodb:
    container_name: mongodb
    image: mongo:3.6.2
    networks:
      - reverse-proxy
volumes:
  html:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
That means I had to create the reverse-proxy network beforehand. I'm not sure if this is correct so far.
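For reference, since the network is marked external, that pre-step is presumably just:
docker network create reverse-proxy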
The project applications - frontend containers and backend containers - are created by my CI using docker commands (not docker compose):
docker run \
  --name project1-one-frontend \
  --network reverse-proxy \
  --detach \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail@my-server.com \
  project1-one-frontend:latest
How should I split this into useful networks?
TL;DR; You can attach multiple networks to a given container, which lets you isolate traffic to a great degree.
useful networks
Point of context, I'm inferring from the question that "useful" means there's some degree of isolation between services.
I think the backend should only be accessible to the frontend and the frontend should be accessible to the proxy.
This is pretty simple with docker-compose. Just specify the networks you want at the top level, just like you've done for reverse-proxy:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  frontend:
  backend:
Then something like this:
version: '3.5'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      ...
  frontend1:
    image: some/image
    networks:
      - reverse-proxy
      - backend
  backend1:
    image: some/otherimage
    networks:
      - backend
  backend2:
    image: some/otherimage
    networks:
      - backend
  ...
Set up like this, only frontend1 can reach backend1 and backend2. I know this isn't an option, since you said you're running the application containers (frontends and backends) via docker run. But I think it's a good illustration of how to achieve roughly what you're after within Docker's networking.
So how can you do what's illustrated in docker-compose.yml above? I found this: https://success.docker.com/article/multiple-docker-networks
To summarize, you can only attach one network using docker run, but you can use docker network connect <container> <network> to connect running containers to more networks after they're started.
The order in which you create networks, run docker-compose up, or run your various containers in your pipeline is up to you. You can create the networks inside the docker-compose.yml if you like, or use docker network create and import them into your docker-compose stack. It depends on how you're using this stack, and that will determine the order of operations here.
The guiding rule, probably obvious, is that the networks need to exist before you try to attach them to a container. The most straightforward pipeline might look like:
docker-compose up with all networks defined in the docker-compose.yml
for each app container:
docker run the container
docker network connect the right networks (sketched below)
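A sketch of that pipeline, reusing the names from the question (the backend network name is a placeholder):
# the extra networks must exist before they can be attached
docker network create backend

# docker run can only attach one network at start...
docker run --name project1-one-frontend --network reverse-proxy --detach \
  project1-one-frontend:latest

# ...so connect the remaining networks once the container exists
docker network connect backend project1-one-frontend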
... would like to set up networks like proxy, frontend and backend. ... I think the backend should only be accessable for the frontend and the frontend should be accessable for the proxy.
Networks in docker don't talk to other docker networks, so I'm not sure if the above was in reference to networks or containers on those networks. What you can have is a container on multiple docker networks, and it can talk with services on either network.
The important part about designing a network layout with docker is that any two containers on the same network can communicate with each other and will find each other using DNS. Where people often mess this up is creating something like a proxy network for a reverse proxy, attaching multiple microservices to the proxy network and suddenly find that everything on that proxy network can find each other. So if you have multiple projects that need to be isolated from each other, they cannot exist on the same network.
In other words, if app-a and app-b cannot talk to each other but do need to talk to the shared proxy, then the shared proxy needs to be on multiple app-specific networks, rather than each app being on the same shared proxy network.
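A minimal sketch of that shape, with invented network names:
version: '3.5'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    networks:
      # one network per app; app-a and app-b never share a network,
      # so each can reach the proxy but not the other
      - app-a-net
      - app-b-net
networks:
  app-a-net:
    external: true
  app-b-net:
    external: true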
This can get much more complicated depending on your architecture. E.g. one design that I've been tempted to use is to have each stack run its own reverse proxy that is attached to the application's private network and to a shared proxy network without publishing any ports. A global reverse proxy then publishes the port and talks to each stack-specific reverse proxy. The advantage there is that the global reverse proxy does not need to know all of the potential app networks in advance, while still allowing you to only expose a single port and not have microservices connecting to each other through the shared proxy network.

How to reach services with the URL in Docker-Compose

I'm trying to set up an application environment with two different docker-compose.yml files. The first one creates services in the default network elastic-apm-stack_default. To reach the services of both docker-compose files I used the external option within the second docker-compose file. Both files look like this:
# elastic-apm-stack/docker-compose.yml
services:
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.2.4
    build: ./apm_server
    ports:
      - 8200:8200
    depends_on:
      - elasticsearch
      - kibana
  ...
# sockshop/docker-compose.yml
services:
  front-end:
    ...
    ...
    networks:
      - elastic-apm-stack_default
networks:
  elastic-apm-stack_default:
    external: true
Now the front-end service in the second file needs to send data to the apm-server service in the first file. Therefore, I used the URL http://apm-server:8200 in the source code of the front-end service, but I always get a connection refused error. If I define all services in a single docker-compose file it works, but I want to keep the docker-compose files separate.
Could anyone help me? :)
By default, Docker containers run on the docker0 bridge network, whose gateway address on the host is 172.17.0.1.
So, since apm-server publishes port 8200 to the host, you may use the URL
http://172.17.0.1:8200
to get access to your apm-server container.

rationale behind docker compose "links" order

I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location and forward them to Logstash, followed by the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links"
Elasticsearch links to no one, while logstash links to both redis and elasticsearch:
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say elasticsearch is linked to logstash?
Instead of using the legacy container linking method, you could use Docker user-defined networks. Basically you can define a network for your services and then indicate in the docker-compose file that you want the container to run on that network. If your containers all run on the same network they can access each other via their container name (DNS records are added automatically).
1) : Create User Defined Network
docker network create pocnet
2) : Update docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) : Start Services
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) : Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your_Machine:/ docker exec -it kibana bash
kibana@123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
there are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities to the default bridge network.
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above 4 functionalities with non-default user-defined networks, without any additional config, docker network provides:
automatic name resolution using DNS
automatic secured isolated environment for the containers in a network
ability to dynamically attach and detach to multiple networks
supports the --link option to provide name alias for the linked container
In your case, automatic DNS will help you on a user-defined network. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network. You just have to put your ELK stack + Redis containers in the ELK network and remove the link directives from the compose file.
Your order looks fine to me. If you have any problem regarding the order, or with waiting for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from here.
