I'm not quite sure about the correct usage of docker networks.
I'm running a (single-host) reverse proxy and the containers for the application itself, but I would like to set up networks like proxy, frontend and backend, the latter two per project, assuming there could be multiple projects eventually.
But I'm not even sure if this structure is the way it should be done. I think the backend should only be accessible to the frontend, and the frontend should only be accessible to the proxy.
So this is my current working structure with only one network (bridge) - which doesn't make sense:
Reverse proxy (network: reverse-proxy):
  jwilder/nginx-proxy
  jrcs/letsencrypt-nginx-proxy-companion
Database:
  mongo:3.6.2
Project 1:
  one/frontend
  one/backend
  two/frontend
  two/backend
So my first docker-compose looks like this:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"

  mongodb:
    container_name: mongodb
    image: mongo:3.6.2
    networks:
      - reverse-proxy

volumes:
  html:

networks:
  reverse-proxy:
    external:
      name: reverse-proxy
That means I have to create the reverse-proxy network beforehand. I'm not sure if this is correct so far.
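For reference, creating that external network up front is a single command:

docker network create reverse-proxy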
The project applications - frontend containers and backend containers - are created by my CI using docker commands (not docker compose):
docker run \
  --name project1-one-frontend \
  --network reverse-proxy \
  --detach \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail@my-server.com \
  project1-one-frontend:latest
How should I split this into useful networks?
TL;DR: You can attach multiple networks to a given container, which lets you isolate traffic to a great degree.
useful networks
As a point of context, I'm inferring from the question that "useful" means there's some degree of isolation between services.
I think the backend should only be accessible to the frontend, and the frontend should only be accessible to the proxy.
This is pretty simple with docker-compose. Just specify the networks you want at the top level, just like you've done for reverse-proxy:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  frontend:
  backend:
Then something like this:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      ...

  frontend1:
    image: some/image
    networks:
      - reverse-proxy
      - backend

  backend1:
    image: some/otherimage
    networks:
      - backend

  backend2:
    image: some/otherimage
    networks:
      - backend

...
Set up like this, only frontend1 can reach backend1 and backend2. I know this isn't an option, since you said you're running the application containers (frontends and backends) via docker run. But I think it's a good illustration of how to achieve roughly what you're after within Docker's networking.
So how can you do what's illustrated in docker-compose.yml above? I found this: https://success.docker.com/article/multiple-docker-networks
To summarize, you can only attach one network using docker run, but you can use docker network connect <network> <container> to connect running containers to more networks after they're started.
The order in which you create networks, run docker-compose up, or run your various containers in your pipeline is up to you. You can create the networks inside the docker-compose.yml if you like, or use docker network create and import them into your docker-compose stack. It depends on how you're using this stack, and that will determine the order of operations here.
The guiding rule, probably obvious, is that the networks need to exist before you try to attach them to a container. The most straightforward pipeline might look like...
docker-compose up with all networks defined in the docker-compose.yml
for each app container:
docker run the container
docker network connect the right networks (see the sketch below)
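A minimal sketch of that pipeline, with hypothetical names (a compose project called mystack that defines frontend and backend networks, plus the app container from the question):

# compose prefixes its networks with the project name,
# e.g. mystack_frontend and mystack_backend; adjust to your real names
docker-compose up -d

# run the app container on its first network...
docker run --detach --name project1-one-frontend \
  --network mystack_frontend \
  project1-one-frontend:latest

# ...then attach any additional networks
docker network connect mystack_backend project1-one-frontend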
... would like to set up networks like proxy, frontend and backend. ... I think the backend should only be accessible to the frontend, and the frontend should only be accessible to the proxy.
Networks in docker don't talk to other docker networks, so I'm not sure if the above was in reference to networks or containers on those networks. What you can have is a container on multiple docker networks, and it can talk with services on either network.
The important part about designing a network layout with docker is that any two containers on the same network can communicate with each other and will find each other using DNS. Where people often mess this up is creating something like a proxy network for a reverse proxy, attaching multiple microservices to the proxy network and suddenly find that everything on that proxy network can find each other. So if you have multiple projects that need to be isolated from each other, they cannot exist on the same network.
In other words if app-a and app-b cannot talk to each other, but do need to talk to the shared proxy, then the shared proxy needs to be on multiple app specific networks, rather than each app being on the same shared proxy network.
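A minimal compose sketch of that layout, with invented names (app-a, app-b, and per-app networks):

services:
  proxy:
    image: jwilder/nginx-proxy   # any reverse proxy
    networks:
      - app-a-net
      - app-b-net
  app-a:
    image: some/app-a            # hypothetical image
    networks:
      - app-a-net
  app-b:
    image: some/app-b            # hypothetical image
    networks:
      - app-b-net

networks:
  app-a-net:
  app-b-net:

Here app-a and app-b can each reach the proxy, but not each other.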
This can get much more complicated depending on your architecture. E.g. one design that I've been tempted to use is to have each stack run its own reverse proxy, attached to the application's private network and to a shared proxy network, without publishing any ports. A global reverse proxy then publishes the port and talks to each stack-specific reverse proxy. The advantage there is that the global reverse proxy does not need to know all of the potential app networks in advance, while still allowing you to expose only a single port and not have microservices connecting to each other through the shared proxy network.
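A rough sketch of that two-tier design, again with invented names; treat it as an illustration rather than a drop-in config:

services:
  global-proxy:
    image: nginx                 # placeholder proxy image
    ports:
      - "443:443"                # the only published port
    networks:
      - shared-proxy
  stack-a-proxy:
    image: nginx                 # placeholder; publishes no ports
    networks:
      - shared-proxy             # reachable by the global proxy
      - stack-a-private          # reachable by stack-a services
  stack-a-app:
    image: some/app              # hypothetical microservice
    networks:
      - stack-a-private

networks:
  shared-proxy:
    external: true               # shared across stacks
  stack-a-private:

Only the two proxies share the shared-proxy network, so stack-a's microservices never see the other stacks.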
Related
I'm learning about SSL and encryption between docker containers for two particular scenarios. In both scenarios, a Traefik container is a reverse proxy between the external world and a NodeJS API container. The NodeJS API simply writes a record to the mongo database container. In both scenarios, Traefik will use SSL to encrypt any traffic between itself and the external world. I think traffic between Traefik and the NodeJS API will happen in plain text, because the NodeJS API will not use an SSL certificate. And I think the data between the NodeJS API and the database will also be in plain text.
Here is a docker-compose file that I have, which I think is described by my statements above.
version: '3'

services:
  traefik:
    image: traefik:v2.5.4
    command:
      - "--providers.docker"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedByDefault=false"
      - "--providers.file.directory=/etc/traefik/dynamic_conf"
      - "--providers.file.watch=true"
      - "--entrypoints.websecure.address=:3001"
    ports:
      - 80:80
      - 3001:3001
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./templates/ssl/:/certs/:ro
      - ./traefik.config.yml:/etc/traefik/dynamic_conf/conf.yml:ro
    networks:
      - mynet

  api:
    image: johnlai2004/swarm:latest
    environment:
      - "DB=${STACK_NAME}_db"
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.${STACK_NAME}_api.entrypoints=websecure"
        - "traefik.http.routers.${STACK_NAME}_api.tls=true"
        - "traefik.http.routers.${STACK_NAME}_api.rule=Host(`${APP_HOST}`)"
        - "traefik.http.services.${STACK_NAME}_api.loadbalancer.server.port=3000"
    depends_on:
      - db
    networks:
      - mynet

  db:
    image: mongo:4.0.3
    networks:
      - mynet

networks:
  mynet:
    external: true
Then I start things up with these commands:
docker swarm init;
docker network create --driver overlay mynet;
STACK_NAME=mystack1 APP_HOST=mystack1.example.com docker stack deploy -c docker-compose.yml mystack1;
Question 1: All the code above runs on one single host machine. Would many developers be concerned about the unencrypted traffic between Traefik and the NodeJS API, and the unencrypted traffic between the NodeJS API and the Mongo database? I assume the traffic in these two situations is happening within a virtual docker network that is not easily accessible to attackers from the outside world unless the attacker already has access to the host machine. Are these reasonable assumptions?
Question 2: Let's say I created a second host machine that is located in a different city. I run the docker swarm join so that this second machine becomes a worker node. Then I run the command docker service scale api=4 which creates four more instances of the api container, some of which will run on this second host machine. In this situation, will attackers have an easy time sniffing the traffic between Traefik and the API and sniffing the traffic between the API and the Database? And will attackers see the traffic in plain text?
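For context on the second scenario: Docker overlay networks support optional encryption at creation time. A hedged sketch (the flag is real; whether it fully addresses the setup above is exactly what the question asks):

# create the overlay network with IPSec encryption between swarm nodes
docker network create --driver overlay --opt encrypted mynet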
Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose Overriding. Have a docker-compose file for each microservice and another docker-compose file for the proxy. When the production deployment is done, all the docker-compose files would be merged to create only one (with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up). In this way, the proxy container, for example nginx, would have access to the microservices to be able to redirect to one or the other depending on the request.
Shared external network. Have a docker-compose file for each microservice and another docker-compose file for the proxy. First, an external network would have to be created to link the proxy container with the microservices: docker network create nginx_network. Then, in each docker-compose file, this network should be referenced in the necessary containers so that the proxy has visibility of the microservices and can thus use them in the configuration. An example is in the following link: https://stackoverflow.com/a/48081535/6112286.
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker. On the other hand, it doesn't require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.
There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'

services:
  proxy:
    build: ./
    networks:
      - microservice1
      - microservice2
    ports:
      - 80:80
      - 443:443

networks:
  microservice1:
    external:
      name: microservice1_default
  microservice2:
    external:
      name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
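For illustration, a minimal nginx config fragment under those assumptions (the listen port, server name and upstream port are invented):

# hypothetical fragment for the proxy image built above
server {
    listen 80;
    server_name service1.example.com;              # invented hostname

    location / {
        # container names resolve via Docker's embedded DNS
        proxy_pass http://microservice1_app_1:8080;   # port is an assumption
    }
}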
docker-compose is designed to orchestrate multiple containers in one single file. I do not know the content of your docker-compose files, but the right way is to write one single docker-compose.yml that could contain:
version: '3.7'

services:
  microservice1_app:
    image: ...
    volumes: ...
    networks:
      - service1_app
      - service1_db

  microservice1_db:
    image: ...
    volumes: ...
    networks:
      - service1_db

  microservice2_app:
    image: ...
    volumes: ...
    networks:
      - service2_app
      - service2_db

  microservice2_db:
    image: ...
    volumes: ...
    networks:
      - service2_db

  nginx:
    image: ...
    volumes: ...
    networks:
      - default
      - service1_app
      - service2_app

volumes:
  ...

networks:
  service1_app:
  service1_db:
  service2_app:
  service2_db:
  default:
    name: proxy_frontend
    driver: bridge
In this way, the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with an aliases subsection within a service's networks section.
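For example, a sketch of such an alias (the alias name is invented):

  microservice1_app:
    networks:
      service1_app:
        aliases:
          - service1.internal   # nginx could now reach the app by this name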
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), nginx is only able to see microservice1_app and microservice2_app, and nginx is reachable from outside of Docker (bridge mode).
I have a requirement to write a docker-compose.yml which needs to make one of the services use two networks: the default one, for communication with the other services, and an external bridge network for automatic self-discovery via nginx-proxy.
My docker-compose.yml looks like the below.
version: '2'

services:
  dns-management-frontend:
    image: ......
    depends_on:
      - dns-management-backend
    ports:
      - 80
    restart: always
    networks:
      - default
      - bridge

  dns-management-backend:
    image: ......
    depends_on:
      - db
      - redis
    restart: always
    networks:
      - default

  db:
    image: ......
    volumes:
      - ./mysql-data:/var/lib/mysql
    restart: always
    networks:
      - default

  redis:
    image: redis
    ports:
      - 6379
    restart: always
    networks:
      - default

networks:
  default:
  bridge:
    external:
      name: bridge
When I start it, it gives me a network-scoped alias is supported only for containers in user defined networks error. I had to remove the networks section from the service and, after it started, manually run docker network connect bridge <id_of_frontend_container> to make it work.
Any advice on how to configure multiple network in docker-compose? I also have read https://docs.docker.com/compose/networking/, but it is too simple.
The Docker network named bridge is special; most notably, it doesn't provide DNS-based service discovery.
For your proxy service, you should docker network create some other network, named anything other than bridge, and either docker network connect the existing container to it or restart the proxy with --net the_new_network_name. In the docker-compose.yml file, change the external: {name: ...} to the new network name.
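A minimal sketch of that change, using proxy_net as an invented network name:

# create a user-defined bridge network (this one gets DNS-based discovery)
docker network create proxy_net

# attach the already-running proxy container to it
docker network connect proxy_net nginx-proxy   # container name is an assumption

and in the compose file:

networks:
  default:
  bridge:              # consider renaming this key too
    external:
      name: proxy_net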
Any advice on how to configure multiple network in docker-compose?
As you note, Docker Compose (and for that matter Docker proper) doesn't support especially involved network topologies. At the half-dozen-containers scale where Compose works well, you don't really need an involved network topology. Use the default network that Docker Compose provides for you, and don't bother manually configuring networks: unless it's actually necessary (as the external proxy network is in your question).
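For instance, a minimal compose file that relies entirely on the default network (image names are placeholders):

version: '2'
services:
  app:
    image: my/app       # hypothetical application image
  db:
    image: mysql
# no networks: section needed; Compose creates a default network and
# "app" can reach "db" by that service name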
Note that you cannot, for now, mix the default bridge with other networks in Compose; the issue is still open ...
I'm running a mongo instance with docker-compose and traefik.
myapp-mongo:
  build: ../images/myapp-mongo
  restart: always
  ports:
    - "27017:27017"
  labels:
    - "traefik.ports=27017,27018"
    - "traefik.backend=myapp-mongo"
    - "traefik.frontend.rule=Host:myapp-mongo.docker.localhost"
  networks:
    - development
  environment:
    - MONGO_USER=${MONGO_USER}
    - MONGO_PASSWD=${MONGO_PASSWD}
    - MONGO_AUTHDB=${MONGO_AUTHDB}
Mongo is running fine and I can connect using 127.0.0.1 from my Mac.
The problem is that I can't connect using hostname myapp-mongo.docker.localhost. It only works using IP 127.0.0.1.
Trying to ping the IP 127.0.0.1 responds ok, but trying to ping the hostname doesn't work.
I've already added 127.0.0.1 proxy.docker.localhost into /etc/hosts to get traefik working.
All other web apps have hostnames working fine, e.g. myapp.docker.localhost. This problem only happens with this mongodb container.
Probably because Træfik is an HTTP proxy and so will only support HTTP/HTTPS connections.
I believe @bpatel is right (see the comment I left on his answer with a link to the github conversation): Traefik, at the time of writing, only supports HTTP/HTTPS.
Solution using native docker networks
However, you can get around this issue! Since you are using docker, you can work around it by using the container name in your code (assuming mongo and your mongo-accessing code are both running in containers on a shared docker network; this will be the case if the containers are spun up with docker-compose). Run the following to see if your containers are linked up correctly:
run docker ps to get the names of your running containers (under the NAMES column)
run docker network ls to see your network names
run docker network inspect <target_network_name> to verify your containers from step 1 are on the same network.
I run docker-compose from three separate compose files, so you should be able to cover most cases with the following (apologies for any syntax errors; the following are stripped-down code examples):
Entire docker-compose file that starts up traefik (under directory name 'proxy')
version: '2'

services:
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml

networks:
  webgateway:
    driver: bridge
snippet from my docker-compose file that spins up mongo
version: '2'

services:
  database:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - web

networks:
  web:
    external:
      name: proxy_webgateway
snippet from docker-compose that has mongo accessing code
version: '2'

services:
  topicOntologyBuilder:
    image: topic-ontology-builder
    labels:
      - "traefik.backend=topicOntologyBuilder"
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:topic-ontology.docker.localhost"
    networks:
      - web
    volumes:
      - ./:/home

networks:
  web:
    external:
      name: proxy_webgateway
Connection in Code
Not certain what language you're using; this is what the js code looked like for me to connect to mongo (inside that topicOntologyBuilder container, while using traefik as the proxy). Again, this works because we're making the most of docker networks:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://<MONGO_CONTAINER_NAME>/<DB_NAME>', function(err, db) {
  // insert code here to interact with mongo
});
Why this works
This works because docker does some clever DNS work within the containers: each container can find the IP of any other container on its networks by looking it up by container name.
Extra intel
If your containers are on separate computers/VMs, you'll probably want to play around with a service discovery tool (Consul plays well with Traefik) or do something fancy with a docker network overlay, which is specific to containers in a cluster.
If using raw docker networks, you can assign container aliases, as sketched below (this doesn't work with Traefik though, or at least it didn't a couple of months back).
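For reference, a sketch of assigning such an alias with the raw docker CLI (network and container names are invented):

# give the container an extra DNS name on that network
docker network connect --alias mongo-alias my-network my-mongo-container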
I have a Redis - Elasticsearch - Logstash - Kibana stack in docker which I am orchestrating using docker compose.
Redis will receive the logs from a remote location, will forward them to Logstash, and then the customary Elasticsearch, Kibana.
In the docker-compose.yml, I am confused about the order of "links".
Elasticsearch links to no one, while logstash links to both redis and elasticsearch:
elasticsearch:

redis:

logstash:
  links:
    - elasticsearch
    - redis

kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say, elasticsearch is linked to logstash?
Instead of using the legacy container linking method, you could use Docker user-defined networks. Basically you can define a network for your services and then indicate in the docker-compose file that you want the container to run on that network. If your containers all run on the same network, they can access each other via their container names (DNS records are added automatically).
1) Create User Defined Network
docker network create pocnet
2) Update docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along the lines of this :
version: '2'

services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet

  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet

  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet

  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet

networks:
  pocnet:
    external: true
3) Start Services
docker-compose up
note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container.
your_Machine:/ docker exec -it kibana bash
kibana@123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
there are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities to the default bridge network:
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above 4 functionalities with non-default user-defined networks, without any additional config docker network provides:
automatic name resolution using DNS
an automatic secured, isolated environment for the containers in a network
the ability to dynamically attach and detach to multiple networks
support for the --link option to provide a name alias for the linked container
In your case, automatic DNS on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network. You just have to put your ELK stack + redis containers in the ELK network and remove the link directives from the compose file.
Your order looks fine to me. If you have any problem regarding the order, or with waiting for services in dependent containers to come up, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get wait-for-it script from here.