I have multiple stacks running in my swarm. The stacks are created from compose files. For illustrative purposes let's say I have the following:
Stack 1
- Nginx container
Stack 2
- Web App Container
- Database Container
- Cache Container
- SMTP Container
Stack 3
- Web App Container
- Database Container
When I deploy these stacks, Docker creates an independent overlay network for each one, so I end up with 3 overlay networks, one per stack.
What I would like to do is give the Web App containers in Stacks 2 and 3 access to the Stack 1 overlay network, so that I can proxy_pass incoming connections to them without having to expose their ports to the internet.
The Docker website only seems to have an explanation for the legacy swarm networking: https://docs.docker.com/compose/networking/
That page says that for stack networking you should refer to this page: https://docs.docker.com/engine/reference/commandline/stack_deploy/
I cannot see any information about networking on that page, so I am a bit stuck.
Inside the web app service definition I have tried adding:
networks:
  - nginx_default
This is the network name as shown by docker network ls, but I get an error message that this network is not defined.
What is the right way to get my web app containers and my nginx container on the same private network?
My issue was that I needed to declare the network as external. Here is a working sample config for Stack 2:
version: "3.8"

networks:
  stack2:
  nginx:
    external: true

services:
  db:
    networks:
      stack2:
    image: mysql:5.7
  webapp:
    networks:
      stack2:
      nginx:
    image: webapp
This connects webapp to the nginx network, but the database won't be exposed to it.
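For completeness, here is a minimal sketch of the Stack 1 side, assuming it is the stack that owns the shared network. The explicit network name nginx and the service name proxy are illustrative; the key point is that giving the network a fixed name lets the other stacks reference it with external: true (name: requires compose file version 3.5 or later):

version: "3.8"

networks:
  nginx:
    driver: overlay
    name: nginx   # fixed name, so other stacks can reference it as "nginx"

services:
  proxy:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - nginx

Once the web apps of Stacks 2 and 3 join this network, nginx can proxy_pass to them by their swarm service DNS names (for a stack deployed as stack2, typically stack2_webapp).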
Related
I scale a container:

ports:
  - "8086-8090:8085"

But what if I need it only inside my bridge network?
In other words, does something like this exist?

expose:
  - "8086-8090:8085"
UPDATED:
I have a master container:
- exposed to the host network
- acts as a load balancer
I want to have N slaves of another container, reachable on their assigned ports inside the Docker network (but not visible on the host network).
Connections between containers (over the Docker-internal bridge network) don't need ports: at all, and you can just remove that block. You only need ports: to accept connections from outside of Docker. If the process inside the container is listening on port 8085 then connections between containers will always use port 8085, regardless of what ports: mappings you have or if there is one at all.
expose: in a Compose file does almost nothing at all. You never need to include it, and it's always safe to delete it.
(This wasn't the case in first-generation Docker networking. However, Compose files v2 and v3 always provide what the Docker documentation otherwise calls a "user-defined bridge network", which doesn't use "exposed ports" in any way. I'm not totally clear why the archaic expose: and links: options were kept.)
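As a concrete sketch of the master/slave setup described above (master-image and slave-image are hypothetical names, and I'm assuming the slaves listen on 8085 internally): only the load balancer publishes a host port, and it reaches the slaves over the Compose network by service name.

version: "3.8"
services:
  master:
    image: master-image      # hypothetical load-balancer image
    ports:
      - "8080:8080"          # only the master is reachable from the host
  slave:
    image: slave-image       # hypothetical worker image
    # no ports: needed; other containers still reach it on slave:8085

Inside the network the master can forward traffic to slave:8085, and docker-compose up -d --scale slave=3 keeps working because nothing tries to bind a fixed host port for the slaves.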
No extra changes needed!
Because of Docker's internal DNS, the scaled instances are all 'hidden' behind the same service name and port:
version: "3.8"
services:
  web:
    image: "nginx:latest"
    ports:
      - "8080:8080"
then

docker-compose up -d --scale web=3

and other containers on the same network that call web:8080 will have their requests spread across all instances, because Docker's DNS resolves the service name web to each replica in round-robin fashion. (Note that a fixed host-port mapping like "8080:8080" can only be bound by one replica when scaling with plain docker-compose; load-balancing a single published port across replicas is what swarm mode's routing mesh does.)
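A quick way to see what the DNS is doing, assuming the image ships getent (the stock nginx image does):

docker-compose up -d --scale web=3
docker-compose exec web getent ahosts web
# should list an entry for every replica's IP - that's the pool the DNS rotates over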
I have a setup in which service_main streams logs to a socket at 127.0.0.1:6000.
A simplified docker-compose.yml looks like this:
version: "3"

networks:
  some_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 100.100.100.0/24
          gateway: 100.100.100.1

services:
  service_main:
    image: someimage1
    networks:
      some_network:
        ipv4_address: 100.100.100.2
  service_listener:
    image: someimage2
    networks:
      some_network:
        ipv4_address: 100.100.100.21
    entrypoint: some_app
    command: listen 100.100.100.2:6000
My assumption was that this SHOULD work, since both containers belong to the same network.
However, I get an error (from service_listener) that 100.100.100.2:6000 is not available
(which I interpret as the service trying to listen on some public socket instead of one inside the network).
I tried different things, without deep understanding: exposing/publishing port 6000 on service_main, or setting the log socket to 100.100.100.21:6000 and having service_listener listen on 127.0.0.1:6000 (and publishing that port as well). But nothing works, and apparently I don't understand why.
In the same network a similar approach works fine with PowerDNS and PostgreSQL - I tell PowerDNS in its config that the DB host is on 100.100.100.x and it works.
It all depends on what you want to do.
If you want to access service_main from outside, for example from the host the containers are running on, there are two ways to fix this:
1. Publish the port. This is done with the ports: option:
services:
  service_main:
    image: someimage1
    ports:
      - "6000:4000"
In this case, port 4000 is the port the application inside the someimage1 container is listening on, and it is published on the host as port 6000.
2. Use a proxy server which talks to the IP address of the Docker container.
But then you need to make sure that the thing you have running inside the Docker container (someimage1) is indeed listening on port 6000.
Proxy server
The nice thing about the proxy-server method is that you can run nginx inside another Docker container and put all the deployment and networking stuff in there. (Shameless self-promotion for an example I created of a proxy server in Docker.)
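A rough sketch of that approach, assuming service_main really does listen on 6000 inside its container (the host port 8080 and the config path are illustrative): only the proxy publishes a port, and it forwards to service_main by name over the shared network.

services:
  service_main:
    image: someimage1
    networks:
      - some_network
  proxy:
    image: nginx:latest
    networks:
      - some_network
    ports:
      - "8080:80"
    volumes:
      # hypothetical config whose location block contains: proxy_pass http://service_main:6000;
      - ./proxy/default.conf:/etc/nginx/conf.d/default.conf:ro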
Non-Routable Networks
And I would always use a private, non-routable range for internal networks, not 100.100.100.* - that block is not part of the RFC 1918 private address space. The usual choices are subnets of 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16.
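For example, the network definition from the question could keep its layout but switch to a private subnet (172.28.0.0/24 here is just an arbitrary RFC 1918 example; pick one that doesn't collide with your LAN or VPN):

networks:
  some_network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/24
          gateway: 172.28.0.1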
I assume that when I publish/map a port, I make it available not only to the Docker Compose network but also to external callers.
My problem was solved by the following steps:
In the configuration of service_main I set it to stream logs to the socket 100.100.100.21:6000.
In service_listener I told the app inside to listen on 0.0.0.0:6000:
service_listener:
  image: someimage2
  networks:
    some_network:
      ipv4_address: 100.100.100.21
  entrypoint: some_app
  command: listen 0.0.0.0:6000
It helped.
I'm not quite sure about the correct usage of docker networks.
I'm running a (single-host) reverse proxy and the containers for the application itself, but I would like to set up networks like proxy, frontend and backend - the last one per project (e.g. for project1), assuming there could be multiple projects in the end.
But I'm not even sure if this structure is the way it should be done. I think the backend should only be accessible to the frontend, and the frontend should be accessible to the proxy.
So this is my current working structure with only one network (bridge), which doesn't make sense:
Reverse proxy (network: reverse-proxy):
- jwilder/nginx-proxy
- jrcs/letsencrypt-nginx-proxy-companion
Database:
- mongo:3.6.2
Project 1:
- one/frontend
- one/backend
- two/frontend
- two/backend
So my first docker-compose looks like this:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"

  mongodb:
    container_name: mongodb
    image: mongo:3.6.2
    networks:
      - reverse-proxy

volumes:
  html:

networks:
  reverse-proxy:
    external:
      name: reverse-proxy
That means I had to create the reverse-proxy network beforehand. I'm not sure if this is correct so far.
The project applications - frontend containers and backend containers - are created by my CI using docker commands (not docker compose):
docker run \
  --name project1-one-frontend \
  --network reverse-proxy \
  --detach \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail#my-server.com \
  project1-one-frontend:latest
How should I split this into useful networks?
TL;DR: You can attach multiple networks to a given container, which lets you isolate traffic to a great degree.
useful networks
Point of context, I'm inferring from the question that "useful" means there's some degree of isolation between services.
I think the backend should only be accessable for the frontend and the frontend should be accessable for the proxy.
This is pretty simple with docker-compose. Just specify the networks you want at the top level, just like you've done for reverse-proxy:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  frontend:
  backend:
Then something like this:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      ...

  frontend1:
    image: some/image
    networks:
      - reverse-proxy
      - backend

  backend1:
    image: some/otherimage
    networks:
      - backend

  backend2:
    image: some/otherimage
    networks:
      - backend

  ...
Set up like this, only frontend1 can reach backend1 and backend2. I know this isn't an option, since you said you're running the application containers (frontends and backends) via docker run. But I think it's a good illustration of how to achieve roughly what you're after within Docker's networking.
So how can you do what's illustrated in docker-compose.yml above? I found this: https://success.docker.com/article/multiple-docker-networks
To summarize: you can only attach one network using docker run, but you can use docker network connect <network> <container> to connect running containers to more networks after they're started.
The order in which you create networks, run docker-compose up, or run your various containers in your pipeline is up to you. You can create the networks inside the docker-compose.yml if you like, or use docker network create and import them into your docker-compose stack. It depends on how you're using this stack, and that will determine the order of operations here.
The guiding rule, probably obvious, is that the networks need to exist before you try to attach them to a container. The most straightforward pipeline might look like the following (a concrete sketch comes after the list):
- docker-compose up with all networks defined in the docker-compose.yml
- for each app container:
  - docker run the container
  - docker network connect the right networks
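As a sketch of that pipeline using the names from the question (project1-backend is a hypothetical per-project network; reverse-proxy and project1-one-frontend come from the question):

# networks are created by docker-compose up, or manually:
docker network create project1-backend

# start the app container attached to its first network...
docker run --detach \
  --name project1-one-frontend \
  --network reverse-proxy \
  -e VIRTUAL_HOST=project1.my-server.com \
  project1-one-frontend:latest

# ...then attach it to the additional network
docker network connect project1-backend project1-one-frontend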
... would like to set up networks like proxy, frontend and backend. ... I think the backend should only be accessable for the frontend and the frontend should be accessable for the proxy.
Networks in docker don't talk to other docker networks, so I'm not sure if the above was in reference to networks or containers on those networks. What you can have is a container on multiple docker networks, and it can talk with services on either network.
The important part about designing a network layout with docker is that any two containers on the same network can communicate with each other and will find each other using DNS. Where people often mess this up is creating something like a proxy network for a reverse proxy, attaching multiple microservices to the proxy network and suddenly find that everything on that proxy network can find each other. So if you have multiple projects that need to be isolated from each other, they cannot exist on the same network.
In other words, if app-a and app-b cannot talk to each other but both need to talk to the shared proxy, then the shared proxy needs to be on multiple app-specific networks, rather than each app being on the same shared proxy network.
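A minimal sketch of that layout (the app and network names are placeholders, and the nginx-proxy volumes are omitted for brevity): the proxy joins both per-app networks, while app-a and app-b share no network with each other.

networks:
  app-a-net:
  app-b-net:

services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    networks:
      - app-a-net
      - app-b-net
  app-a:
    image: some/app-a
    networks:
      - app-a-net
  app-b:
    image: some/app-b
    networks:
      - app-b-net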
This can get much more complicated depending on your architecture. E.g. one design that I've been tempted to use is to have each stack run its own reverse proxy, attached to the application's private network and to a shared proxy network, without publishing any ports. A global reverse proxy then publishes the port and talks to each stack-specific reverse proxy. The advantage there is that the global reverse proxy does not need to know all of the potential app networks in advance, while still allowing you to only expose a single port and not have microservices connecting to each other through the shared proxy network.
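A compact sketch of that two-tier idea (all names are illustrative; the shared network would be created once with docker network create proxy-net, and the global reverse proxy lives in its own compose file, joins only proxy-net, and is the only service that publishes 80/443):

# per-stack compose file: a private network plus the shared, external proxy-net
networks:
  proxy-net:
    external: true
  stack-internal:

services:
  stack-proxy:
    image: nginx:latest        # stack-local reverse proxy, no published ports
    networks:
      - stack-internal         # reaches this stack's services
      - proxy-net              # reachable by the global proxy
  app:
    image: some/app            # only on the stack's private network
    networks:
      - stack-internal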
I am currently using docker-compose to set up a series of microservices, which I want to link to a common error-logging service (created outside the compose project).
I am creating the error-handling service outside of the compose file:
docker run -d --name errorHandler
Then I run the compose (summarized):
version: '2'
services:
  my-service:
    build: ../my-service
    external_links:
      - errorHandler
I am using the hostname alias ('errorHandler') within my application but can't seem to get them connected. How do I check whether the service is even discoverable within the compose network?
Rather than links, use a shared docker network. Place the "errorHandler" container on a network in Docker, using something like docker network create errorNet and docker network connect errorNet errorHandler. Then define that network in your compose file for "my-service" with:
version: '2'

networks:
  errorNet:
    external: true

services:
  my-service:
    build: ../my-service
    networks:
      - errorNet
This uses docker's internal DNS to connect containers together.
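To answer the "how do I check" part, two quick ways to verify the discovery, assuming the names above (errorNet, errorHandler, my-service) and that the my-service image ships getent:

# list the containers attached to the shared network
docker network inspect errorNet --format '{{range .Containers}}{{.Name}} {{end}}'

# resolve the alias from inside my-service
docker-compose exec my-service getent hosts errorHandler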
I have 3 Docker containers: one running nginx, another running php, and one running serf by HashiCorp.
I want to use PHP's exec function to call the serf binary to fire off a serf event.
In my docker-compose file I have written:
version: '2'
services:
  web:
    restart: always
    image: `alias`/nginx-pagespeed:1.11.4
    ports:
      - 80
    volumes:
      - ./web:/var/www/html
      - ./conf/nginx/default.conf:/etc/nginx/conf.d/default.conf
    links:
      - php
    environment:
      - SERVICE_NAME=${DOMAIN}
      - SERVICE_TAGS=web
  php:
    restart: always
    image: `alias`/php-fpm:7.0.11
    links:
      - serf
    external_links:
      - mysql
    expose:
      - "9000"
    volumes:
      - ./web:/var/www/html
      - ./projects:/var/www/projects
      - ./conf/php:/usr/local/etc/php/conf.d
  serf:
    restart: always
    dns: 172.17.0.1
    image: `alias`/serf
    container_name: serf
    ports:
      - 7496:7496
      - 7496:7496/udp
    command: agent -node=${SERF_NODE} -advertise=${PRIVATE_IP}:7496 -bind=0.0.0.0:7496
I was imagining that I would do something like in php exec('serf serf event "test"') where serf is the hostname of the container.
Or perhaps someone can give an idea of how to get something like this set up using alternative methods?
The "linked" containers allow network-level discovery between containers. With docker networks, the linked feature is now considered legacy and isn't really recommended anymore. To run a command in another container, you'd need to either open up some network API on the target container (e.g. a REST-based HTTP request to the target container), or expose the host to the source container so it can run docker exec against the target container.
The latter requires that you install the docker client in your source container and then expose the daemon to it, either via an open port on the host or by mounting /var/run/docker.sock into the container. Since this allows the container to gain root access on the host, it's not a recommended practice for anything other than administrative containers where you would otherwise trust the code running directly on the host.
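If you accept that trade-off anyway, the wiring is just a bind mount plus the docker CLI inside the image (the assumption here is that the php image is rebuilt to also include the docker CLI):

php:
  image: `alias`/php-fpm:7.0.11    # assumed to also contain the docker CLI
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock

# then, from PHP:
#   exec('docker exec serf serf event "test"');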
The only other option I can think of is to remove the isolation between the containers with a shared volume.
An ideal solution is to use a message queuing service that allows multiple workers to spin up and process requests at their own pace. The source container sends a request to the queue, and the target container listens for requests when it's running. This also allows the system to continue even when workers are currently down, activities simply queue up.
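As a sketch of that queue-based layout (Redis is just one possible broker, picked for illustration, and the list name serf-events is made up): php pushes event requests onto a queue, and a small worker loop inside the serf container pops them and runs the binary locally.

services:
  queue:
    image: redis:alpine
  php:
    image: `alias`/php-fpm:7.0.11
    # PHP side: push a job, e.g. LPUSH serf-events "test", via any Redis client
  serf:
    image: `alias`/serf
    # worker side: BRPOP serf-events, then run locally: serf event "test"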