Docker container communication and security

Let's say I have 2 nodes (1 manager and 1 worker) and the following compose file:
services:
  nginx:
    image: nginx
    ports:
      - 443:443
    deploy:
      placement:
        constraints:
          - node.role==manager
    networks:
      - somenet
  app:
    image: someapp
    deploy:
      mode: global
    networks:
      - somenet
networks:
  somenet:
    driver: overlay
    driver_opts:
      encrypted: "true"
NGINX exposes the app to the outside world over HTTPS and, acting as a reverse proxy, forwards requests to one of the app replicas.
Should the communication between NGINX and the app also go over HTTPS/SSL?
Is there any way to sniff the packets inside this overlay network?
Is it possible to gain access to any of the containers, besides compromising the machine itself?
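On the sniffing point, one way to see for yourself what actually crosses the wire is to capture the inter-node traffic on one of the hosts. This is only a rough sketch, assuming default ports; the interface name and the worker's address below are placeholders. Unencrypted overlay traffic travels as VXLAN on UDP 4789, while an encrypted overlay wraps it in IPsec ESP (IP protocol 50):
# On the manager, watch traffic towards the worker (eth0 and 10.0.1.2 are placeholders)
tcpdump -ni eth0 host 10.0.1.2 and udp port 4789   # readable VXLAN payloads if the overlay is NOT encrypted
tcpdump -ni eth0 host 10.0.1.2 and ip proto 50     # only ESP packets when encrypted: "true" is in effect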

Related

Docker Swarm - Requests fail to reach a service on a different node

I've set up a Docker Swarm with Traefik v2 as the reverse proxy, and have been able to access the dashboard with no issues.
I am having an issue where I cannot get a response from any service that runs on a different node from the one Traefik is running on. I've been testing and researching, and I presume it's a network issue of some kind.
I've done some quick testing with a bare nginx image and was able to deploy another stack and get a response if it ran on the same node. Other stacks on the swarm which deploy across multiple nodes (but not including the Traefik node) are able to communicate with each other without issues.
Here is the test stack to provide some context of what I was using.
version: '3.8'
services:
  test:
    image: nginx:latest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role==worker
      labels:
        - "traefik.enable=true"
        - "traefik.docker.network=uccser-dev-public"
        - "traefik.http.services.test.loadbalancer.server.port=80"
        - "traefik.http.routers.test.service=test"
        - "traefik.http.routers.test.rule=Host(`TEST DOMAIN`) && PathPrefix(`/test`)"
        - "traefik.http.routers.test.entryPoints=web"
    networks:
      - uccser-dev-public
networks:
  uccser-dev-public:
    external: true
The uccser-dev-public network is an overlay network across all nodes, with no encryption.
If I added a constraint to pin the service to the Traefik node, the requests worked with no issues. However, if I switched it to a different node, I got the Traefik 404 page.
The Traefik dashboard shows that it sees the service.
However the access logs show the following:
proxy_traefik.1.6fbx58k4n3fj@SWARM_NODE | IP_ADDRESS - - [21/Jul/2021:09:03:02 +0000] "GET / HTTP/2.0" - - "-" "-" 1430 "-" "-" 0ms
It's just blank, and I don't know where to proceed from here. The normal log shows no errors that I can see.
Traefik stack file:
version: '3.8'

x-default-opts:
  &default-opts
  logging:
    options:
      max-size: '1m'
      max-file: '3'

services:
  # Custom proxy to secure docker socket for Traefik
  docker-socket:
    <<: *default-opts
    image: tecnativa/docker-socket-proxy
    networks:
      - traefik-docker
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      NETWORKS: 1
      SERVICES: 1
      SWARM: 1
      TASKS: 1
    deploy:
      placement:
        constraints:
          - node.role == manager

  # Reverse proxy for handling requests
  traefik:
    <<: *default-opts
    image: traefik:2.4.11
    networks:
      - uccser-dev-public
      - traefik-docker
    volumes:
      - traefik-public-certificates:/etc/traefik/acme/
    ports:
      - target: 80 # HTTP
        published: 80
        protocol: tcp
        mode: host
      - target: 443 # HTTPS
        published: 443
        protocol: tcp
        mode: host
    command:
      # Docker
      - --providers.docker
      - --providers.docker.swarmmode
      - --providers.docker.endpoint=tcp://docker-socket:2375
      - --providers.docker.exposedByDefault=false
      - --providers.docker.network=uccser-dev-public
      - --providers.docker.watch
      - --api
      - --api.dashboard
      - --entryPoints.web.address=:80
      - --entryPoints.websecure.address=:443
      - --log.level=DEBUG
      - --global.sendAnonymousUsage=false
    deploy:
      placement:
        constraints:
          - node.role==worker
      # Dynamic Configuration
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.dashboard.rule=Host(`SWARM_NODE`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))"
        - "traefik.http.routers.dashboard.service=api@internal"
        - "traefik.http.services.dummy-svc.loadbalancer.server.port=9999" # Dummy service for Swarm port detection. The port can be any valid integer value.

volumes:
  traefik-public-certificates: {}

networks:
  # This network is used by other services
  # to connect to the proxy.
  uccser-dev-public:
    external: true
  # This network is used for Traefik to talk to
  # the Docker socket.
  traefik-docker:
    driver: overlay
    driver_opts:
      encrypted: 'true'
Any ideas?
Further testing showed other services were working on different nodes, so I figured it must be an issue with my application. It turns out my Django application still had a bunch of HTTPS-related settings configured for its previous hosting location. As the required settings weren't satisfied, it denied the requests before they were processed. I also needed to lower the logging level for gunicorn (WSGI) to see more information.
In summary, Traefik and Swarm were fine.
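For reference, the kind of change involved here is mostly about running the WSGI server with more verbose logging and revisiting Django's proxy/HTTPS settings; the command below is a generic sketch (module name and port are placeholders, not taken from the question):
# Typical Django settings that silently reject proxied requests are ALLOWED_HOSTS,
# SECURE_SSL_REDIRECT and SECURE_PROXY_SSL_HEADER (which of them applied here is not shown above).
gunicorn --log-level debug --bind 0.0.0.0:8000 myproject.wsgi:application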
Another reason for this can be that the Docker Swarm ports haven't been opened on all of the nodes. If you're using UFW, that means running the following on every machine participating in the swarm:
ufw allow 2377/tcp
ufw allow 7946/tcp
ufw allow 7946/udp
ufw allow 4789/udp
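A quick way to confirm the rules took effect (OTHER_NODE_IP stands for another swarm member; nc can only probe the TCP ports this way):
sudo ufw status | grep -E '2377|7946|4789'
nc -zv OTHER_NODE_IP 2377
nc -zv OTHER_NODE_IP 7946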

docker nginx reverse proxy 503 Service Temporarily Unavailable

I want to use nginx as a reverse proxy for remote access to my home automation setup.
My infrastructure yaml looks like follows:
# /infrastructure/docker-compose.yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    container_name: proxy
    networks:
      - raspberry_network
    ports:
      - 80:80
      - 443:443
    environment:
      - ENABLE_IPV6=true
      - DEFAULT_HOST=${RASPBERRY_IP}
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d
      - ./proxy/vhost.d:/etc/nginx/vhost.d
      - ./proxy/html:/usr/share/nginx/html
      - ./proxy/certs:/etc/nginx/certs
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: always
networks:
  raspberry_network:
My yaml containing the app configuration looks like this:
# /apps/docker-compose.yaml
version: '3'
services:
  homeassistant:
    container_name: home-assistant
    image: homeassistant/raspberrypi4-homeassistant:stable
    volumes:
      - ./homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
    environment:
      - 'TZ=Europe/Berlin'
      - 'VIRTUAL_HOST=${HOMEASSISTANT_VIRTUAL_HOST}'
      - 'VIRTUAL_PORT=8123'
    deploy:
      resources:
        limits:
          memory: 250M
    restart: unless-stopped
    networks:
      - infrastructure_raspberry_network
    ports:
      - '8123:8123'
networks:
  infrastructure_raspberry_network:
    external: true
Via Portainer I validated that both containers are connected to the same network. However, when accessing the local IP of my Raspberry Pi (192.168.0.10), I receive "503 Service Temporarily Unavailable".
And when I try accessing my app via the virtual host domain xxx.xxx.de, it doesn't work either.
Any idea what the issue might be? Or any ideas how to further debug this?
You need to specify the correct VIRTUAL_HOST in the backend's environment variables and make sure that the containers are on the same network (or docker bridge network).
Make sure that any containers that specify VIRTUAL_HOST are running before the nginx-proxy container starts. With docker-compose, this can be achieved by adding them to the depends_on config of the nginx-proxy container.
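If it still returns 503 after that, a minimal way to check what nginx-proxy actually picked up (names taken from the compose files above) is:
# Is home-assistant really attached to the proxy's network?
docker inspect -f '{{json .NetworkSettings.Networks}}' home-assistant
# nginx-proxy renders its virtual hosts into this file; the VIRTUAL_HOST should appear as an upstream
docker exec proxy cat /etc/nginx/conf.d/default.conf
docker logs proxy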

Traefik Docker Swarm Unifi setup is not reachable from domain

I have been trying to set up Docker Swarm with Traefik v2.x for some time and have searched far and wide on Google, but I still cannot connect to my reverse proxy from my outside domain.
My setup is as following:
Hardware (from outer to inner):
Technicolor MediaAccess TG799vac Xtream (modem)
|
Unifi Security Gateway (Unifi Controller is a Raspberry Pi)
|
x86_64 server where my (currently) single docker swarm node is running
Both the domain and the wildcard domain are pointing at my system, and if I run a single container with port 80 exposed, it is reachable from the domain. As soon as I set it up with Traefik I can't reach my containers from outside, but my test container can be reached with curl commands from inside my network, even if I curl the USG.
On the server I have installed Docker + Docker Swarm and running the following 2 stacks:
version: '3'
services:
  reverse-proxy:
    image: traefik:v2.3.4
    command:
      - "--providers.docker.endpoint=unix:///var/run/docker.sock"
      - "--providers.docker.swarmMode=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=traefik-public"
      - "--entrypoints.web.address=:80"
    ports:
      - 80:80
      - 8080:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.role == manager
networks:
  traefik-public:
    external: true
and
version: '3'
services:
  helloworld:
    image: nginx
    networks:
      - traefik-public
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.helloworld.rule=Host(`test.mydomain.com`)"
        - "traefik.http.routers.helloworld.entrypoints=web"
        - "traefik.http.services.helloworld.loadbalancer.server.port=80"
networks:
  traefik-public:
    external: true
A little update: it is possible to access a regular container with port 80 exposed on my domain, but as soon as I spin the container up with Docker Swarm, it is no longer exposed to the internet.
The network is created as follows (I also used it for the regular container):
docker network create -d overlay --attachable test
and the yml:
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 80:80
    networks:
      - test
networks:
  test:
    external: true
So the above does not work but the following is visible on my domain from the outside:
docker run -d -p 80:80 nginx
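For anyone debugging a similar setup, a few checks can help separate a routing-mesh problem from a Traefik routing problem (run on the swarm node; the Host header is the domain from the labels above):
docker service ls                                      # the reverse-proxy service should list 80->80/tcp under PORTS
ss -tlnp | grep ':80 '                                 # with the routing mesh, the listener belongs to dockerd, not the container
curl -H 'Host: test.mydomain.com' http://127.0.0.1/    # exercises the Traefik router rule without DNS or the USG in the path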

How to use zabbix-web-nginx-mysql with existing nginx container?

I am trying to use Docker on my Debian server. There are several sites using the Django framework. Every project runs in its own container with gunicorn, a single nginx container works as a reverse proxy, and the data is stored in a mariadb container. Everything works correctly. It is necessary to add the Zabbix monitoring system to the server. So I use the zabbix-server-mysql image as the Zabbix backend and the zabbix-web-nginx-mysql image as the frontend. The backend runs successfully; the frontend fails with errors such as "can't bind to 0.0.0.0:80: port is already allocated", and nginx refuses connections to the domains. As I understand it, zabbix-web-nginx-mysql creates another nginx container, which causes the problems. Is there a right way to use the Zabbix images with an existing nginx container?
I have an nginx reverse proxy installed on the host, which I use to proxy requests into the containers. I have a working configuration for dockerized Zabbix with the following setup (I have omitted the environment variables).
Port 80 of the Zabbix web application is published on another host port, which is the one set in the nginx proxy_pass. Here is the configuration:
version: '2'
services:
  zabbix-server4:
    container_name: zabbix-server4
    image: zabbix/zabbix-server-mysql:alpine-4.0.5
    user: root
    networks:
      zbx_net:
        aliases:
          - zabbix-server4
          - zabbix-server4-mysql
        ipv4_address: 172.16.238.5
  zabbix-web4:
    container_name: zabbix-web4
    image: zabbix/zabbix-web-nginx-mysql:alpine-4.0.5
    ports:
      - 127.0.0.1:11011:80
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-web4
          - zabbix-web4-nginx-alpine
          - zabbix-web4-nginx-mysql
        ipv4_address: 172.16.238.10
  zabbix-agent4:
    container_name: zabbix-agent4
    image: zabbix/zabbix-agent:alpine-4.0.5
    links:
      - zabbix-server4
    networks:
      zbx_net:
        aliases:
          - zabbix-agent4
        ipv4_address: 172.16.238.15
networks:
  zbx_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24
          gateway: 172.16.238.1
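The host-level nginx side is not shown in the answer; a minimal server block for it could look like the following sketch, forwarding a public vhost to the port published on 127.0.0.1:11011 (the server_name is a placeholder):
server {
    listen 80;
    server_name zabbix.example.com;   # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:11011;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}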

Exposing a Docker database service only on the internal network with Traefik

Let's say I defined two services, "frontend" and "db", in my docker-compose.yml which are deployed to a Docker swarm, i.e. they may also run in different stacks. With this setup Traefik automatically generates the frontend and backend for each stack, which is fine.
Now I have another Docker container, running temporarily in a Jenkins pipeline, which shall be able to access the db service in a specific stack. My first idea was to expose the db service by adding it to the cluster-global-net network so that Traefik can generate a frontend route to the backend. This basically works.
But I'd like to hide the database service from "the public" while still being able to connect another Docker container to it via its stack or service name using the internal "default" network.
Can this be done somehow?
version: '3.6'
networks:
  default: {}
  cluster-global-net:
    external: true
services:
  frontend:
    image: frontend_image
    ports:
      - 8080
    networks:
      - cluster-global-net
      - default
    deploy:
      labels:
        traefik.port: 8080
        traefik.docker.network: cluster-global-net
        traefik.backend.loadbalancer.swarm: 'true'
        traefik.backend.loadbalancer.stickiness: 'true'
      replicas: 1
      restart_policy:
        condition: any
  db:
    image: db_image
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=false
      - MYSQL_DATABASE=db_schema
      - MYSQL_USER=db_user
      - MYSQL_PASSWORD=db_pass
    ports:
      - 3306
    volumes:
      - db_volume:/var/lib/mysql
    networks:
      - default
    restart: on-failure
    deploy:
      labels:
        traefik.port: 3306
        traefik.docker.network: default
What you need is a network that both of them are attached to, but that is not visible to anyone else.
To do that, create a network, add it to your db service and your frontend, and also to your temporary service. And indeed, remove the traefik labels on db because they are not needed here anymore.
E.g.:
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net:
    external: true
services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...
  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels
docker network create db-net
docker stack deploy -c <mycompose.yml> <myfront>
docker service create --network db-net <myTemporaryImage> <temporaryService>
Then the temporaryService, as well as the frontend, can reach the db through db:3306.
BTW: you don't need to publish the port for the frontend, since Traefik will access it internally (traefik.port).
EDIT: new example with the network created from the compose file.
...
networks:
  default: {}
  cluster-global-net:
    external: true
  db-net: {}
services:
  frontend:
    image: frontend_image
    networks:
      - cluster-global-net
      - default
      - db-net
    deploy:
      ...
  db:
    image: db_image
    ...
    networks:
      - default
      - db-net
    restart: on-failure
    # no labels
docker stack deploy -c <mycompose.yml> someStackName
docker service create --network someStackName_db-net <myTemporaryImage> <temporaryService>
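As a quick smoke test, if the temporary container has a MySQL client in it, the connection from inside it would look roughly like this (credentials taken from the db service's environment above; note that a plain docker run can only join the overlay network if it was created with --attachable):
mysql -h db -P 3306 -u db_user -pdb_pass db_schema -e 'SELECT 1'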
