Docker server networking - reject incoming connections but allow outgoing

We use Docker containers to deploy multiple small applications on our servers that are reachable on the public internet. Some of the services need to communicate with each other but are deployed on different servers due to different hardware requirements (the servers are on different networks and have different IPs).
Q: What is the best way to block incoming requests to SERVER:PORT except from a set of allowed IPs, while at the same time allowing all outgoing connections from the Docker containers?
We have tried two main approaches to get this working:
Binding the Docker port mappings to 127.0.0.1 and routing all traffic through nginx. This is very config-heavy, and some infrastructure components can't be proxied via HTTP(S), so we have to add them to an nginx.conf stream server block and therefore open a port on the server that is accessible to everyone.
Using iptables to restrict access to the published ports, with something like iptables -I DOCKER-USER -i eth0 -p tcp -j DROP. This has two major drawbacks: it seems quite hard to allow multiple IP addresses in such a construct, and the rule also appears to block our containers' outgoing connections (to the internet). E.g., after activating it, a ping google.com from within a Docker container was rejected.
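A minimal sketch of what such a rule set could look like, assuming eth0 is the external interface (as in the question) and using example addresses; the rules have to be inserted above Docker's default RETURN rule in the DOCKER-USER chain, hence the explicit positions:

    # accept replies to connections the containers opened themselves,
    # so outgoing traffic (and e.g. ping replies) keeps working
    iptables -I DOCKER-USER 1 -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
    # accept new connections from the allowed peers (example addresses)
    iptables -I DOCKER-USER 2 -i eth0 -s 203.0.113.10 -j ACCEPT
    iptables -I DOCKER-USER 3 -i eth0 -s 203.0.113.11 -j ACCEPT
    # drop everything else arriving on the external interface
    iptables -I DOCKER-USER 4 -i eth0 -j DROP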

Not sure I get this. In terms of design, what is available to the external world sits in a DMZ or is published through an API gateway.
Your Docker Swarm/Kubernetes cluster should not be accessible directly from the internet - only the API gateway or the application in the DMZ should be.
So quite likely your Docker server should not be accessible directly. And even if it is, as long as you don't explicitly publish a port to the host/outside of the cluster, a service stays restricted to Docker's virtual networks, which exist to allow cross-container communication.

Related

Should I include localhost when forwarding ports in Docker?

Whenever I want to publish a port from a Docker container, I have used a simple -p 8080:8080 flag.
Now I have read in a couple of places (here and here) that this is possibly insecure, and that I should bind to the localhost loopback instead, like this: -p 127.0.0.1:8080:8080.
Could someone shed more light on this?
When should this be done and what is the actual security impact?
When you don't specify an IP address when publishing ports, the published ports are available on all interfaces. That is, if you run docker run -p 8080:8080 ..., then other systems on your network can access the service on port 8080 on your machine (and if your machine has a publicly routable address, then systems elsewhere in the world can access the service as well). (Of course, you may have host- or network-level firewall rules that prevent this access in any case.)
When you specify an IP address in the port publishing specification, like 127.0.0.1:8080:8080, then the listening ports are bound explicitly to that interface.
If your listening ports are bound only to the loopback interface, 127.0.0.1, then only clients on your local machine will be able to connect -- from the perspective of devices elsewhere on the network, those ports aren't available.
Which configuration makes sense depends (a) on what you want to do (maybe you want to expose a service that will be accessible to systems other than your local machine), (b) what your local network looks like, and (c) your level of risk aversion.
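For example (the image name is a placeholder):

    # published on all interfaces: other hosts on the network can reach port 8080
    docker run -d -p 8080:8080 my-image

    # bound to the loopback interface: only clients on the Docker host can reach it
    docker run -d -p 127.0.0.1:8080:8080 my-image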

How does docker-engine handle outgoing and incoming traffic from/to multiple containers?

I currently have about 5 web servers running behind a reverse proxy. I would like to use an external AD to authenticate my users with the LDAP protocol. Would docker-engine be able to differentiate between each container by itself?
My current understanding is that it wouldn't be possible without a containerized directory service or without exposing a different port for each container, but I'm having doubts. If I ping an external server from my container, I get a reply in that same container without issue. How was the reply able to reach the proper container? I'm having trouble understanding how it would be different for any other protocol, but at the same time a reverse proxy is required for serving the content of multiple web servers. If anyone could make it a bit clearer for me, I'd greatly appreciate it.
After digging a bit deeper I have found what I was looking for.
Any traffic originating from a container on a default network is routed automatically by Docker using IP masquerading (a form of NAT) implemented with iptables. Outgoing packets have the container's IP address replaced by the host's IP address, and the original address is remembered until the session is over. The traffic then goes to its destination, and any reply is sent back to the host; the reply packets have the host IP stripped off again and are forwarded to the proper container. This is why you can ping another server from a container and get the reply in that same container.
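You can see the rule Docker installs for this on the host. The exact source subnet depends on your bridge network; 172.17.0.0/16 is the usual default:

    # list the NAT rules Docker manages for outgoing container traffic
    sudo iptables -t nat -L POSTROUTING -n
    # typically contains a line like:
    #   MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0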
But this obviously doesn't work for incoming traffic to a web server, because there the first step is a client opening a new session with the web server, so there is no existing connection-tracking entry that maps it to a container. That's why a reverse proxy is required.
I may be missing a few things and may be mistaken about some others, but this is the general idea.
TL;DR: outgoing traffic (and its replies) is routed automatically by Docker; you will have to use a reverse proxy to route incoming traffic to multiple containers.

How to access the docker embedded dns from outside the docker network

I run a dnsmasq service in my Docker network which, by design, does not forward queries to the Docker embedded DNS but serves as a resolver for OpenVPN clients that are connected to an OpenVPN server in the same Docker network.
Those OpenVPN clients need to talk to a third, proprietary service in the same Docker network, which they should discover by name. So I tried to add a route on the clients to 127.0.0.11 so that they can resolve the service name via the embedded DNS. But it refuses to answer them, as I saw using tcpdump. I assume that is because the embedded DNS is not meant to serve networks other than its own Docker network.
I really want to make the Docker embedded DNS answer one of my OpenVPN clients directly.
Is that somehow possible?
Thanks
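A quick way to see this scoping in practice (the container and service names below are hypothetical, and nslookup must be available in the image):

    # inside a container on that Docker network, the embedded DNS answers
    docker exec some-container nslookup proprietary-service 127.0.0.11

    # from the host (or anything routed to it), 127.0.0.11 is just the local
    # loopback, where typically nothing listens on port 53, so the query times out
    nslookup proprietary-service 127.0.0.11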

Docker app not available with host IP when using automatic port mapping

I am deploying a Eureka server on a VM (say the host's external IP is a.b.c.d) as a Docker image. I am trying this in two ways.
1. Running the Docker image without an explicit port mapping: docker run -p 8761 test/eureka-server
Running docker ps then shows the port mapping as 0.0.0.0:32769->8761/tcp.
Trying to access the Eureka server from outside the VM at http://a.b.c.d:32769, it is not available.
2. Running the Docker image with an explicit port mapping: docker run -p 8761:8761 test/eureka-server
Running docker ps then shows the port mapping as 0.0.0.0:8761->8761/tcp.
Trying to access the Eureka server from outside the VM at http://a.b.c.d:8761, it is available.
Why, in the first case, is the Eureka server not available from outside the host machine, even though Docker assigned a random port (32769)?
Is an explicit port mapping necessary to make a Docker app available from an external network?
Since you're looking for access from the outside world to the host via the mapped port, you'll need to ensure that the source traffic is allowed to reach that port on the host for the given protocol. I'm not a network security specialist, but opening up an entire range of ports simply because you don't know which port Docker will pick would be a bad idea. If you can, pick a port, map it explicitly, and ensure the firewall allows access to that port only from the appropriate source address(es), e.g. ALLOW TCP/8761 in from 10.0.1.2/32 - obviously your specific address range will vary with your network configuration. Docker Compose may help you keep this consistent (as will other orchestration technologies like Kubernetes). In addition, if you use cloud hosting services like AWS, you may be able to leverage VPC security groups to whitelist source traffic to the port without knowing all possible source IP addresses ahead of time.
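A minimal sketch of that suggestion, using the iptables DOCKER-USER chain so the restriction actually applies to traffic forwarded to the container (eth0 and 10.0.1.2 are example values, and the port match works here because the host and container ports are both 8761):

    # publish the well-known Eureka port explicitly
    docker run -d -p 8761:8761 test/eureka-server

    # allow only the trusted source to reach it; drop other external traffic to that port
    iptables -I DOCKER-USER 1 -i eth0 -p tcp --dport 8761 -s 10.0.1.2/32 -j ACCEPT
    iptables -I DOCKER-USER 2 -i eth0 -p tcp --dport 8761 -j DROP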
Either a firewall is blocking this port, or outgoing traffic to certain destination ports is disabled wherever you are making the requests from, so your requests never leave your machine.
Some companies do this: they leave ports 80, 443, and a couple of others open for their intranet and disable all other destination ports.

How to host more than 65536 services, each requiring a distinct port?

I want to host web services (say, simple Node.js API services).
There is a limit on the number of services I can host on a single machine, since only 65536 ports are available per host.
I can imagine a virtual sub-network that is visible only within the host, with a proxy server on the host that routes API calls to the appropriate web service.
Is it possible to do this with Docker, where each service is deployed in a container and a proxy server routes API calls to the appropriate container?
Is there an off-the-shelf solution for this (preferably free of cost)?
First of all, I doubt you can run 65536 processes per host, unless it's huge. Anyway, I would not recommend that because of availability and performance. Too many processes will be competing for the same resources, leading to a lot of context switches. That said, it's doable.
If your services are HTTP you can use a reverse proxy, like nginx or traefik. If not, you can use HAProxy for TCP services. Traefik is a better option because it performs service discovery, so you won't need to configure the endpoints manually.
In this setup the networking should be bridge, which is the default in Docker. Every container will have its own IP address, so you won't have any problem regarding port exhaustion.
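A minimal sketch of that kind of setup with Traefik v2's Docker provider; the image name my-node-api, the hostname, and the internal port 3000 are placeholders:

    # one shared bridge network for the proxy and all services
    docker network create web

    # Traefik watches the Docker socket and builds routes from container labels
    docker run -d --name traefik --network web -p 80:80 \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      traefik:v2.10 \
      --providers.docker=true \
      --providers.docker.exposedbydefault=false \
      --entrypoints.web.address=:80

    # a backend service: no published port at all, routed to by hostname
    docker run -d --name api1 --network web \
      -l traefik.enable=true \
      -l 'traefik.http.routers.api1.rule=Host(`api1.example.com`)' \
      -l traefik.http.routers.api1.entrypoints=web \
      -l 'traefik.http.services.api1.loadbalancer.server.port=3000' \
      my-node-api

    # each additional service is just another container with its own labels;
    # only port 80 (and 443 if you add TLS) is ever published on the host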
