Can I forbid all outgoing traffic from a Docker container except to an HTTP proxy server, without a sophisticated iptables configuration?
I don't want this container to access any network at all, except AAA.BBB.CCC.DDD:80. Is there any convenient way to achieve this?
EDIT:
I found that creating the network with --internal can do the trick, and linking the container to a proxy server container on the same host still allows traffic through the proxy. Is this method secure, though?
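For reference, a minimal sketch of that approach (the network, container, and image names are placeholders, and the proxy port 3128 is assumed): an internal network carries no route to the outside, while the proxy container sits on both the internal network and a normal bridge network.

    # internal network: containers attached to it get no route to the outside
    docker network create --internal internal-net
    docker network create proxy-net

    # the proxy is attached to both networks
    docker run -d --name proxy --network proxy-net your-proxy-image
    docker network connect internal-net proxy

    # the restricted container only sees the internal network,
    # so its only way out is via the proxy container
    docker run -d --name app --network internal-net \
      -e http_proxy=http://proxy:3128 your-app-image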
I have two containers, a client container and a proxy container. My goal is to get all of the client's outgoing network traffic (TCP and UDP) to be sent to the proxy container. The proxy container has a local socket that receives all traffic from the client, does some processing on it, then forwards the traffic to its original destination (using a new socket).
I have been able to implement this with real hardware (using two Raspberry Pis), but I'm trying to get this working on Docker now.
Currently, I'm trying to do this by creating two networks, an internal and an external network. The client is connected to the internal network, and the proxy is connected to both the internal and the external network. I then set the client's default route to send all traffic to the proxy. On the proxy, I have iptables rules that should redirect the traffic to a local proxy running on the system (using these instructions: https://www.kernel.org/doc/html/latest/networking/tproxy.html). Unfortunately, no connections are made to the proxy socket.
I'm hoping someone can point me in the right direction for getting this to work. I'm happy to describe more about what I've tried, but I worry that might just confuse the issue.
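For context, the TPROXY recipe from the linked kernel document boils down to something like the following, run inside the proxy container. The mark value, routing table number, and listener port 50080 are the documented example values, not values from the question:

    # divert packets that already belong to a local socket
    iptables -t mangle -N DIVERT
    iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
    iptables -t mangle -A DIVERT -j MARK --set-mark 1
    iptables -t mangle -A DIVERT -j ACCEPT

    # deliver marked packets to the local stack
    ip rule add fwmark 1 lookup 100
    ip route add local 0.0.0.0/0 dev lo table 100

    # redirect new TCP connections to the local proxy listener on port 50080
    iptables -t mangle -A PREROUTING -p tcp --dport 80 \
      -j TPROXY --tproxy-mark 0x1/0x1 --on-port 50080

Note that the proxy container needs NET_ADMIN (e.g. --cap-add NET_ADMIN) for these commands to work, and the proxy's listening socket must set the IP_TRANSPARENT option, as the kernel document describes.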
I currently have about 5 webservers running behind a reverse proxy. I would like to use an external AD to authenticate my users with the LDAP protocol. Would docker-engine be able to differentiate between each container by itself?
My current understanding is that it wouldn't be possible without having a containerized directory service or without exposing a different port for each container, but I'm having doubts. If I ping an external server from my container, I get a reply in that same container without issue. How was the reply able to reach the proper container? I'm having trouble understanding how it would be different for any other protocol, but then at the same time a reverse proxy is required for serving the content of multiple webservers. If anyone could make it a bit clearer for me, I'd greatly appreciate it.
After digging a bit deeper I have found what I was looking for.
Any traffic originating from a container gets routed automatically by Docker on a default network using IP masquerading (a form of NAT) through iptables. The way it works is that outgoing packets have the container's source IP address replaced with the host's IP address. The original address is remembered (via connection tracking) until the session is over. The traffic then travels to the destination, and any reply is sent back to the host; the reply packets have the host IP replaced with the proper container's address and are delivered to that container. This is why you can ping another server from a container and get a reply in that same container.
But obviously it doesn't work for incoming traffic to a webserver because the first step is the client starting a session with the webserver. That's why a reverse proxy is required.
I may be missing a few things and may be mistaken about some others but this is the general idea.
TL;DR: outgoing traffic (and any reply to it) gets routed automatically by Docker; you have to use a reverse proxy to route incoming traffic to multiple containers.
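You can see the rule responsible for this on the host. On a default installation, Docker typically inserts a masquerading rule for the bridge subnet along these lines (172.17.0.0/16 is the usual default subnet; yours may differ):

    # list the NAT rules Docker created for outgoing container traffic
    sudo iptables -t nat -S POSTROUTING

    # typical output includes a rule like this one, which rewrites the source
    # address of any packet leaving the default bridge toward another interface:
    # -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE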
I have three Tomcat containers running on different bridge networks with different subnets and gateways.
For example:
container1 172.16.0.1 bridge1
container2 192.168.0.1 bridge2
container3 192.168.10.1 bridge3
These containers are running on different ports, like 8081, 8082, 8083.
Is there any way to run all three containers on the same port, 8081?
If it is possible, how can I do it in Docker?
You need to set up a reverse proxy. As the name suggests, this is a proxy that works in the opposite way from a standard proxy. While a standard proxy takes requests from the internal network and serves them from external networks (the internet), a reverse proxy takes requests from the external network and serves them by fetching information from the internal network.
There are multiple applications that can serve as a reverse proxy, but the most used are:
NginX
Apache
HAProxy mainly as a load-balancer
Envoy
Traefik
The majority of reverse proxies can run as another container on your Docker host. Some of these tools are easy to get started with, since there is an ample amount of tutorials.
A reverse proxy is more than just exposing a single port and forwarding traffic to back-end ports. It can manage and distribute the load (load balancing), can change the URI arriving from the client to a URI that the back-end understands (URL rewriting), can change the response from the back-end (content rewriting), etc.
Reverse HTTP/HTTPS traffic
What you need to do to set up a reverse proxy in your example, assuming you have HTTP services, is the following:
Decide which tool to use. For a beginner, I suggest NginX.
Create a configuration file for the proxy which takes requests on port 80 and distributes them to ports 8081, 8082, 8083 (a minimal sample follows this list). Since the containers are on different networks, you will need to decide whether to forward the traffic to their IP addresses (which I don't recommend, since IPs can change) or to publish the ports on the host and use the host IP. Another alternative is to run all of them on the same network.
Depending on the case, you need to set up the X-Forwarded-* headers and/or URL rewriting and content rewriting.
Run the proxy container and publish port 80 as 8080 (if you publish the back-end containers on the host, your 8081 will already be taken).
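As an illustration only, a minimal NginX configuration for step 2 might look like this, assuming the three Tomcat ports are published on the host and HOST_IP is a placeholder for the host's address:

    # nginx.conf: fan requests out to the three Tomcat back-ends
    events {}

    http {
        upstream tomcats {
            server HOST_IP:8081;
            server HOST_IP:8082;
            server HOST_IP:8083;
        }

        server {
            listen 80;

            location / {
                # pass the original client details to the back-end
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://tomcats;
            }
        }
    }

You would then run the proxy with something like: docker run -d -p 8080:80 -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro nginx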
Reverse TCP/UDP traffic
If you have non-HTTP services (raw TCP or UDP services), then you can use HAProxy. The steps are the same apart from configuration step #2; the configuration is different due to the non-HTTP nature of the traffic, and you can find an example in this SO answer.
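A sketch of what such a configuration could look like for a single raw TCP service (the service names, ports, and HOST_IP are placeholders, not taken from the question):

    # haproxy.cfg: plain TCP pass-through, no HTTP parsing
    defaults
        mode tcp
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend tcp_in
        bind *:9000
        default_backend tcp_out

    backend tcp_out
        server upstream1 HOST_IP:9001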
We use Docker containers to deploy multiple small applications on our servers that are reachable on the public internet. Some of the services need to communicate with each other but are deployed on different servers due to different hardware requirements (the servers are on different networks and have different IPs).
Q: What would be the best way to block incoming requests to SERVER:PORT except from some allowed IPs, while at the same time allowing all outgoing connections from the Docker containers?
Two major things we played with and tried to get working:
Binding Docker port mappings to 127.0.0.1 and routing all traffic through an nginx. This is really config-heavy, and some infrastructure components can't be proxied via HTTP(S), so we need to add them to an nginx.conf stream-server block and therefore open a port on the server (that is accessible by everyone).
Using iptables to restrict access to the published ports. So something like this: iptables -I DOCKER-USER -i eth0 -p tcp -j DROP. But this also has two major drawbacks. First, it seems quite hard to allow multiple IP addresses in such a construct, and on the other hand this approach seems to block our containers' outgoing connections (to the internet) as well. E.g., after we activated it, a ping google.com from within a Docker container was rejected.
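For what it's worth, the usual way around both drawbacks is to place exceptions above the DROP in the DOCKER-USER chain: one conntrack rule so replies to container-initiated connections keep flowing, and one ACCEPT per allowed client. A sketch, assuming eth0 is the external interface and 203.0.113.5 is one allowed client IP (both placeholders):

    # -I inserts at the top of DOCKER-USER, so insert the DROP first...
    iptables -I DOCKER-USER -i eth0 -j DROP

    # ...then the exceptions, which end up above it:
    # whitelist specific client addresses (repeat per allowed IP)
    iptables -I DOCKER-USER -i eth0 -s 203.0.113.5 -j ACCEPT

    # replies to connections the containers opened themselves stay allowed,
    # which fixes the broken outgoing ping
    iptables -I DOCKER-USER -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT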
Not sure I get this. In terms of design, whatever is available to the external world sits in a DMZ or is published through an API gateway.
Your docker swarm/kubernetes cluster should not be accessible directly from the internet; only the API gateway or the application in the DMZ should be.
So quite likely your Docker server should not be accessible directly. And even if it is, as long as you don't explicitly publish a port to the host/outside of the cluster, it stays restricted to Docker's virtual networks, which allow cross-container communication.
I have some Docker containers talking to each other through Docker bridge networks. They cannot be accessed from outside (I was told), as they are launched from a script with a default command which includes neither 'expose' nor the '-p' option. I cannot change that script.
I would like to connect to one of these containers, which runs a server and listens for requests on port 8080. I tried connecting that bridge to a newly created Docker bridge network, but I did not succeed.
Now I am thinking of creating a new container and letting it talk to the server one (through bridge networks). As it is a new container, I can use the 'expose' or '-p' options, so it would be able to talk to the host machine.
Is this a good idea? How can I forward every request made to that container to the server one, and get responses back to the host machine?
Thanks
Within a Docker network, containers can reach each other on any port by default. So you only need a container that publishes a port to the host machine and is on the same network as the other containers you have already created.
This is a relatively normal pattern. You can use a reverse proxy like nginx to achieve something like this.
There are some containers that automate this process:
https://github.com/jwilder/nginx-proxy
If you have no control over the other containers though, you will need to write the proxy config by hand.
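For reference, nginx-proxy's documented usage is roughly as follows: it watches the Docker socket and generates its proxy configuration from each container's VIRTUAL_HOST variable (whoami.local and your-web-image are example names):

    # run the automated proxy, giving it read access to the docker socket
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # any container started with VIRTUAL_HOST set is picked up automatically
    docker run -d -e VIRTUAL_HOST=whoami.local your-web-image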
If the container to which you are trying to connect is an HTTP server, you may be able to use a ready-made container image that can work as an HTTP forwarder (e.g., nginx; it is relatively easy to configure as an HTTP forwarder).
If you need plain TCP forwarding, you could run a container with 'socat' (socat can work as a TCP forwarder).
NOTE: in either case, you will be exposing a listener that wasn't meant to be on a public address. Do take measures not to allow unauthorized connections.
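A minimal sketch of the socat variant, assuming the target container is named server, listens on 8080, and sits on a bridge network called app-net (all placeholder names):

    # TCP forwarder: host port 8080 -> server:8080 on the bridge network
    docker run -d --name forwarder \
      --network app-net \
      -p 127.0.0.1:8080:8080 \
      alpine/socat \
      tcp-listen:8080,fork,reuseaddr tcp-connect:server:8080

Binding the published port to 127.0.0.1 keeps the listener off public interfaces, in line with the note above.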