Docker routing with no iptables

I have a docker nginx image running with the command
sudo docker run -p 8002:80 nginx
What I am wondering is how the routing works to get from hostmachine:8002 to the container listening on port 80. Usually there are iptables rules that very explicitly do that, but if you disable iptables it still works. Then I noticed that there is a docker-proxy process listening on each exposed port, which I assume does the proxying/NAT. So I disabled the userland proxy with --userland-proxy=false.
After doing that I now only see one process, docker-current, still listening on all exposed ports. I can only assume that the docker-current process is doing the NAT, but that makes me wonder why the userland proxy and/or iptables are ever there. And is there a way I can see/prove to myself where the NAT is happening (i.e., turn something on/off so that I can no longer curl my nginx container, and then can again)?

Just for completeness, I will answer my own questions.
The iptables rules are there to (hopefully) save time by NAT'ing in kernel space rather than sending traffic up to userspace for the NAT. You can kill the process that is listening on the port and remove the iptables NAT rules, and the traffic then gets no response, which is how I (among other tests) was able to prove to myself what was going on.
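Concretely, the check looks something like this (a sketch; rule numbers, addresses, and the exact rule text will differ on your machine):
# kill the userland proxy listening on the published port
sudo pkill -f 'docker-proxy.*8002'
# list Docker's NAT rules, then delete the DNAT entry for 8002 by its number
sudo iptables -t nat -L DOCKER -n --line-numbers
sudo iptables -t nat -D DOCKER <rule-number>
# with both gone this should now time out; restart Docker to restore everything
curl --max-time 3 localhost:8002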

Related

DDev Ports are not available

I have a DDev project in WSL2. Whenever I try to start it I get an error:
Error response from daemon: Ports are not available: exposing port TCP 127.0.0.1:443 -> 0.0.0.0:0: listen tcp 127.0.0.1:443: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
Sometimes it's also port 80. But most importantly, before starting the project none of those ports is occupied, neither inside WSL nor on the Windows host. I am also able to start another Docker container publishing those ports. I am even able to manually start the router with
COMPOSE_PROJECT_NAME=ddev-project docker-compose -f /home/crs/.ddev/.router-compose-full.yaml -p ddev-router up -d
but I still can't access the project even though the router seems to be running.
ddev debug test also fails.
I tried updating and reinstalling both Docker Desktop and ddev.
I also tried changing the router_http_port and router_https_port to something else. Then it does seem to start the project but I still can't access anything through the ddev router.
The web containers themselves seem to work fine; when not going through the router I can access the project.
Debugging for this is explained in the docs, but it's slightly trickier on WSL2, because the process that's giving trouble may be either on the Windows side or the WSL2 side.
As explained there, you can either find the competing process or change to use different ports in DDEV.
On WSL2, port 80 is often apache2, which some distros have by default, so you can stop it or uninstall it without any harm. Port 443 is sometimes occupied by poorly behaved processes on Windows, including some virus checkers.
If you use the techniques there to check for competing ports you'll almost certainly solve this.
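For example, to find what owns port 443 on each side (a sketch; substitute whichever port is failing):
# inside WSL2
sudo ss -tlnp | grep ':443'
# on the Windows side (cmd/PowerShell): get the owning PID, then its name
netstat -ano | findstr :443
tasklist /FI "PID eq <pid>"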
Another technique is to use curl localhost or curl -I localhost (and likewise curl https://localhost or curl -I https://localhost) to see if the HTTP response gives you a clue which process is answering.
Also note that sometimes Docker Desktop is poorly behaved if you're using it, and you may have to restart it.
But if changing the ports to, say, 8080 and 8443 didn't solve it, then you have a connectivity problem, likely a firewall. That's a completely different problem; walk through the troubleshooting instructions in the docs and start by temporarily turning off the firewall and VPN.
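If you go the port-change route, it's just a config change plus a restart (a sketch; check ddev config --help for the exact flag names on your version):
ddev config --router-http-port=8080 --router-https-port=8443
ddev restart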
For more interactive help, join us in the DDEV Discord.

Docker redirect port inside container or multiple containers with same port and network_mode

I'm looking for a way to either redirect ports within a container (not using Docker's -p) or run multiple containers with the same port using network_mode.
Background:
I have a service (VPN) inside a container that provides a central gateway to another network. Now I want to use network_mode: 'container:vpn' to attach additional 'sub'-containers to the VPN container so that they also use the VPN; this works. To access their services I have to publish the sub-containers' ports on the host, which has to be done via the VPN container (this also works). But here is my problem: if several sub-containers use the same port, for example 8000, I don't know how to map them, because the port is taken more than once.
I can't adjust the port in the original images because the applications need it internally or can't be configured differently. My idea was to use the original containers as base images and create a shadow image in which the ports are redirected by iptables (iptables -t nat -A PREROUTING -p tcp --dport 8000 -j REDIRECT --to-port 8020). However, this doesn't seem to work, because iptables can't be used in a container (only in privileged mode, which I don't want).
Does anyone have an idea what methods/options there are to solve this?
Ideally, I would like to continue using different docker-compose files for every service.
thx
Install socat in the Docker image where you want to do the port forwarding. For example, add this to your Dockerfile:
RUN apt-get update && apt-get install -y socat
The install command will differ on other OS variants.
Once socat is installed, just call it. This example redirects port 7545 to 8545 inside the same container (fork lets it keep serving new connections):
socat TCP4-LISTEN:7545,fork,reuseaddr TCP4:127.0.0.1:8545
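In the shared-namespace setup from the question, the same trick can remap a clashing port. A hedged sketch (the container name vpn and the ports are illustrative, and socat must already be installed in that image):
# expose the app listening on 127.0.0.1:8000 as 8020 in the shared namespace
docker exec -d vpn socat TCP4-LISTEN:8020,fork,reuseaddr TCP4:127.0.0.1:8000
Publish 8020 from the VPN container and that sub-container gets its own outward-facing port.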

Enable forwarding from Docker containers to the outside world

I've been wondering why the Docker installation does not enable port forwarding to containers by default.
Concretely, what I mean is:
$ sudo sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I assume it is some sort of security risk, but I just wonder what the risk is.
Basically I want to write some piece of code that enables this by default, but I want to know what bad things can happen.
I googled this and couldn't find anything.
Generally FORWARD ACCEPT seems to be considered too permissive (?)
If so, what can I change to make this more secure?
My network is rather simple: a bunch of PCs on a local LAN (10.0.0.0/24) with an OpenVPN server. Those PCs may run Docker hosts (I set this up by hand, not with docker compose or swarm, because nodes change) that need to see each other, so there is no real outside access. Another detail is that I am not using a network overlay, which I could do without swarm, but the writer of the post warns it could be deprecated soon, so I also wonder if I should just start using docker swarm straight away.
EDIT: My question here is maybe more theoretical than it seems at first. I want to know why they decided not to do this. I pretty much need/want full communication between Docker instances: they need to be SSH'd into and to open up a bunch of different ports to talk to each other (and this is the limit of my networking knowledge; I suppose they are all high ports, but are those also blocked by Docker?). I am not sure docker swarm would help me much here either; it is aimed at micro-services, and I may need interactive sessions from time to time, but this is probably asking too much in a single question.
Maybe the simplest version of this question is: "If I run those two commands as a script each time my computer boots, how could someone abuse it?"
Each Docker container runs on a local bridge network, with IPs typically drawn from 172.16.0.0/12 (172.17.0.0/16 for the default bridge). You can get a container's IP address by running:
docker inspect <container name> | jq -r ".[].NetworkSettings.Networks[].IPAddress"
You can either run your container exposing and publishing the specific container ports on the host, or use iptables to redirect traffic arriving on a host port to the container:
iptables -t nat -I PREROUTING -i <incoming interface> -p tcp -m tcp --dport <host listening port> -j DNAT --to-destination <container ip address>:<container port>
Change tcp to udp if the service listens on a UDP socket.
If you want to redirect all traffic, you can still use the same approach, but you may need to add a secondary IP address on your host (e.g., 192.168.1.x) and redirect any traffic arriving at that address to your container.
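A concrete instance of that rule (hedged; the interface, ports, and container address 172.17.0.2 are illustrative), including the FORWARD-chain accept it usually needs:
sudo iptables -t nat -I PREROUTING -i eth0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
sudo iptables -I FORWARD -p tcp -d 172.17.0.2 --dport 80 -j ACCEPT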

How to block external access to a Docker container (Linux CentOS 7)

I have a MongoDB Docker container that I only want to access from inside my server, not from outside. Even though I blocked port 27017/tcp with firewall-cmd, it seems that the container is still reachable from the public network.
I am using Linux CentOS 7, with docker-compose for setting up Docker.
I resolved the same problem by adding an iptables rule that blocks port 27017 on the public interface (eth0) at the top of the DOCKER chain:
iptables -I DOCKER 1 -i eth0 -p tcp --dport 27017 -j DROP
Set the rule after Docker starts up, since Docker recreates its chains when it starts.
Another thing you can do is use a non-default port for mongod: modify docker-compose.yml (remember to add --port=XXX to the command directive).
For better security, I suggest putting your server behind an external firewall.
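On newer Docker releases there is also a dedicated DOCKER-USER chain that Docker leaves alone across daemon restarts, which is the documented place for rules like this. A hedged variant of the same rule:
sudo iptables -I DOCKER-USER -i eth0 -p tcp --dport 27017 -j DROP
# verify it landed where expected
sudo iptables -L DOCKER-USER -n -v --line-numbers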
If you have your application in one container and MongoDB in another container, what you need to do is connect them together using a network that is set to be internal.
See Documentation:
Internal
By default, Docker also connects a bridge network to it to provide external connectivity. If you want to create an externally isolated overlay network, you can set this option to true.
See also this question
Here's the tutorial on networking (not including internal but good for understanding)
You may also limit traffic on MongoDb by Configuring Linux iptables Firewall for MongoDB
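Those MongoDB docs essentially allowlist the application server. Roughly (a hedged sketch; the IP placeholder is illustrative, and for Docker-published ports such rules belong in the DOCKER-USER chain rather than INPUT):
iptables -A INPUT -s <app-server-ip> -p tcp --dport 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d <app-server-ip> -p tcp --sport 27017 -m state --state ESTABLISHED -j ACCEPT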
For creating private networks, use IPs from these ranges:
10.0.0.0 – 10.255.255.255
172.16.0.0 – 172.31.255.255
192.168.0.0 – 192.168.255.255
More reading on Wikipedia.
You may connect a container to more than one network, so typically an application container is connected to both the outside-world (external) network and the internal network. The application communicates with the database on the internal network and returns data to the client via the external network. The database is connected only to the internal network, so it is not visible from the outside (internet).
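A minimal sketch of that layout with plain docker commands (names and images are illustrative; in docker-compose the equivalent is internal: true on the network):
docker network create --internal backend   # no external connectivity
docker network create frontend
docker run -d --name mongo --network backend mongo
docker run -d --name app --network frontend -p 8080:80 my-app-image
docker network connect backend app        # app reaches mongo; mongo stays hidden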
I found a post that may help; posting it here for people who need it in the future.
For security we need both the hardware firewall and the OS firewall enabled and configured properly. I found that firewall protection is ineffective for ports opened by a Docker container listening on 0.0.0.0, even though the firewalld service was enabled at the time.
My situation was:
A server with CentOS 7.9 and Docker 20.10.17 installed
A Docker container running with port 3000 open on 0.0.0.0
The firewalld service started with the command systemctl start firewalld
Only port 22 allowed access from outside the server, per the firewall configuration
It was expected that no one else could access port 3000 on that server, but the test showed the opposite: port 3000 was reachable from other servers. Thanks to the blog post, my server is now protected by the firewall.
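A hedged aside: one way to avoid the problem entirely is to publish the port on the loopback interface only, so it is never reachable from other machines regardless of firewall state (image name is illustrative):
docker run -d -p 127.0.0.1:3000:3000 my-image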

What are docker-proxy processes for

When I look at all running processes on my Linux machine, there are quite a few docker-proxy processes. It seems like every running container (port) results in one docker-proxy!
The problem is that I cannot find any documentation on which processes Docker actually starts, or what their relationship/purpose is.
Does anyone know if there is any documentation on that?
A full explanation of the docker-proxy is available here.
The summary is that the proxy is used to handle connections originating from the local machine that might otherwise not pass through the iptables rules that Docker configures to handle port forwarding, or when Docker has been configured such that it does not manipulate iptables at all.
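To see both modes for yourself, a rough sketch (merge the daemon.json key into any existing config; the daemon must be restarted):
# one docker-proxy process per published port while the userland proxy is on
ps aux | grep [d]ocker-proxy
# disable it in /etc/docker/daemon.json, then restart the daemon:
#   { "userland-proxy": false }
sudo systemctl restart docker
# published ports are now handled by iptables NAT alone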
