container port mapping concept confusion - docker

I know I can map a host port to a container port with a docker command, in a Dockerfile, or in docker-compose.yml. I have no problem there; I know how to do that.
For example, I have the following container:
$ docker container ls
ID COMMAND PORTS
84.. "python app.py" 0.0.0.0:5000->5000/tcp
I know it means the host port 5000 is mapped to container port 5000.
My question is only about the 0.0.0.0 part. From what I've read, 0.0.0.0:5000 means port 5000 on all interfaces of the host.
I understand the port 5000 on the host, but I don't get "all interfaces on host". What does it mean exactly? Does it mean all network interfaces on the host? Which interfaces does this "0.0.0.0" refer to, exactly?

Your physical hardware can have more than one network interface. In this day and age you likely have a wireless Ethernet connection, but you could also have a wired Ethernet connection, or more than one of them, or some kind of other network connection. On a Linux host if you run ifconfig you will likely have at least two interfaces, your "real" network connection and a special "loopback" connection that only reaches the host. (And this is true inside a container as well, except that the "loopback" interface only reaches the container.)
When you set up a network listener, using the low-level bind(2) call or any higher-level wrapper, you specify not just the port you're listening on but also a specific IP address. If you listen on 127.0.0.1, your process will only be reachable via the loopback interface, not from off-box. If you have, say, two network connections, one to an external network and one to an internal one, you can specify the IP address of the internal network and have a service that's not accessible from the outside world.
This is where 0.0.0.0 comes in. It's possible to write code that scans all of the network interfaces and separately listens to all of them, but 0.0.0.0 is a shorthand that means "all interfaces".
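At the socket level, the difference is just the address you bind. A minimal sketch in Python (the port number is arbitrary):
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Bound to loopback only: reachable from this machine (or, inside a
# container, from this container), but not from anywhere else.
# sock.bind(("127.0.0.1", 5000))

# Bound to 0.0.0.0: reachable through every interface the machine has.
sock.bind(("0.0.0.0", 5000))

sock.listen()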
In Docker, this comes up in three ways:
The default -p listen address is 0.0.0.0. On a typical developer system, you might want to explicitly specify -p 127.0.0.1:8080:8080 so that your service is accessible only from the physical host.
If you do have a multi-homed system, you can use -p 10.20.30.40:80:8080 to publish a port on only one network interface.
Within a container, the main container process generally must listen to 0.0.0.0. Since each container has its own private localhost, listening on 127.0.0.1 (a frequent default for development servers) means the process won't be accessible from other containers or via docker run -p.
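As a concrete illustration, a minimal stand-in for the app.py from the question that works correctly behind docker run -p 5000:5000 (the response text is just an example):
# app.py
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the container\n")

# Bind 0.0.0.0 so the published port reaches this process; binding
# ("127.0.0.1", 5000) here would make `docker run -p` appear broken.
HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()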

Related

Should I include localhost when forwarding ports in Docker?

Whenever I want to forward ports in a Docker container, I use a simple -p 8080:8080 option.
Now, I read in a couple of places (here and here), that this is possibly insecure, and that I should include the localhost loopback, like this: -p 127.0.0.1:8080:8080.
Could someone shed more light on this?
When should this be done and what is the actual security impact?
When you don't specify an IP address when publishing ports, the published ports are available on all interfaces. That is, if you run docker run -p 8080:8080 ..., then other systems on your network can access the service on port 8080 on your machine (and if your machine has a publicly routable address, then systems elsewhere in the world can access the service as well). (Of course, you may have host- or network-level firewall rules that prevent this access in any case.)
When you specify an IP address in the port publishing specification, like 127.0.0.1:8080:8080, the listening ports are bound explicitly to that interface.
If your listening ports are bound only to the loopback interface, 127.0.0.1, then only clients on your local machine will be able to connect -- from the perspective of devices elsewhere on the network, those ports aren't available.
Which configuration makes sense depends on (a) what you want to do (maybe you want to expose a service that systems other than your local machine can reach), (b) what your local network looks like, and (c) how risk-averse you are.
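As a concrete sketch (the image name myapp is a placeholder), the two variants look like this, and docker port shows where each one listens:
$ docker run -d -p 8080:8080 myapp            # reachable from other machines too
$ docker run -d -p 127.0.0.1:8080:8080 myapp  # reachable only from this host
$ docker port <container-id>
8080/tcp -> 127.0.0.1:8080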

Dockerized Telnet over SSH Reverse Tunnel

I know that the title might be confusing so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A to communicate with Server B. Let's assume the port is 1337.
As I mentioned, to access Server B I send packets to 127.0.0.1:1337.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to use telnet OVER the ssh reverse tunnel.
Moreover, we created an alpine container with busybox inside of it to eliminate any access to the host. And here is our problem.
The tunnel is created on the host, yet the telnet client is inside a docker container. Those are two separate systems.
I can share my host network with the container using --network=host, but that defeats the whole idea of encapsulating it in a container.
Also, binding the container to the host port with -p 127.0.0.1:1337:1337 fails with an error that the port is already in use (of course it is: ssh is using it).
Mapping ports from the host into the container also doesn't work, since the telnet client isn't forwarding its traffic to one fixed port, so we can't just "sniff" it out.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and trying to configure iptables rules to limit the docker functionality over the network but my iptables skills aren't really great.
The port forward does not work because it goes in the wrong direction. -p 127.0.0.1:1337:1337 means "take everything that's coming in on that host port, and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps:
The following steps require at least Docker v20.10 (which introduced the host-gateway special value).
1. On the host: Bind your tunnel to the docker0 interface (you might need to figure out the IP of that interface first). In other words, referring to your example, ensure that the local side of the tunnel does not end at 127.0.0.1:1337 but at <IP of host interface docker0>:1337.
2. On the host: Add --add-host host.docker.internal:host-gateway to your docker run command.
3. Inside your container: telnet to host.docker.internal (a magic DNS name) on the port you bound in step 1 (i.e. 1337).
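A concrete sketch of those steps, assuming the default docker0 address 172.17.0.1 and that sshd on Server A allows remote forwards to bind non-loopback addresses (GatewayPorts clientspecified in sshd_config); user and host names are placeholders. On Server B, open the reverse tunnel so its local side binds to docker0 on Server A:
$ ssh -N -R 172.17.0.1:1337:localhost:23 user@server-a
Then on Server A, run the container with the magic host alias and telnet through the tunnel (recent alpine images keep telnet in the busybox-extras package):
$ docker run --rm -it --add-host host.docker.internal:host-gateway alpine \
    sh -c 'apk add -q busybox-extras && telnet host.docker.internal 1337'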

Can two docker containers both set host network mode?

I know about docker host network mode, which lets a container share the same network as the host machine. It doesn't need NAT, and you can reach the container via the host IP address.
My question is: if I start two docker containers, both with host network mode, what will happen? I found that their IP addresses are the same; will their networks conflict?
Setting host networking generally disables Docker networking. It's almost never necessary, unless you have a program that can't be configured to listen on a fixed port, or one that listens on thousands of ports.
Since it disables Docker networking, containers that use host networking have direct access to the host network devices. If they set up network listeners, these share a port space with other host-network containers and with non-container processes. With host networking you cannot remap ports, limit a port to being visible only on specific interfaces, or communicate directly with other containers over Docker's internal networks. Containers don't have their own private IP address or port space in host-network mode.
Nothing stops you from starting multiple containers with host networking (in the same way you can start multiple non-container servers directly on the host), but if they try to listen on the same port on the same (host) interface(s), one of them will fail, and you'll have to do application-specific reconfiguration to fix it.
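For instance, with the stock nginx image (which listens on port 80 by default):
$ docker run -d --network host nginx
$ docker run -d --network host nginx
Both docker run commands return a container ID, but the second nginx exits almost immediately because host port 80 is already taken; docker ps -a will show that container as exited.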

If docker container IP and external network IP are the same, which one will respond to telnet? Any idea?

If the docker container IP and the external network IP are the same, which one will respond if I telnet to it?
I know the configuration below is the worst possible configuration, but I want to know the behaviour.
Let me give you one example:
My application is running on localhost and talks to the database inside the docker container.
Custom IP we provided for the container - IP: 10.0.0.1, port: 5432
Another database is running outside the container; let's say the container (IP and port) and the host (IP and port) are the same.
Host IP: 10.0.0.1, host port: 5432
Which one will the application connect to - the host database, the container database, or both?
Or, if I do telnet 10.0.0.1 5432, which one will respond, and why?
Please explain with a diagram.
I don't think that's possible. Even if you (somehow) have the same IP for the container and the host, you won't be able to map container port 5432 to host port 5432, because there's already an application (the host DB) running on that port.
Consider a scenario where you use the host network for the container as well, probably via --network host. That way your container IP will be the same as the host IP. The container will be using the host's port 5432 to run the DB. Now, if you try to start the DB on the host using the same port, you will get an error that the port is already in use.
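You can see the conflict directly. With a database already listening on host port 5432, trying to publish the same host port fails (postgres is just an example image here):
$ docker run -d -p 5432:5432 postgres
docker refuses to start the container and reports that host port 5432 is already allocated (the exact error wording varies by Docker version).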

Can't set udp source port in docker

I am using Docker 18.06.1-ce-win73 on Windows 10 and trying to perform the following UDP operation:
Docker port 10001 --------------> host port 10620
It is mandatory for the application running on the host to receive packets from the port 10001.
Inside the docker container, using Python, I bind a socket to ('0.0.0.0', 10001) and use it to send my packets to the host IP on port 10620.
I have also started the container with the argument -p 10001:10001/udp.
Unfortunately, when the packet arrives at the host application, the source port is not 10001 but a random one.
Is it possible to force docker to use a specific source port when sending UDP from inside the container?
You can control the container's source port, but when you communicate outside of docker, even to your own host, the request goes through a NAT layer that rewrites the source to the host's address with a random port. You may be able to modify the iptables rules to work around this NAT behaviour.
However, if you really need control of the source port like this, you may be better off switching to host networking (--net=host or network_mode: host depending on how you run your containers), or change to a networking driver like macvlan that exposes the container directly without going through the NAT rules.
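Here is a minimal sketch of the sending side under that approach, in Python; the destination address 192.0.2.1 is a placeholder for your host application. Run the container with --net=host so no NAT layer sits between sender and receiver, and the datagram arrives with source port 10001 intact:
import socket

# Pin the UDP source port by binding before sending.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 10001))

# With host networking there is no NAT between this socket and the
# receiver, so the packet keeps source port 10001.
sock.sendto(b"payload", ("192.0.2.1", 10620))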
