I am using Docker 18.06.1-ce-win73 on Windows 10 and trying to perform the following UDP operation:
Docker port 10001 --------------> host port 16020
It is mandatory for the application running on the host to receive packets coming from source port 10001.
Inside the Docker container, using Python, I bind the socket to ('0.0.0.0', 10001) and use it to send my packets to the host IP on port 16020.
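Roughly, the sending code inside the container looks like this (a minimal sketch; the host IP shown is a placeholder):

import socket

host_ip = '192.168.1.10'  # placeholder: replace with the actual host IP

# Bind the UDP socket so packets leave the container with source port 10001
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 10001))

# Send to the host application's port
sock.sendto(b'payload', (host_ip, 16020))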
I have also started the container with the argument -p 10001:10001/udp.
Unfortunately, when the host application receives the packet, the source port is not 10001 but a random one.
Is it possible to force Docker to use a specific source port when using UDP from inside the container?
You can control the container source port, but when you communicate outside of Docker, even to your host, the request will go through a NAT layer that changes the source to the host's address with a random port. You may be able to modify the iptables rules to work around this NAT effect.
However, if you really need control of the source port like this, you may be better off switching to host networking (--net=host or network_mode: host, depending on how you run your containers), or changing to a networking driver like macvlan that exposes the container directly to the network without going through the NAT rules.
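For example, with host networking (a sketch; the image name my-udp-app is a placeholder, and the -p mapping becomes unnecessary because the container shares the host's network stack):

docker run --net=host my-udp-app

With host networking, the socket bound to port 10001 inside the container is bound directly on the host, so the source port is preserved.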
Related
I have a Docker container that runs a TCP server and is attached to a custom Docker network that I have set up. The container is exposed with an external port mapping.
There is also a TCP client outside the container that is trying to access the TCP server through the exposed port. The TCP client binds to a specific source port, say 5000.
My problem is that, due to the Docker network's SNAT, the server running in the container sees the remote port as one generated by the network gateway (say 6000), rather than the original source port (5000).
Is there a way to modify the network behavior so it doesn't apply SNAT for these external connections?
I know I can map a host port to a container port on the Docker command line, in a Dockerfile, or in docker-compose.yml; I have no problem there, I know how to do that.
For example, I have the following container:
$ docker container ls
ID COMMAND PORTS
84.. "python app.py" 0.0.0.0:5000->5000/tcp
I know it means the host port 5000 is mapped to container port 5000.
My question is only about the 0.0.0.0 part. From what I have read, 0.0.0.0:5000 means port 5000 is mapped on all interfaces of the host.
I understand the port 5000 on the host, but I don't get "all interfaces on host". What does it mean exactly? Could someone please elaborate? Does it mean all network interfaces on the host? Which "all interfaces" does this 0.0.0.0 refer to exactly?
Your physical hardware can have more than one network interface. In this day and age you likely have a wireless network connection, but you could also have a wired Ethernet connection, or more than one of them, or some other kind of network connection. On a Linux host, if you run ifconfig you will see at least two interfaces: your "real" network connection and a special "loopback" interface that only reaches the host itself. (The same is true inside a container, except that the "loopback" interface only reaches the container.)
When you set up a network listener, using the low-level bind(2) call or any higher-level wrapper, you specify not just the port you're listening on but also a specific IP address. If you listen on 127.0.0.1, your process will only be reachable via the loopback interface, not from off-box. If you have, say, two network connections, one to an external network and one to an internal one, you can bind to the IP address of the internal network and have a service that's not accessible from the outside world.
This is where 0.0.0.0 comes in. It's possible to write code that scans all of the network interfaces and listens on each of them separately, but 0.0.0.0 is a shorthand that means "all interfaces".
In Docker, this comes up in three ways:
The default -p listen address is 0.0.0.0. On a typical developer system, you might want to explicitly specify -p 127.0.0.1:8080:8080 to only have your service accessible from the physical host.
If you do have a multi-homed system, you can use -p 10.20.30.40:80:8080 to publish a port on only one network interface.
Within a container, the main container process generally must listen on 0.0.0.0. Since each container has its own private localhost, listening on 127.0.0.1 (a frequent default for development servers) means the process won't be accessible from other containers or via docker run -p.
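A minimal sketch of the difference, using an arbitrary pair of ports:

import socket

# Reachable only via the loopback interface; inside a container,
# that means only from within that same container
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(('127.0.0.1', 5000))
loopback_only.listen()

# Reachable on every interface; this is what docker run -p needs
all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(('0.0.0.0', 5001))
all_interfaces.listen()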
I know that the title might be confusing so let me explain.
This is my current situation:
Server A - 127.0.0.1
Server B - 1.2.3.4.5
Server B opens a reverse tunnel to Server A. This gives me a random port on Server A to communicate with Server B. Let's assume the port is 1337.
As I mentioned, to access Server B I send packets to 127.0.0.1:1337 on Server A.
Our client needs a Telnet connection. Since Telnet is insecure but a requirement, we decided to use telnet OVER the ssh reverse tunnel.
Moreover, we created an alpine container with busybox inside of it to eliminate any access to the host. And here is our problem.
The tunnel is created on the host, yet the telnet client is inside a docker container. Those are two separate systems.
I can share my host network with the container using --network=host, but that eliminates the encapsulation idea of the Docker container.
Also, binding the container to the host with -p 127.0.0.1:1337:1337 fails because the port is already in use and it can't bind to it (duh, ssh is using it).
Mapping ports from the host to the container also doesn't work, since the telnet client isn't forwarding the traffic to a specific port, so we can't just "sniff" it out.
Does anyone have an idea how to overcome this?
I thought about sharing my host network and trying to configure iptables rules to limit what the container can do on the network, but my iptables skills aren't really great.
The port forward does not work because it is basically the wrong direction. -p 127.0.0.1:1337:1337 means "take everything that's coming in on that host port and forward it into the container". But you want to connect from the container to that port on the host.
That's basically three steps:
The following steps require at least Docker v20.10.
1. On the host: bind your tunnel to the docker0 interface on the host (this might require that you figure out the IP of that interface first). In other words, referring to your example, ensure that the local side of the tunnel does not end at 127.0.0.1:1337 but at <ip of host interface docker0>:1337.
2. On the host: add --add-host host.docker.internal:host-gateway to your docker run command.
3. Inside your container: telnet to host.docker.internal (a magic DNS name) on the port you bound in step 1 (i.e. 1337).
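A concrete sketch of those steps, assuming docker0 has the usual address 172.17.0.1, the telnet service on Server B listens on port 23, and the user/host names are placeholders (binding the tunnel to a non-loopback address also requires GatewayPorts clientspecified in Server A's sshd_config):

# on Server B: open the reverse tunnel so it ends on Server A's docker0 address
ssh -N -R 172.17.0.1:1337:localhost:23 user@serverA

# on Server A (the host): start the container with the host-gateway entry
docker run --rm -it --add-host host.docker.internal:host-gateway alpine

# inside the container: reach the tunnel through the magic DNS name
telnet host.docker.internal 1337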
I'm trying to set up an HTTP server in a Docker container on port 8888 on a Raspbian host. I use -p 8888:8888 to bind the port on all interfaces. This lets me connect to it at localhost:8888 without issue. However, when I connect to the bound port from another device behind the same NAT using the host's IP address (192.168.1.xxx), the connection is refused.
I'm using the bridge networking mode for this. I tried the "host" mode and that didn't work at all.
You need to link the containers with the (deprecated) --link option documented here. Otherwise they run in isolated networks. You can also use the more modern and supported way and create a network that both containers share; both approaches are described in the linked page.
I want to expose the container IP to the external network the host is on, so that I can directly ping the Docker container's IP from an external machine.
If I ping the Docker container's IP from an external machine that is on the same network as the Docker host, I need to get a response.
Pinging the container's IP (i.e. the IP it shows when you look at docker inspect [CONTAINER]) from another machine does not work. However, the container is reachable via the public IP of its host.
In addition to Borja's answer, you can expose the ports of Docker containers by adding -p [HOST_PORT]:[CONTAINER_PORT] to your docker run command.
E.g. if you want to reach a web server in a Docker container from another machine, you can start it with docker run -d -p 80:80 httpd:alpine. The container's port 80 is then reachable via the host's port 80. Other machines on the same network will then also be able to reach the web server in this container (depending on firewall settings etc., of course...)
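To verify from another machine on the same network (assuming the Docker host's LAN address is 192.168.1.10):

curl http://192.168.1.10/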
Since you tagged this as kubernetes:
You cannot directly send packets to individual Docker containers. You need to send them to somewhere else that's able to route them. In the case of plain Docker, you need to use the docker run -p option to publish a port to the host, and then containers will be reachable on the published port via the host's IP address or DNS name. In a Kubernetes context, you need to set up a Service that's able to route traffic to the Pod (or Pods) running your container, and you ultimately reach containers via that Service.
The container-internal IP addresses are essentially useless in many contexts. (They cannot be reached from off-host at all; in some environments you can’t even reach them from outside of Docker on the same host.) There are other mechanisms you can use to reach containers (docker run -p from outside Docker, inter-container DNS from within Docker) and you never need to look up these IP addresses at all.
Your question places a heavy emphasis on ping(1). This is a very low-level debugging tool that uses a network protocol called ICMP. If sending packets using ICMP is actually core to your workflow, you will have difficulty running it in Docker or Kubernetes; I suspect it isn't. Don't worry so much about being able to directly ping containers; use higher-level tools like curl(1) if you need to verify that a request is reaching its container.
It's pretty easy actually, assuming you have control over the routing tables of your external devices (either directly, or via your LAN's gateway/router). Assuming your containers are using a bridge network of 172.17.0.0/16, you add a static route for the 172.17.0.0/16 network with your Docker host's physical LAN IP as the gateway. You might also need to allow this forwarding in your Docker host's firewall configuration.
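For example, on a Linux device in the same LAN (a sketch; 192.168.1.10 stands in for the Docker host's LAN IP):

# route traffic for the Docker bridge network via the Docker host
sudo ip route add 172.17.0.0/16 via 192.168.1.10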
After that, you should be able to connect to your docker container using its bridge address (172.17.0.2 for example). Note however that it will likely not respond to pings, due to the container's firewall.
If you're content to access your container using only the bridge IP (and never again use your Docker host IP with the mapped port), you can remove the port mapping from the container entirely.
You need to create a new bridge Docker network and attach the container to it. You should be able to connect that way.
docker network create -d bridge my-new-bridge-network
or
docker network create --driver=bridge --subnet=192.168.0.0/16 my-new-bridge-network
connect:
docker network connect my-new-bridge-network container1
or
docker network connect --ip 192.168.0.10/16 my-new-bridge-network container-name
If the problem persists, just reload the Docker daemon or restart the service. It is a known issue.
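On a systemd-based host, that is typically:

sudo systemctl restart docker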