TFTP timeout inside Docker container

I have a Docker container with a tftp client on Host#1 with IP 10.10.10.10.
I have a file and a tftp server on Host#2 with IP 11.11.11.11.
I want to be able to download the file from Host#2 with this tftp client inside the Docker container.
The main problem is that TFTP uses port 69 only as a control port and sends the data from ephemeral ports. Thus, the tftp client is able to ask the server to send the file, but then can't receive the file itself and times out.
So how do I download this file with tftp?
I know two solutions for now.
First, using --net=host; I do not want to use it because of security (and other) concerns. Second, publishing the ephemeral port range 49152-65535; that is hard to make work if anything else on the host is already using ports in that range. Both are sketched below.
Also, everything is fine with the firewall!
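For reference, the two workarounds would look roughly like this (my-tftp-client is a placeholder for an image with a tftp client installed):
docker run --rm -it --net=host my-tftp-client tftp 11.11.11.11
docker run --rm -it -p 49152-65535:49152-65535/udp my-tftp-client tftp 11.11.11.11
The first shares the host's network stack, so the server's data packets on the ephemeral port reach the client directly; the second reserves the whole ephemeral UDP range on the host, which is exactly the clash with other host services mentioned above.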

Related

Why can my Docker app receive UDP data without publishing the port?

I'm learning Docker networking. I'm using Docker Desktop on Windows.
I'm trying to understand the following observations:
First setup (data from container to host)
I have a simple app running in a container. It sends one UDP-datagram to a specific port on the host (using "host.docker.internal")
I have a corresponding app running on the host. It listens to the port and is supposed to receive the UDP-datagram.
That works without publishing any ports in docker (expected behavior!).
Second setup (data from host to container)
I have a simple app on the host. It sends one UDP-datagram to a specific port on the loopback network (using "localhost")
I have a corresponding app running in a container. It listens to the port and is supposed to receive the UDP-datagram.
That works only if the container is run with option -p port:port/udp (expected behavior!).
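A minimal way to reproduce this second setup with a stock image (port 5005 is an arbitrary choice) would be:
docker run --rm -p 5005:5005/udp alpine nc -u -l -p 5005
Any UDP datagram sent from the host to 127.0.0.1:5005 then shows up in the container; drop the -p option and it never arrives.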
Third setup (combination of the other two)
I have an app "Requestor" running in a container. It sends a UDP request-message to a specific port on the host and then wants to receive a response-message.
I have a corresponding app "Responder" running on the host. It listens to the port and is supposed to receive the request-message. Then it sends a UDP response-message to the endpoint of the request-message.
This works as well, and - that's what I don't understand - without publishing the port for the response-message!
How does this work? I'm pretty sure there's some basic networking knowledge that I simply don't have yet that would explain this. I would be pleased to learn some background on it.
Sidenote:
Since I can run curl www.google.com successfully from inside a container, I realize that a container definitely does not need to publish ports in order to receive data. But TCP is involved there to establish a connection. UDP, on the other hand, is "connectionless", so that can't be the (whole) explanation.
After further investigation, NAT seems to be the answer.
According to these explanations, a NAT is involved between the loopback interface and the docker0 bridge.
This is less recognizable with Docker Desktop for Windows because of the following (source):
Because of the way networking is implemented in Docker Desktop for Windows, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
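On a Linux host (or from inside the Docker Desktop virtual machine) the masquerading rule Docker installs for the bridge subnet can be seen with:
sudo iptables -t nat -L POSTROUTING -n
It typically shows a MASQUERADE rule for 172.17.0.0/16 (the default bridge subnet). Connection tracking remembers the outgoing datagram's source/destination pair, so the response sent back to that same pair is translated to the container's address automatically, with no published port needed, just like the TCP replies to curl.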

SSH Tunnel within docker container

I have a client running in a docker container that subscribes to an MQTT broker and then writes the data into a database.
To connect to the MQTT broker I have to set up SSH port forwarding.
While developing the client on my local machine the following worked fine:
ssh -L <local-port, e.g. 9000>:localhost:<mqtt-port, e.g. 1883> <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't work, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create docker network "testNetwork"
Run an ssh_tunnel container within "testNetwork" and implement port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker through the tunnel container over the bridge network (something like "ssh_tunnel:<port>").
(I want two separate containers for this because the IP address will have to change quite often and I don't want to rebuild the client container every time.)
But all of my attempts have failed so far. The forwarding seems to work (I can access the shell on the server in the ssh container) but I haven't found a way to actually subscribe to the mqtt broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the gateway ports option (-g) to allow remote hosts (the database client) to connect to the forwarded port.
ssh -g -L *:<hostport>:localhost:<mqtt-port> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
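Put together, a rough sketch of the two-container setup (image names and the port 9000 are placeholders, and the ssh image is assumed to have keys for the remote host):
docker network create testNetwork
docker run -d --name ssh_tunnel --network testNetwork my-ssh-image ssh -N -g -L "*:9000:localhost:1883" <user>@<remote-ip>
docker run -d --name database_client --network testNetwork my-client-image
The client container then subscribes via ssh_tunnel:9000; when the broker's address changes, only the ssh_tunnel container has to be restarted with a new <remote-ip>.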

Flask in docker, access other flask server running locally

After finding a solution for this problem, I have another question: I am running a Flask app in a docker container (my web map), and on this map I want to show tiles served by a (Flask-based) Terracotta tile server running in another docker container. The two containers are on the same docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
If you are running your docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as ports are closed by default on EC2 and similar services. You just need to open the port on which you are running your app; you can use the AWS console for that.
If you are running your docker container locally or on a server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW on Ubuntu systems. You can allow a port using a simple command such as sudo ufw allow 9000/tcp, which permits incoming TCP packets on port 9000. Similarly, you can deny incoming packets to a port. Also, you can open a port to a certain IP (like your own) using sudo ufw allow from <ip address>.
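For example, if the web map is published on port 9000 and the tile server would otherwise sit on port 5000 (both numbers hypothetical), the rules could look like:
sudo ufw allow 9000/tcp
sudo ufw deny 5000/tcp
sudo ufw allow from <your-ip> to any port 5000
Note that ports published by Docker can bypass UFW, because Docker inserts its own iptables rules; the simplest way to keep the tile server private is therefore to not publish its port at all and let the web-map container reach it over their shared Docker network.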

How to make a Docker container's service accessible via the container's IP address?

I'm a bit confused. Trying to run both an HTTP server listening on port 8080 and an SSH server listening on port 22 inside a Docker container, I managed to accomplish the latter but, strangely, not the former.
Here is what I want to achieve and how I tried it:
I want to access services running inside a Docker container using the IP address assigned to the container:
ssh user@172.17.0.2
curl http://172.17.0.2:8080
Note: I know this is not how you would configure a real web server but I want the container to mimic an embedded device which runs both services and which I don't have available all the time. (So it's really just a local non-production thing with no security requirements).
I didn't expect integrating the SSH server to be easy, but to my surprise I just installed and started it and had to do nothing else to be able to connect to the machine via ssh (no EXPOSE 22 or --publish).
Now I wanted to access the container via HTTP on port 8080 and fiddled with --publish and EXPOSE but only managed to make the HTTP server available through localhost/127.0.0.1 on the host. So now I can access it via
curl http://127.0.0.1:8080/
but I want to access both services via the same IP address which is NOT localhost (e.g. the address the container got randomly assigned is totally OK for me).
Unfortunately
curl http://172.17.0.2:8080/
waits until it times out every time I try it.
I tried docker run with -p 8080, -p 127.0.0.1:8080:8080, -p 172.17.0.2:8080:8080 and many more combinations, with and without EXPOSE 8080 in the Dockerfile, but without success.
Why can I access the container via port 22 without having exposed anything?
And how do I make it accessible via the container's IP address?
Update: looks like I'm experiencing exactly what's described here.
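One way to narrow this down (a diagnostic sketch, not a fix) is to try the container IP from another container on the same default bridge, using the stock alpine image:
docker run --rm alpine wget -qO- http://172.17.0.2:8080/
If that succeeds while curl from the host still times out, the HTTP server is answering on the container IP and the problem is how the host reaches the bridge network; if it fails too, the server inside the container is probably bound to 127.0.0.1 instead of 0.0.0.0.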

Docker network settings and iptables NAT

I have a server running inside a docker container, listening on a UDP port, let's say 1234. This port is exposed in the Dockerfile.
Also, I have an external server helping with NAT traversal; basically, it just sends the addresses of the registered server and a client to each other, allowing a client to connect to a server by the name it sent during registration.
Now, if I run my container with the -P option, my port gets published as some random port, e.g. 32774. But on the helper server I see my server connecting from port 1234, so it can't send a correct address to a client, and a client can't connect at all.
If I run my container explicitly publishing my server on the same port with -p 1234:1234/udp, a client can connect to my server directly. But now on the helper server I see my server connecting from port 1236, and again it can't send the correct port to a client.
How can this be resolved? My aim is to require as little additional configuration as possible from people who will use my docker image.
EDIT: So I either need to know my external port number from inside the container, so I can send it to the discovery server, which, as I understand it, is not possible at the moment, right? Or I need outgoing connections from the container's port to use the same external port as is configured for incoming connections - is that possible?
The ports are managed by docker and the docker network adapter. When using only -P, the exposed port is published on a random host port and is also accessible to other containers through docker linking. When using "-p 1234:1234", the port is mapped onto the same host port, directly available to a client, and also available for linking.
Start the helper server with a link option, "--link <server-container-name>:server". The helper server can then connect to the host "server" on port 1234. The correct IP address will be managed by docker.
Enable docker to change your iptables configuration, which is the docker default. Afterwards the client should be able to connect to both instances. Note that the helper server should provide the host IP and not the docker container IP address. The container IP address only works on the host where the docker network adapter is running.
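A rough sketch of that suggestion (image and container names are placeholders; --link is a legacy flag, and a user-defined network with --network gives the same by-name resolution):
docker run -d --name gameserver -p 1234:1234/udp my-server-image
docker run -d --name helper --link gameserver:server my-helper-image
Inside the helper container the hostname "server" then resolves to the gameserver container's IP, so it can reach port 1234 directly, while outside clients use the host's IP together with the published port.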

Resources