How to turn on UDP hole punching in docker? - docker

By default docker appears to not do UDP hole punching.
Is there any way to turn this on, or is this not supported at all?
Note:
UDP hole punching is different from the port forwarding configured with the -p option. It means a device can respond to a UDP packet originating from your docker container by using the source IP address and port found in the received packet, and the NAT maps the reply back to the correct container and port. This is a feature most routers support by default.
Maybe I should explain why I want this instead of the -p forwarding built into docker. We know the IP address of each device we want to talk to from our docker container. When we send a UDP packet to a device using -p forwarding, the reply packet gets forwarded back to us, but both the source address and port are changed by the docker NAT. This means that when we receive the packet we don't actually know who it is from. That might be acceptable if you are only talking to one device, but we may be talking to many, so when we get a packet from a different address than the one we sent to, we have no way to know which device it actually came from. We are hoping that with UDP hole punching the source address would remain intact.
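The dependence on the reply's source address can be seen with plain sockets. A minimal local sketch (two UDP sockets standing in for the container and a device, with no NAT in the path, so the source address survives intact):

```python
import socket

# Two UDP endpoints on localhost stand in for "our container" and a remote device.
device = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
device.bind(("127.0.0.1", 0))          # the peer we send to
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))          # our side

client.sendto(b"ping", device.getsockname())
data, addr = device.recvfrom(1024)     # the device sees OUR address in the packet
device.sendto(b"pong", addr)           # ...and replies to that address

reply, source = client.recvfrom(1024)
# We identify the peer by the reply's source address. If a NAT in the path
# rewrites that address (as docker's default bridge NAT does), this mapping
# from packet to device breaks.
print(source == device.getsockname())  # True on a plain, un-NATed path
```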

Related

TCP client doesn't receive error connecting on non listening port

I have an Azure Container App running that listens on a public TCP port 8000 (via the load balancer) for incoming connections. When connections arrive, I serve them with data and everything goes as expected.
My problem is when I stop the server listening on that port. In that case, a client application trying to connect to my public IP address on port 8000 would expect an error like 'Could not connect', but this is not happening. What is in fact happening is that the Container Apps environment seems to forward the data to that port no matter what (even if there is no server listening). As such, a client connecting to that port can't tell that the server that should be listening is really stopped (in order to resend the data at a later time).
Example:
Open a TCP client (e.g. PacketSender) and try to send some data to port 6000 on your localhost. You should receive a 'Could not connect' error message.
Now, in docker run the following:
docker run -p 6000:6000 nginxdemos/hello:plain-text
Try again to send some data to port 6000 via a TCP client. This time the data will be sent even though the nginxdemos container doesn't listen on port 6000 (but rather on 80).
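The check in this example can be scripted. A minimal probe (a sketch; `can_connect` is a helper name invented here, not part of any Docker or Azure API):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP handshake to host:port completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With nothing listening (and no forwarder in front), the connection is
# refused outright and this should print False. With `docker run -p 6000:6000 ...`
# running, the forwarder accepts the handshake and it prints True even
# though the container itself has no listener on 6000.
print(can_connect("127.0.0.1", 6000))
```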
Is there any way I can solve that issue on the server side and ensure that clients can't connect if the server is stopped? I have devices sending thousands of messages to a Container App, but because they do not expect any kind of application-level ACK, they assume the data have been transmitted (even when they haven't) and never try to resend them.
Not sure about the docker example; it probably depends on how docker on that system implements port forwarding.
In Azure ContainerApps: no, this is not possible. There is always some component listening on the port, even if your application is not running or is restarting, provisioning, scaling, etc. The connection will be buffered until the app starts listening on the port or it times out.

How does docker-engine handle outgoing and incoming traffic from/to multiple containers?

I currently have about 5 webservers running behind a reverse proxy. I would like to use an external AD to authenticate my users with the LDAP protocol. Would docker-engine be able to differentiate between each container by itself?
My current understanding is that it wouldn't be possible without having a containerized directory service or without exposing a different port for each container, but I'm having doubts. If I ping an external server from my container, I'm able to get a reply in that same container without issue. How was the reply able to reach the proper container? I'm having trouble understanding how it would be different for any other protocol, but then at the same time a reverse proxy is required for serving the content of multiple webservers. If anyone could make it a bit clearer for me, I'd greatly appreciate it.
After digging a bit deeper I have found what I was looking for.
Any traffic originating from a container gets routed automatically by docker on the default network using IP masquerading (a form of NAT) through iptables. Packets leaving the container are stripped of the container's IP address, which is replaced by the host's IP address. The original address is remembered (in the kernel's connection-tracking table) until the TCP session is over. The traffic then goes to its destination, and any reply is sent back to the host; the reply packets are rewritten from the host IP back to the proper container address. This is why you can ping another server from a container and get the reply in that same container.
But obviously this doesn't work for incoming traffic to a webserver, because there the first step is the client starting a session with the webserver. That's why a reverse proxy is required.
I may be missing a few things and may be mistaken about some others, but this is the general idea.
TLDR: outgoing traffic (and any reply) gets routed automatically by docker; you have to use a reverse proxy to route incoming traffic to multiple containers.
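A toy model of the bookkeeping described above (the real mechanism lives in the kernel's iptables/conntrack machinery; the addresses and table here are purely illustrative):

```python
# Toy model of IP masquerading: outgoing packets get the host's address,
# and a mapping table lets replies find their way back to the container.
HOST_IP = "203.0.113.10"   # hypothetical public host address

conntrack = {}             # host-side port -> (container_ip, container_port)

def outgoing(src_ip, src_port, dst):
    """Rewrite an outgoing packet's source to the host and remember the mapping."""
    conntrack[src_port] = (src_ip, src_port)   # real NAT may also remap the port
    return (HOST_IP, src_port, dst)

def reply(dst_port):
    """Route a reply arriving at the host back to the right container."""
    return conntrack[dst_port]

# A container 172.17.0.2 sends a DNS query; the reply to host port 40000
# is mapped back to that container.
outgoing("172.17.0.2", 40000, ("8.8.8.8", 53))
print(reply(40000))   # -> ('172.17.0.2', 40000)
```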

information in tcp packages. How to know destination?

Suppose we have a web browser with several tabs open and we are working with them.
All TCP packets will arrive with destination port 80, but I don't understand how the browser can know, from all the network traffic, which packets are destined for which tab.
What's more, if there are several browsers, I understand that all the packets destined for them come with port 80. How do you know which ones they are intended for?
Thank you
TCP connections are identified by the following tuple: (source IP, destination IP, source port, destination port).
Each connection that the browser opens might have the same destination IP & port (e.g. www.google.com port 80), but each connection will have a unique source port number.
Suppose we have a web browser with several tabs open and we are working with them. All TCP packets will arrive with destination port 80
No they won't. They will arrive at the browser with source port 80, from the server, but each connection will have a different local port number at the client host.
but I don't understand how the browser can know, from all the network traffic, which packets are destined for which tab.
It doesn't have to know. All it has to do is read from its various connections via their sockets. Demultiplexing to the respective local ports is TCP's job, not the browser's.
What's more, if there are several browsers, I understand that all the packets destined for them come with port 80.
Wrong again. They come with source port 80, and, again, different destination ports.
How do you know which ones are intended?
Same answer. They don't. TCP does.
Port 80 is usually used on the server side. Each browser tab is a client, not a server, and uses a different port number.
The client reads data from its own local port, not directly from the server's port 80.
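This demultiplexing is easy to observe with plain sockets. A minimal sketch with one local "server" and two "tab" connections (the OS assigns each connection its own source port, and each socket receives only its own connection's bytes):

```python
import socket
import threading

# One listener stands in for the web server behind "port 80".
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(2)

def echo_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(16))   # echo back so each client gets its own data
    conn.close()

for _ in range(2):
    threading.Thread(target=echo_once, daemon=True).start()

# Two "tabs": same destination, but the OS gives each a unique source port.
a = socket.create_connection(server.getsockname())
b = socket.create_connection(server.getsockname())
print(a.getsockname()[1] != b.getsockname()[1])   # True: distinct local ports

a.sendall(b"tab-a")
b.sendall(b"tab-b")
# TCP demultiplexes by (src IP, src port, dst IP, dst port); each socket
# only ever sees the bytes of its own connection.
ra, rb = a.recv(16), b.recv(16)
print(ra, rb)
```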

NAT (Redirect) outgoing traffic to a specific port

I'm trying to establish a connection with a Diameter server. That server has a restriction parameter of "peer port",
which means the source port of my outgoing traffic must be restricted to a specific port.
Since I'm using an Erlang Diameter client, I didn't find any parameter to specify the outgoing port; it initiates the connection from a random port to the destination ip:port.
Is there a way to translate my outgoing traffic to that IP locally to a specific port on Linux, so that the external server sees my source port as the allowed port?
You should apply NAT on the sender side. Read this thread; it explains how to do it with iptables.
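For reference, the underlying requirement, a fixed source port, is just a bind-before-connect at the socket level; the iptables SNAT workaround is needed precisely because the Erlang client doesn't expose this. A Python sketch (port 3868 chosen here because it is the standard Diameter port; adjust as needed):

```python
import socket

ALLOWED_PORT = 3868   # hypothetical "peer port" the server insists on

def connect_from(src_port: int, dst) -> socket.socket:
    """Open a TCP connection whose source port is fixed in advance."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", src_port))   # pin the source port *before* connecting
    s.connect(dst)
    return s

# Demo against a local listener: the server-side accept sees our fixed port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
c = connect_from(ALLOWED_PORT, srv.getsockname())
peer = srv.accept()[1]
print(peer[1])   # -> 3868, the source port the server observes
```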

Port Forward Directly to a Guest OS with VirtualBox

I am currently using Ubuntu 10.04 for some rails development. It is installed as a guest machine using VirtualBox on a Windows 7 x64 host.
Within Ubuntu, I am trying to port tunnel several ports from a remote server directly to the Guest OS in order to avoid having to download a remote database.
Let's say I want to forward port 5000 on the remote server to port 5000 on the guest os.
I have set up a forwarder for the port on the Windows side, using VBoxManage.exe. This forwards HostPort 5000 to GuestPort 5000.
Then within Ubuntu I run ssh -L5000:127.0.0.1:5000. However, whenever I try to access "127.0.0.1:5000", I receive the message "channel 7: open failed: connect failed: Connection refused".
Am I missing something?
Thanks for the help!
connect failed: Connection refused
This means that you're not able to connect to port 5000 on the remote end.
If you're only using this connection from within your guest through your SSH tunnel, then you don't need the forward from VBoxManage: that only opens things up so outside computers can connect directly to your guest; it won't help your guest connect to the outside.
Are you sure the server you connect (SSH) to is the same server that runs your database? And is the database running on that server?
When you've connected (SSH) to the server, you can try to list which ports are listening for connections, or you can try to connect to the database with telnet. To list listeners, run "netstat -lnt" (-l shows listening sockets, -n is numeric (show IP and port number) and -t is TCP). You should see a line like "tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN" if a service is listening for TCP on port 5000. To try to connect, simply run "telnet 127.0.0.1 5000"; if you can't connect with telnet from the server itself, then the database isn't listening/allowing your connection, or it's running on another port or server.
SSH uses TCP traffic by default, right?
Just to verify, NAT in VirtualBox does have these limitations (per the User Manual):
There are four limitations of NAT mode which users should be aware of:
ICMP protocol limitations: Some frequently used network debugging tools (e.g. ping or tracerouting) rely on the ICMP protocol for sending/receiving messages. While ICMP support has been improved with VirtualBox 2.1 (ping should now work), some other tools may not work reliably.
Receiving of UDP broadcasts is not reliable: The guest does not reliably receive broadcasts, since, in order to save resources, it only listens for a certain amount of time after the guest has sent UDP data on a particular port. As a consequence, NetBios name resolution based on broadcasts does not always work (but WINS always works). As a workaround, you can use the numeric IP of the desired server in the \\server\share notation.
Protocols such as GRE are unsupported: Protocols other than TCP and UDP are not supported. This means some VPN products (e.g. PPTP from Microsoft) cannot be used. There are other VPN products which use simply TCP and UDP.
Forwarding host ports lower than 1024 impossible: On Unix-based hosts (e.g. Linux, Solaris, Mac OS X) it is not possible to bind to ports below 1024 from applications that are not run by root. As a result, if you try to configure such a port forwarding, the VM will refuse to start.
Try ssh -L5000:0.0.0.0:5000 instead of ssh -L5000:127.0.0.1:5000
There is something called the "loopback" interface bound to 127.0.0.1 that will cause you grief if you try to access ports from a different machine, i.e. your host machine.