Broadcasting UDP from within a Docker container

I've just started experimenting with Docker and have hit an issue I haven't been able to resolve (please note, I'm using boot2docker).
My container has a simple service that listens on 8080/tcp and 5000/udp.
docker run -d -p 0.0.0.0:5000:5000/udp -p 0.0.0.0::8080 test/service
From my macOS terminal, I can telnet to 192.168.59.103:8080 and issue simple commands, so TCP is working fine.
Next, I try UDP by issuing the following:
echo "HELLO" | socat - UDP-DATAGRAM:192.168.59.103:5000,broadcast
Via Wireshark, I can see the datagram reach the service, and the service tries to echo it back, but it receives an ICMP port-unreachable response.
So it seems I'm pretty close to a working test case; I'm just not sure what I need to configure to allow the reply back to the Mac terminal that initiated the call.
Any advice would be appreciated.
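A common culprit in this kind of setup is the reply's source port: boot2docker NATs the container, so if the service echoes from a fresh socket, the reply leaves with an ephemeral source port that doesn't match the original flow, which can surface exactly as an ICMP port error. A minimal sketch of the safe pattern, replying on the same socket the datagram arrived on (plain Python as a stand-in for the real service; this is a guess at the cause, not a confirmed fix):

```python
import socket

def udp_echo_once(port=5000, host="0.0.0.0"):
    """Receive one UDP datagram and echo it back on the SAME socket,
    so the reply's source port matches the port the client sent to
    and the NAT can map it back to the caller."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    data, addr = sock.recvfrom(4096)  # addr is the caller's (ip, port)
    sock.sendto(data, addr)           # reply from port 5000, not an ephemeral port
    sock.close()
    return data
```

If the service already does this, the next thing to check is whether boot2docker's VirtualBox NAT forwards UDP replies for the published port at all.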

Related

No connection could be made because the target machine actively refused it with packer for vagrant building with virtualbox

I'm trying to create a Vagrant box (VirtualBox) with Packer on Windows 11.
I'm using a script that ran correctly in the past, but this time I have an SSH issue that makes the build impossible.
During the Packer build I get this error message:
2022/05/30 10:23:51 packer.exe plugin: [DEBUG] TCP connection to SSH ip/port failed: dial tcp 127.0.0.1:2619: connectex: No connection could be made because the target machine actively refused it.
This TCP port number (2619) changes from day to day.
In this example I tried to find out which application uses this port with these commands:
netstat -an | grep '2619'
netstat -ab | grep '2619'
netstat -aon | grep '2619'
But nothing appears.
I also used the CurrPorts application to view all TCP/UDP ports on my system, and it shows nothing for port 2619 either. But I can see a warning on the packer.exe line like this:
I also use Docker on this system (not at the same time), but I don't know how to configure ports for this.
Why don't the local and remote TCP ports correspond to the port number in the Packer error (2619)?
Why doesn't this port number appear in the netstat output?
I understand that the connection is refused (but access is OK), but I don't know why, or how to allow it...
Here is my Docker Desktop config:
How do I authorize or manage ports to use Docker and Packer on the same OS (not at the same time)?
No Docker Desktop running
Windows firewall disabled
Tried with Docker Desktop uninstalled
I don't know if this message on the Linux build server is related to this issue, but:
After testing a lot of things, the only problem was:
Microsoft Windows 11 update (KB5012592)
To resolve it, you need to:
stop using the Windows Hyper-V component (I didn't do anything else; that was all it took in my context)
More info here:
microsoft forum
Oracle VirtualBox issue
Docker does not create its own port; it must be given with the -p flag.
For example, docker run -d -p 8080:80 maps TCP port 80 in the container to port 8080 on the Docker host.
About netstat -aon | grep '2619':
I am pretty sure Windows uses the findstr command, not grep.
After using findstr you should get the PID of the process, which you can then pass to taskkill.
See more here:
https://stackoverflow.com/a/39633428/10814912

Unable to receive data from socket on Docker Windows

I have a webserver listening on some port. I dockerize this server and publish its port with the command:
docker run -p 8080:8080 image-tag
Now I write a short Java client socket connecting to localhost on this port (it is able to connect). However, when I read data from this socket via the readLine function, it always returns null. It shouldn't. Can someone point me in some direction on how to troubleshoot this? Things I have tried:
This webserver and client works fine without docker.
Using my Docker installation, I'm able to pull the getting-started app and it works fine (so there is no problem with my Docker; it can still publish ports).
My image pulls only openjdk:latest as the base image. Other than that, nothing special.
It is Linux Docker on a Windows host.
The port the webserver is running on is correct and the same as the published port.
I would be very happy if someone could help.
By making the server app inside the container listen on 0.0.0.0 instead of localhost, I was able to solve the problem.
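The distinction matters because Docker's published port forwards traffic to the container's network interface, not its loopback: a server bound to localhost inside the container never sees connections arriving through -p. A small sketch of the difference (plain Python, not the asker's Java server):

```python
import socket

def make_listener(bind_addr, port=0):
    """Bind a TCP listener; bind_addr decides who can reach it.

    "127.0.0.1" -> only connections arriving on the loopback interface
    "0.0.0.0"   -> connections on any interface (what a Docker-published
                   port needs, since it forwards to the container's eth0)
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((bind_addr, port))   # port=0 lets the OS pick a free port
    srv.listen(1)
    return srv, srv.getsockname()[1]
```

In Java the equivalent choice is which address you pass to the ServerSocket constructor (or omit, to bind all interfaces).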

SSH Tunnel within docker container

I have a client running in a docker container that subscribes to a MQTT broker and then writes the data into a database.
To connect to the MQTT broker I have to set up port forwarding.
While developing the client on my local machine, the following worked fine:
ssh -L 9000:localhost:1883 <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create docker network "testNetwork"
Run an ssh_tunnel container within "testNetwork" and implement port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker through the bridge network (e.g. via the ssh_tunnel container's name).
(I want two separate containers for this because the IP address will have to be changed quite often, and I don't want to rebuild the client container every time.)
But all of my attempts have failed so far. The forwarding seems to work (I can access the shell on the server in the ssh container) but I haven't found a way to actually subscribe to the mqtt broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours by now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the gateway forwarding option (-g) to allow remote hosts (the database client) to connect to the forwarded ports.
ssh -g -L <hostport>:localhost:<mqtt-port/remote-port> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
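What the -g flag changes can be pictured as a plain TCP forwarder: the tunnel listens on a port and relays bytes onward, and only if that listener binds 0.0.0.0 can other containers on the bridge network reach it. A rough single-connection sketch of that idea (hypothetical helper, one request/response round trip; the real relaying is done by ssh):

```python
import socket

def forward_once(listen_port, target_host, target_port):
    """Accept one connection and relay one request/response round trip.

    Binding 0.0.0.0 here is the equivalent of ssh's -g / GatewayPorts:
    without it the listener is loopback-only and other containers on
    the bridge network cannot connect to it.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))
    upstream.sendall(client.recv(4096))   # relay client -> target
    client.sendall(upstream.recv(4096))   # relay target -> client
    for s in (client, upstream, srv):
        s.close()
```

A real MQTT session is of course a long-lived bidirectional stream, not one round trip; the sketch only illustrates the bind-address point.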

Docker: able to telnet to remote machines from host but not from container

We have a couple docker containers deployed on ECS. The application inside the container uses remote service, so it needs to access them using their 10.X.X.X private IPs.
We are using Docker 1.13 with CentOS 7 and docker/alpine as our base image. We are also using networkMode: host for our containers. The problem is that we can successfully run telnet 10.X.X.X 9999 from the host machine, but if we run the same command from inside the container, it just hangs and is not able to connect.
In addition, we have net.ipv4.ip_forward enabled in the host machines (where the container runs) but disabled in the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network mode 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS - Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is using netcat as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket - it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
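The nc -vz check can also be reproduced with nothing but the standard socket API, which sidesteps busybox entirely; this is roughly all netcat does in -z mode (hypothetical helper name):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Equivalent of `nc -vz -w2 HOST PORT`: attempt the TCP handshake,
    report success or failure, and send no application data."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:   # refused, timed out, unreachable...
        return False
```

Like netcat, this only proves the network path and listener exist; it says nothing about whether the application behind the port speaks the protocol you expect.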

Can't log to graylog2 docker container via HTTP endpoint

I have a running graylog2 docker container on a remote machine with ports 3000 and 12900 exposed (3000 routes to port 9000 within docker), and I can open the Graylog web UI on that port, so that works as expected. But for some reason I can't add logs from outside the container. Running this from the CLI WORKS from INSIDE the container, but DOESN'T WORK from OUTSIDE:
curl -XPOST http://localhost:3000/gelf -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}'
Running this command from outside the docker container I get:
{"type":"ApiError","message":"HTTP 404 Not Found"}
Edit: Found some information that this could possibly be solved by setting GRAYLOG_REST_TRANSPORT_URI to a public IP when running the docker container. Unfortunately, when I start it like that, I run into another problem: I can't start any inputs to receive logs. Bind address: 0.0.0.0, Port: 3000. It throws:
Request to start input 'project' failed. Check your Graylog logs for more information.
Edit2: Moved my testing environment to a local machine, to rule out possible server misconfigurations. Getting same errors and same problems.
Edit3: Decided to test out the graylog1 docker image and with that one everything actually works as expected right off the bat! So as a backup I could use an old version, but I'd rather avoid that if possible.
You have to start a GELF HTTP input to be able to receive GELF messages via HTTP.
The Graylog REST API does not provide this type of input.
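Once a GELF HTTP input is running and its port is published, the POST goes to the input's port rather than the REST API's. A sketch of that request in Python (12201 is the conventional GELF port, but the port and host here are assumptions; use whatever your input is bound to and published on):

```python
import json
import urllib.request

def post_gelf(message, host="localhost", port=12201):
    """POST one GELF message to a Graylog GELF HTTP input at /gelf."""
    payload = json.dumps({
        "version": "1.1",
        "short_message": message,
        "host": "example.org",
        "_foo": "bar",
    }).encode()
    req = urllib.request.Request(
        f"http://{host}:{port}/gelf",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Posting to the REST API port instead, as in the question, is what produces the "HTTP 404 Not Found" ApiError.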