I want to run an application (the OLA server, olad) inside a container under Docker for Mac. (Version 18.06.1-ce-mac73 on Mojave, all up-to-date.) The particular OLA configuration I am using (for the Art-Net protocol) works by sending and receiving UDP broadcast data over port 6454 on a particular physical ethernet interface on the host, which is in turn connected to an external device under control. Normally, when starting the olad server, one specifies the interface or IP address on which it should send/receive the broadcast messages.
My struggle is getting the UDP messages to and from the interface from inside the container. I don't appear to have access to that physical interface or network inside the Docker for Mac container, even if I run with --network host. My understanding is that this is because of a quirk of the way Docker for Mac is implemented, with an extra VM between my container and the hardware. That VM sees the hardware, but I don't.
Simply running the container with -p 6454:6454/udp doesn't work either, perhaps unsurprisingly. I can see how that might allow incoming traffic to find its way to the server inside the container, but the server still can't reach the outside network/device in the other direction. And I'm not sure how macOS would get that data from the physical interface to the Docker bridge anyway.
How can I get direct, bidirectional access to that interface or network from inside the container? Or, if I cannot, is there some kind of workaround, maybe via socat, where I could tunnel that network in through a Unix socket that is shareable between host and container?
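For what it's worth, the socat idea I have in mind would look roughly like the sketch below, for the host-to-container direction only (a mirrored pair of relays would be needed the other way). Everything here is an unverified guess on my part: the socket path, the assumption that a Unix datagram socket in a bind-mounted directory is actually usable from both sides of the Docker for Mac VM, and the assumption that olad will accept traffic on loopback.

    # Inside the container (start this side first so the socket file exists):
    # pick datagrams off the shared Unix socket and deliver them to olad on
    # localhost:6454.
    socat UNIX-RECVFROM:/tmp/olad/artnet.sock,fork UDP4-SENDTO:127.0.0.1:6454

    # On the macOS host, which can see the physical interface: receive Art-Net
    # datagrams on UDP 6454 and push each one into the shared Unix socket.
    socat UDP4-RECVFROM:6454,fork,reuseaddr UNIX-SENDTO:/tmp/olad/artnet.sock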
I'm learning Docker networking. I'm using Docker Desktop on Windows.
I'm trying to understand the following observations:
First setup (data from container to host)
I have a simple app running in a container. It sends one UDP datagram to a specific port on the host (using "host.docker.internal").
I have a corresponding app running on the host. It listens on that port and is supposed to receive the UDP datagram.
This works without publishing any ports in Docker (expected behavior!).
Second setup (data from host to container)
I have a simple app on the host. It sends one UDP datagram to a specific port on the loopback interface (using "localhost").
I have a corresponding app running in a container. It listens on that port and is supposed to receive the UDP datagram.
This works only if the container is run with the option -p port:port/udp (expected behavior!).
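Concretely, the run command for this setup would look something like this (the image name and port 5005 are placeholders):

    # Publish the UDP port so that datagrams sent to localhost:5005 on the
    # host are forwarded to the listener inside the container.
    docker run --rm -p 5005:5005/udp my-udp-listener-image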
Third setup (combination of the other two)
I have an app "Requestor" running in a container. It sends a UDP request message to a specific port on the host and then wants to receive a response message.
I have a corresponding app "Responder" running on the host. It listens on that port and is supposed to receive the request message. It then sends a UDP response message back to the source endpoint of the request message.
This works as well, and (that's what I don't understand) it works without publishing the port for the response message!
How does this work? I'm pretty sure there's some basic networking knowledge I'm simply missing that would explain this. I would be pleased to learn some background on this.
Sidenote:
Since I can curl www.google.com successfully from inside a container, I realize that a container definitely doesn't have to publish ports in order to receive data. But TCP is involved there to establish a connection; UDP, on the other hand, is "connectionless", so that can't be the (whole) explanation.
After further investigation, NAT seems to be the answer.
According to these explanations, a NAT is involved between the loopback interface and the docker0 bridge.
This is harder to see with Docker Desktop for Windows because of the following (source):
Because of the way networking is implemented in Docker Desktop for Windows, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
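To make the third setup concrete, here is a minimal stand-in for the Requestor/Responder pair using socat (assuming a socat binary is available on the host and in the container image; port 5005 and the message contents are arbitrary placeholders):

    # Responder, on the host: wait for UDP datagrams on port 5005 and answer
    # each one back to whatever source address/port it arrived from.
    socat UDP4-RECVFROM:5005,fork EXEC:'echo response'

    # Requestor, inside the container (no -p published): send one datagram over
    # a connected UDP socket and keep the socket open briefly for the reply.
    echo request | socat -t 3 - UDP4:host.docker.internal:5005

    # The reply gets back in because the outbound datagram created a NAT
    # (conntrack) entry mapping the container's source address/port to a
    # translated one; the Responder's answer to that translated source is
    # mapped back to the container, so no published port is needed.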
I have a Docker setup with a few containers, which I start via the docker-compose up command. On my device everything works well with localhost, but I want other devices on the same network to also be able to access the MQTT broker. How do I do that?
Currently, in my code I do this:
ws:localhost:9001
But since localhost only applies to the device that runs Docker, another laptop won't be able to use it. How do I solve that?
You use the LAN IP address of your machine (the one hosting the docker containers) in place of localhost.
We have no way of knowing what that address is, but it will typically start with 192.168.x.x or 10.x.x.x.
By default, Docker has a "bridge" network that will bridge your container to the outside world. Just use the IP address of the computer where your MQTT Broker Container is running, and port 9001, and it should work fine.
If you need to run it on an internal Docker network, you will have to use something like an ADC (application delivery controller) or a TCP proxy of some sort to allow access to it.
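As a sketch (the LAN address 192.168.1.50, the port, and the image name below are placeholders; use whatever your setup actually has):

    # Make sure the broker's websocket port is published, either in
    # docker-compose (ports: "9001:9001") or with docker run:
    docker run -d --name mqtt-broker -p 9001:9001 my-broker-image

    # Then, on the other laptop, point the client at the host's LAN address
    # instead of localhost:
    #   ws://192.168.1.50:9001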
Question:
How can I access specific containers inside a docker swarm network from outside the network?
I don't need to access arbitrary ports, the exposed container ports are fine, but I need to be able to connect to a specific container, not just any container I am routed to via load balancing.
As in, I can currently do:
curl localhost:8582/service_id
And get something like:
1589697532253.0.8570331623512102
But the result varies, because the request is load-balanced to a different container each time I make it. I only need this for debugging; I usually want the load-balancing behavior, but when there is an issue with a specific container it is essential that I make requests only to that container.
I can do it from within a container inside the network, but it is a lot easier to debug from my local machine than from inside a container.
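For context, the in-network way I currently fall back to looks roughly like this (the container ID and task IP are placeholders; it assumes curl is available inside the image, and the per-task IPs can be read from docker network inspect or by resolving tasks.<service-name> from inside the network):

    # Exec into one of the service's own containers and hit a specific task's
    # address directly, bypassing the service VIP load balancing.
    docker exec -it <container-id> curl -s http://<task-ip>:8582/service_id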
Environment:
I am not sure if it is relevant, but I am on Windows, running Docker Desktop, engine v19.03.8.
Things I tried:
I tried tunneling into the Docker network with WireGuard; however, I believe that is a non-starter because my host OS is Windows, and I can't find any WireGuard images that support non-Linux host OSes (and I'm not sure that is even technically possible).
When I run docker network inspect ingress -v I can see that there appear to be IPs associated with each container (10.0.0.12, 10.0.0.13) which differ from the IPs on the overlay network (10.0.18.7, 10.0.18.8), but when I try to access my exposed port via any of those IPs, the connection attempt is ignored and does not connect.
I tried adding a specific network route to make sure the packets were going to docker, by forcing all packets in the /24 address range to go through the docker gateway, but that didn't work either (route add -p 10.0.0.0 MASK 255.255.255.0 192.168.8.177 METRIC 1 IF 49).
Any suggestions would be greatly appreciated!
I'm running Docker Desktop for Mac on the host, and it is running two containers.
Container-1: a Linux-based OS, running a UDP server program listening on port 14xxx (udp://:14xxx/).
Container-2: a Linux-based OS, running a Python application that sends/receives data via the UDP address udp://14xxx/ without any specific hostname.
Question: my Python app in Container-2 is able to send on the UDP port, but never receives anything back from Container-1.
Given that UDP works differently from the TCP and HTTP protocols, how can I establish successful UDP communication between two Docker containers running on the same host (macOS)?
Various things I have tried, with no luck:
Tried running both containers with the --network host option.
Tried creating a new Docker network, testnet, and starting the containers with the --network testnet option.
Never mind. I found the solution.
First, it was not a Docker thing at all.
In my Python application in Container-2, I used environment variables to determine the UDP address. Apparently, these variables were not set properly. Hence the confusion/error.
Second, "--network host" is still a valid argument to use when running both Docker containers, to make sure they discover/talk to each other.
Hope it helps!
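As a reference for anyone going the testnet route, a minimal sketch (the names, port 14000, and the environment variable are placeholders I chose): on a user-defined network the containers can resolve each other by container name, so the Python app can be pointed at the listener without hard-coding an IP.

    # Create a user-defined network and attach both containers to it.
    docker network create testnet
    docker run -d --name udp-server --network testnet my-server-image    # listens on 14000/udp
    docker run --rm --name udp-client --network testnet \
        -e UDP_TARGET=udp-server:14000 my-client-image                   # UDP_TARGET is a made-up variable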
I am trying to connect my BACnet client, which has been containerized, to the BACnet server running on the host machine. I am using Docker for Windows on Windows 10 (the host machine) with Linux containers.
I have tried the following:
a. Publishing port 47808 for the client container with the run command.
b. Running the container with --network=host, to access services on localhost.
c. Specifying the gateway IP as the server's IP address with the run command.
d. Running the container in the same subnet as my server.
e. Running the container with the host IP specified and the ports published.
My BACnet server, taken from https://sourceforge.net/projects/bacnet/, always connects to the DockerNAT interface, 10.0.75.1. Any idea why this happens? The server application is not a container but an executable file.
Server IP: 10.0.75.1 (DockerNAT)
Client: container running on the host machine.
From a quick google:
For Windows containers this component is not used and containers and their ports are only accessible via the NATed IP address.
With respect to BACnet, this is going to put you in a world of hurt. You will have to use a BACnet BBMD (Broadcast Management Device) with NAT support in your container to achieve this, and your BACnet client will have to register as a BACnet Foreign Device. The BACnet Stack at SourceForge does seem to have some NAT support (the code seems to be there, but I have never tested it in its original form).
So what you are seeing is 'expected', but your solution is going to require that you become much more familiar with BACnet BBMDs than you ever want to be. Read the BACnet specification carefully. Good luck.