Can avahi-daemon send mDNS service "down" on OS shutdown? - avahi

I have set up an avahi daemon to publish my service on an RPi, and have successfully found the service on a different PC using an mDNS client library. The mDNS library (JavaScript) has event callbacks for the service being "up" or "down". The service is specified by a .conf in /etc/avahi/services/.
When the RPi is shut down, I would like the PC-side client to know. Is there a way to have the avahi daemon (running on the RPi) broadcast that the service is going "down"?
My thinking is that the avahi daemon (presumably) gets an OS signal that the RPi is shutting down, so would it make sense for the daemon to update the mDNS clients that the service is going down? I don't see any documentation to support this, so I guess this is not reasonable.
Maybe the way forward is for my application to broadcast that the service is going down. I have not investigated having my application do the mDNS publishing itself, because I was relying on the avahi daemon to do it. But as I write this, that seems like the approach I should try.
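One way to try this, in case it helps anyone: publish the service from a process you control instead of a static file, so that stopping that process withdraws the record and (as far as I understand avahi) triggers "goodbye" announcements that clients see as the service going down. A minimal sketch using avahi-publish from avahi-utils; the service name, type, port and TXT record below are made up:

#!/bin/sh
# Publish the service from the application side instead of /etc/avahi/services/.
# Name, type, port and TXT record are placeholders for this sketch.
avahi-publish -s "MyRPiService" _myservice._tcp 12345 "path=/api" &
PUBLISH_PID=$!
# When this script is stopped (e.g. by systemd during shutdown), withdraw the
# service so avahi can announce its removal to clients.
trap 'kill $PUBLISH_PID' TERM INT
wait $PUBLISH_PID

Run it under systemd (or do the publishing from the application itself via avahi's D-Bus API) so it is stopped cleanly before the network goes down during shutdown.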

Related

Securing docker containers between 2 servers

One of my RPis (a 3B+, 192.168.0.3) is running out of memory, so I want to remove the NginxProxyManager container running in Docker on that RPi to save some memory.
I put a couple of the containers running on the RPi (192.168.0.3) behind another NginxProxyManager running on my main server (192.168.0.2). So far so good.
The only problem with this solution is that the containers can still be reached via the RPi's IP and port number from any device on the same network, and if I understand correctly, the traffic between NPM on my main server and the RPi containers is not encrypted (some containers do not use HTTPS).
The connection is on my local LAN, so it should be secure and there should not be any snooping, but I would still like to create some kind of direct tunnel between 192.168.0.2 and 192.168.0.3 (certain ports and containers only).
What would be the proper way to allow ONLY my main server to reach certain ports on my RPi?
Or am I worrying too much? ;-)
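For the "only my main server may reach these ports" part, a firewall rule on the RPi may already be enough. A hedged sketch (the port 8080 and interface eth0 are assumptions, and this is not a complete firewall setup): because Docker publishes ports with its own NAT rules, traffic to published ports goes through the FORWARD chain rather than INPUT, so the place to filter is the DOCKER-USER chain, and the conntrack match lets the rule see the port as it was published on the host (before Docker's DNAT):

# Drop traffic to published port 8080 on eth0 unless it comes from the main server.
# Port, interface and addresses are placeholders; adjust to your containers.
iptables -I DOCKER-USER -i eth0 -p tcp \
  -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL \
  ! -s 192.168.0.2 -j DROP

Rules added this way are not persistent across reboots, so you would also want iptables-persistent or your distribution's equivalent.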

Why can my Docker app receive UDP data without publishing the port?

I'm learning Docker networking. I'm using Docker Desktop on Windows.
I'm trying to understand the following observations:
First setup (data from container to host)
I have a simple app running in a container. It sends one UDP datagram to a specific port on the host (using "host.docker.internal").
I have a corresponding app running on the host. It listens on the port and is supposed to receive the UDP datagram.
That works without publishing any ports in Docker (expected behavior!).
Second setup (data from host to container)
I have a simple app on the host. It sends one UDP datagram to a specific port on the loopback network (using "localhost").
I have a corresponding app running in a container. It listens on the port and is supposed to receive the UDP datagram.
That works only if the container is run with the option -p port:port/udp (expected behavior!).
Third setup (combination of the other two)
I have an app "Requestor" running in a container. It sends a UDP request-message to a specific port on the host and then wants to receive a response-message.
I have a corresponding app "Responder" running on the host. It listens to the port and is supposed to receive the request-message. Then it sends a UDP response-message to the endpoint of the request-message.
This works as well, and - that's what I don't understand - without publishing the port for the response-message!
How does this work? I'm pretty sure there's some basic networking-knowledge that I simply don't have already to explain this. I would be pleased to learn some background on this.
Sidenote:
Since I can run curl www.google.com successfully from inside a container, I realize that a container definitely does not have to publish ports in order to receive data. But TCP is involved there to establish a connection; UDP, on the other hand, is "connectionless", so that can't be the (whole) explanation.
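(For anyone who wants to reproduce the third setup without writing two apps, here is a rough sketch using socat; the port 5005 and the alpine/socat image are arbitrary choices, not part of the original question.)

# On the host: a minimal "Responder" that echoes every UDP datagram back to
# the endpoint it came from (port 5005 is arbitrary).
socat UDP4-RECVFROM:5005,fork EXEC:cat

# "Requestor" in a container, with no ports published: send one datagram to
# the host and wait up to 2 seconds for the reply.
echo ping | docker run --rm -i alpine/socat -t 2 - UDP4:host.docker.internal:5005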
After further investigation, NAT seems to be the answer; more precisely, the kernel's connection tracking: the outgoing request creates a conntrack entry for the UDP "flow", and the reply addressed to that source port is translated back to the container even though nothing was published.
According to these explanations, a NAT is involved between the loopback interface and the docker0 bridge.
This is less recognizable with Docker Desktop for Windows because of the following (source):
Because of the way networking is implemented in Docker Desktop for Windows, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
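If you want to see that entry, you can list the connection-tracking table. On Docker Desktop the daemon runs inside a VM, so one way (an untested sketch; the alpine image and the extra privileges are assumptions) is to run the tool in a container that shares the VM's network namespace:

# List UDP conntrack entries inside the Docker Desktop VM. While your request's
# entry exists, replies to its source port are translated back to the container.
docker run --rm --net=host --privileged alpine \
  sh -c 'apk add --no-cache conntrack-tools >/dev/null && conntrack -L -p udp'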

SSH Tunnel within docker container

I have a client running in a Docker container that subscribes to an MQTT broker and then writes the data into a database.
To connect to the MQTT broker, I have to set up SSH port forwarding.
While developing the client on my local machine the following worked fine:
ssh -L 9000:localhost:1883 <user>@<ip-of-server-running-broker>
The client is then configured to subscribe to the MQTT broker via localhost:9000.
This all works fine on my local machine.
Within the container it won't work, unless I run the container with --net=host, but I'd rather not do that due to security concerns.
I tried the following:
Create docker network "testNetwork"
Run a ssh_tunnel container within "testNetwork" and implement port forwarding inside this container.
Run the database_client container within "testNetwork" and subscribe to the MQTT broker via the bridge network (e.g. "ssh_tunnel.testNetwork:<port>").
(I want 2 separate containers for this because the IP address will have to be altered quite often and I don't want to re-build the client container all the time.)
But all of my attempts have failed so far. The forwarding seems to work (I can access a shell on the server from inside the ssh container), but I haven't found a way to actually subscribe to the MQTT broker from within the client container.
Maybe this is actually quite simple and I just don't see how it works, but I've been stuck on this problem for hours now...
Any help or hints are appreciated!
The solution was actually quite simple and works without using --net=host.
I needed to bind to 0.0.0.0 and use the gateway ports option (-g) to allow remote hosts (the database client) to connect to the forwarded ports.
ssh -g -L <hostport>:localhost:<mqtt-port> <user>@<remote-ip>
Other containers within the same Docker bridge network can then simply use the connection string <name-of-ssh-container>:<hostport>.
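For completeness, a rough sketch of the two-container layout described above (the image names, key path, ports and hosts are placeholders, not a tested configuration):

# Create the shared bridge network.
docker network create testNetwork

# Tunnel container: keeps an SSH local forward open; -g lets other containers
# on the bridge network connect to the forwarded port, -N opens no remote shell.
docker run -d --name ssh_tunnel --network testNetwork \
  -v ~/.ssh:/root/.ssh:ro \
  some-ssh-client-image \
  ssh -N -g -L 9000:localhost:1883 <user>@<ip-of-server-running-broker>

# Client container: reaches the broker through the tunnel container by name.
docker run -d --name database_client --network testNetwork \
  -e MQTT_HOST=ssh_tunnel -e MQTT_PORT=9000 \
  my-database-client-image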

routing broadcast UDP in/out of Docker for Mac container

I want to run an application (the OLA server, olad) inside a container under Docker for Mac. (Version 18.06.1-ce-mac73 on Mojave, all up-to-date.) The particular OLA configuration I am using (for the Art-Net protocol) works by sending and receiving UDP broadcast data over port 6454 on a particular physical ethernet interface on the host, which is in turn connected to an external device under control. Normally, when starting the olad server, one specifies the interface or IP address on which it should send/receive the broadcast messages.
My struggle is getting the UDP messages to and from the interface from inside the container. I don't appear to have access to that physical interface or network inside the Docker for Mac container, even if I run with --network host. My understanding is that this is because of a quirk of the way Docker for Mac is implemented, with an extra VM between my container and the hardware. That VM sees the hardware, but I don't.
Simply running the docker instance with -p 6454:6454/udp doesn't work, either, maybe unsurprisingly. I could see where that might allow incoming traffic to the container to find its way to the server, but the server inside still can't find the outside network/device in the other direction. And I'm not sure how OSX would necessarily get that data from the interface to the docker bridge anyway.
How can I get direct, bidirectional access to that interface or network from inside the container? Or, if I cannot, is there some kind of workaround, maybe via socat, where I could tunnel that traffic in through a Unix socket shared between the host and the container?
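One possible workaround is sketched below. It only covers one direction (Art-Net coming in from the network to olad), it is untested, and it swaps the Unix-socket idea for a published TCP port, since Docker for Mac can forward TCP across its VM boundary with -p. The ports are arbitrary and socat must be available both on the host and inside the image:

# On the macOS host: receive broadcast Art-Net on UDP 6454 and push each
# datagram into the container over TCP (one connection per datagram, so this
# is a proof of concept rather than something fast).
socat UDP4-RECVFROM:6454,broadcast,fork TCP4:127.0.0.1:16454

# Inside the container (started with -p 16454:16454): turn the TCP data back
# into UDP datagrams for olad listening on 127.0.0.1:6454.
socat TCP4-LISTEN:16454,reuseaddr,fork UDP4-SENDTO:127.0.0.1:6454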

Is there a way to "hibernate" a linux container

Say you had a bunch of WordPress containers running on a machine, with each application sitting behind a cache. Is there a way to stop a container and start it only if the URL is not found in the cache?
systemd provides a socket activation feature that can start a service on an incoming TCP connection and proxy the connection in. Atlassian has a detailed article on using it with Docker; a rough sketch of the socket-activation side is below.
I don't believe systemd has the ability to stop the service when there is no activity. You will need something that can shut the service down once there are no connections left being served. This could be done in the WordPress app container itself or externally via systemd on the host.
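A rough sketch of the socket-activation side, written as shell that drops in the unit files (the unit names, ports and image are placeholders; the path to systemd-socket-proxyd varies by distribution, and something else still has to stop the units when they go idle):

# Socket unit: systemd listens on :8080 and only starts the proxy service
# when the first connection arrives.
cat > /etc/systemd/system/wordpress-proxy.socket <<'EOF'
[Socket]
ListenStream=0.0.0.0:8080

[Install]
WantedBy=sockets.target
EOF

# Proxy service: hands the accepted socket to systemd-socket-proxyd, which
# forwards the traffic to the container; starting it pulls the container up.
cat > /etc/systemd/system/wordpress-proxy.service <<'EOF'
[Unit]
Requires=wordpress-container.service
After=wordpress-container.service

[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:8081
EOF

# Container service: the actual WordPress container, published only on localhost.
cat > /etc/systemd/system/wordpress-container.service <<'EOF'
[Service]
ExecStartPre=-/usr/bin/docker rm -f wordpress
ExecStart=/usr/bin/docker run --rm --name wordpress -p 127.0.0.1:8081:80 wordpress
ExecStop=/usr/bin/docker stop wordpress
EOF

systemctl daemon-reload
systemctl enable --now wordpress-proxy.socket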
Some more socket reading from the systemd developer:
http://0pointer.de/blog/projects/socket-activated-containers.html
http://0pointer.de/blog/projects/socket-activation2.html
http://0pointer.de/blog/projects/socket-activation.html

Resources