Securing docker containers between 2 servers

One of my RPis (a 3B+, 192.168.0.3) is running out of memory, so I want to remove the NginxProxyManager docker container from it to save some memory.
I put a couple of the containers running on the RPi (192.168.0.3) behind another NginxProxyManager running on my main server (192.168.0.2). So far so good.
The only problem I have with this solution is that the containers can be reached via the RPi's IP and port number from any device on the same network, and, if I understand correctly, the traffic between NPM on my main server and the RPi's containers is not encrypted (some containers do not use HTTPS).
The connection stays on my local LAN, so it should be secure and there should not be any snooping, but I would still like to create some kind of direct tunnel between 192.168.0.2 and 192.168.0.3 (certain ports and containers only).
What would be the proper way to allow ONLY my main server to reach certain ports on my RPi?
Or am I worrying too much? ;-)
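In case it helps, one common approach is a firewall rule on the RPi itself. Note that traffic to Docker-published ports bypasses the normal INPUT chain, so the rule belongs in Docker's DOCKER-USER chain. A minimal sketch, assuming a container published as 8080:8080 (the port is a placeholder for your setup); for the encryption part, an SSH tunnel from the main server is one simple option:

    # On the RPi: drop traffic to the container port unless it comes from the main server.
    # Note: packets are DNATed before they reach DOCKER-USER, so match the container's
    # internal port (here both sides are 8080).
    sudo iptables -I DOCKER-USER -p tcp --dport 8080 ! -s 192.168.0.2 -j DROP

    # On the main server: forward a local port to the RPi over SSH, so NPM can proxy
    # to localhost:8080 and the hop between the two machines is encrypted.
    ssh -N -L 8080:localhost:8080 pi@192.168.0.3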

Related

How to connect Windows application to Docker network?

I have a legacy system. It contains a number of servers running on Linux and a number of GUI clients running on Windows. All the components (servers and clients) are on the same network and communicate with each other directly. They are identified by IP and port number.
For development purposes, I now run the servers in containers using compose on a Linux host. The servers communicate with each other within the docker network without any issues. However, I have trouble making the client work with the servers. Port mapping doesn't work here, since a client needs to talk to many servers on different (or the same) ports. What I am asking is whether it is possible to treat the Windows client as part of the docker network. I read about tools such as Weave Net, etc., but haven't found anything useful. Any suggestions?
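Not sure it applies to your setup, but one sketch of the idea, assuming the compose network is 172.20.0.0/16 and the Linux host's LAN address is 192.168.1.50 (both placeholders): give Windows a static route to the container subnet via the Linux host, and let the host forward the packets. Docker's own firewall rules may still drop the forwarded traffic, in which case an ACCEPT rule in the DOCKER-USER chain is needed.

    # On the Linux host: enable packet forwarding.
    sudo sysctl -w net.ipv4.ip_forward=1
    # If needed, let forwarded traffic reach the compose bridge (br-xxxx is a
    # placeholder for your compose network's bridge interface; see `ip link`).
    sudo iptables -I DOCKER-USER -o br-xxxx -j ACCEPT

    # On the Windows client (elevated prompt): route the container subnet via the host.
    route ADD 172.20.0.0 MASK 255.255.0.0 192.168.1.50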

How to make MQTT broker docker image accessible to devices in LAN?

I have a docker-compose setup with a few containers running. I start them via the docker-compose up command. On my device everything works well with localhost, but I want other devices on the same network to be able to access the MQTT broker as well. How do I do that?
Currently, in my code I do this:
ws://localhost:9001
But since localhost applies only to the device that runs docker, another laptop won't be able to use it. How do I solve that?
You use the LAN IP address of your machine (the one hosting the docker containers) in place of localhost.
We have no way of knowing what that address may be, but it will likely start with 192.168.x.x or maybe 10.x.x.x.
By default, Docker has a "bridge" network that will bridge your container to the outside world. Just use the IP address of the computer where your MQTT broker container is running, together with port 9001, and it should work fine.
If you need to run it on an internal Docker network, you will have to use something like an ADC (application delivery controller) or a TCP proxy of some sort to allow access to it.
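For reference, a minimal compose fragment, assuming an eclipse-mosquitto broker with a websocket listener on 9001 and a host LAN IP of 192.168.1.20 (the IP is a placeholder):

    services:
      mqtt:
        image: eclipse-mosquitto
        ports:
          - "9001:9001"   # published on all host interfaces, so the LAN can reach it

    # other devices on the LAN then connect with the host's address instead of localhost:
    # ws://192.168.1.20:9001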

Open TCP connection to specific node in docker swarm

Question:
How can I access specific containers inside a docker swarm network from outside the network?
I don't need to access arbitrary ports, the exposed container ports are fine, but I need to be able to connect to a specific container, not just any container I am routed to via load balancing.
As in, I can currently do:
curl localhost:8582/service_id
And get something like:
1589697532253.0.8570331623512102
But the result varies, because it is load balanced to a different container each time I make the request. I only need this for debugging; I usually want the load-balancing behavior, but when there is an issue with a specific container, it is essential that I make requests only to that container.
I can do it within a container inside the network, but it is a lot easier to debug from my local machine, instead of inside a container.
Environment:
I am not sure if it is relevant, but I am on Windows, running Docker Desktop, engine v19.03.8.
Things I tried:
I tried tunneling into the docker network with WireGuard; however, I believe that is a non-starter because my host OS is Windows, and I can't find any WireGuard images that support non-Linux host OSes (and I'm not sure that is even technically possible).
When I run docker network inspect ingress -v I can see there appear to be IPs associated with each container (10.0.0.12, 10.0.0.13) which differ from the IPs on the overlay network (10.0.18.7, 10.0.18.8), but when I try to access my exposed port via any of those IPs, the connection attempt is ignored and does not connect.
I tried adding a specific network route to make sure the packets were going to docker, by forcing all packets in the /24 address range to go through the docker gateway, but that didn't work either (route add -p 10.0.0.0 MASK 255.255.255.0 192.168.8.177 METRIC 1 IF 49).
Any suggestions would be greatly appreciated!
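One workaround that may cover the debugging case: look up the task's IP on its overlay network and curl it from a throwaway container attached to that network. This needs the network to have been created with --attachable; the service and network names below are placeholders, and 8582 is assumed to be the port the service listens on inside the container.

    # list the tasks of the service and pick the one you want
    docker service ps my_service
    # show the task's addresses on its overlay network(s)
    docker inspect --format '{{range .NetworksAttachments}}{{.Addresses}}{{end}}' <task-id>
    # hit that task directly from a one-off container on the same network
    docker run --rm --network my_overlay curlimages/curl http://10.0.18.7:8582/service_id

This is still technically "from inside a container", but since it is a single docker run from your local shell, it is close to debugging from the local machine.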

Flask in docker, access other flask server running locally

After finding a solution for this problem, I have another question: I am running a flask app in a docker container (my web map), and on this map I want to show tiles served by a (flask-based) Terracotta tile server running in another docker container. The two containers are on the same docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
If you are running your docker containers on a remote machine like EC2, then you need not worry about a port being open to the public, as by default ports are closed on EC2 and similar services. You just need to open the port on which you are running your app; you can use the AWS console for that.
If you are running your docker container locally, or on some server for which you don't have console access, then you can use some kind of firewall to open or close a port. I personally prefer UFW on Ubuntu systems. You can allow a port with a simple command such as sudo ufw allow 9000, which permits incoming tcp packets on port 9000. Similarly, you can deny incoming packets to a port. You can also open a port to a certain IP only (like your own) using sudo ufw allow from <ip address> to any port 9000.
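On the original question (serving the tiles without opening the tile server's port): one pattern is to keep a single public port and have whatever answers it proxy tile requests to the Terracotta container over the internal docker network. A sketch of the idea as an nginx location block, assuming the tile container is reachable under the service name terracotta on port 5000 (both placeholders):

    location /tiles/ {
        # forwarded over the internal docker network; the tile port is never published
        proxy_pass http://terracotta:5000/;
    }

The map then requests its tiles from /tiles/... on the already-open port, and the tile server stays unreachable from outside. The same idea can be implemented inside the flask app itself with a small proxy route, at the cost of funneling tile traffic through flask.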

Connecting to BACnet Server on Host Machine Using Client Container

I am trying to connect my BACnet client, which has been containerized, to the BACnet server running on the host machine. I am using Docker for Windows on Windows 10 (the host machine) with Linux containers.
I have tried the following:
a. Publishing port 47808 for the client container with the run command.
b. Running the container with network=host, to access services of localhost.
c. Tried specifying the gateway IP as the server's IP address with run command.
d. Running the container in the same subnet as my server
e. Running the container with the host IP specified and the ports published.
My BACnet server, taken from https://sourceforge.net/projects/bacnet/, always connects to the DockerNAT, 10.0.75.1. Any idea why this happens? The server application is not a container but an executable file.
Server IP: 10.0.75.1 (DockerNAT)
Client container running on the host machine.
From a quick google:
"For Windows containers this component is not used and containers and their ports are only accessible via the NATed IP address."
With respect to BACnet, this is going to put you in a world of hurt. You will have to use BACnet BBMD with NAT support in your container to achieve this, and your BACnet Client will have to register as a BACnet Foreign Device. The BACnet Stack at SourceForge does seem to have some NAT support (the code seems to be there but I have never tested it in its original form).
So what you are seeing is 'expected', but your solution is going to require that you become much more familiar with BACnet BBMDs than you ever want to be. Read the BACnet specification carefully. Good luck.
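Independent of the BBMD point, one thing worth double-checking in option (a): BACnet/IP runs over UDP, and docker's -p flag publishes TCP unless UDP is stated explicitly. A sketch (the image name is a placeholder):

    docker run --rm -p 47808:47808/udp my-bacnet-client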
