I have a legacy system. It contains a number of servers running on Linux and a number of GUI clients running on Windows. All the components (servers and clients) are on the same network and communicate with each other directly. Each component is identified by IP address and port number.
For development purposes, I now run the servers in containers using Compose on a Linux host. The servers communicate with each other within the Docker network without any issues. However, I am having trouble making the clients work with the servers. Port mapping doesn't help here, since a client needs to talk to many servers on the same or different ports. What I am asking is whether it is possible to treat the Windows clients as part of the Docker network. I have read about tools such as Weave Net, but haven't found anything useful. Any suggestions?
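One direction I have been considering, as a sketch only: route the Compose subnet from the Windows machines to the Linux host, so the clients can reach container IPs directly. The addresses below are assumptions (compose network 172.20.0.0/16, Linux host at 192.168.1.10); substitute your own.

    # On the Linux host: allow forwarding into the bridge network
    sudo sysctl -w net.ipv4.ip_forward=1
    # Permit LAN traffic into the compose subnet (Docker consults DOCKER-USER first)
    sudo iptables -I DOCKER-USER -s 192.168.1.0/24 -d 172.20.0.0/16 -j ACCEPT

    # On each Windows client (elevated prompt): send the Docker subnet via the Linux host
    route ADD 172.20.0.0 MASK 255.255.0.0 192.168.1.10 -p

The containers would also need stable addresses (fixed IPs in the compose file, or some name resolution the clients can use), otherwise the clients have no reliable target.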
Related
One of my RPis (a 3B+, 192.168.0.3) is running out of memory, so I want to remove the NginxProxyManager container from it to save some memory.
I put a couple of the containers running on the RPi (192.168.0.3) behind another NginxProxyManager running on my main server (192.168.0.2). So far so good.
The only problem I have with this solution is that the containers can still be accessed with the RPi's IP and port number from any device on the same network, and, if I understand correctly, the traffic between NPM on my main server and the RPi containers is not encrypted (some containers do not use HTTPS).
The connection is on my local LAN, so it should be secure and there should not be any snooping, but I would still like to create some kind of direct tunnel between 192.168.0.2 and 192.168.0.3 (certain ports and containers only).
What would be the proper way to allow ONLY my main server access to certain ports on my RPi?
Or am I worrying too much? ;-)
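For what it's worth, the direction I have been eyeing for the port restriction is a couple of iptables rules on the RPi, sketched below with placeholder ports 8080 and 8443. Since Docker publishes ports with its own iptables rules (and bypasses ufw), the filtering has to go into the DOCKER-USER chain; --ctorigdstport matches the port as the client originally dialed it, before Docker's DNAT rewrites it.

    # Drop traffic to the published ports unless it comes from the main server
    sudo iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8080 --ctdir ORIGINAL ! -s 192.168.0.2 -j DROP
    sudo iptables -I DOCKER-USER -p tcp -m conntrack --ctorigdstport 8443 --ctdir ORIGINAL ! -s 192.168.0.2 -j DROP

For the encryption concern, a WireGuard or SSH tunnel between 192.168.0.2 and 192.168.0.3 would cover the plain-HTTP hops, though on a single trusted LAN that may indeed be worrying too much.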
I am trying to connect my BACnet client, which has been containerized, to the BACnet server running on the host machine. I am using Docker for Windows on Windows 10 (the host machine) with Linux containers.
I have tried the following:
a. Publishing port 47808 for the client container with the run command.
b. Running the container with --network=host to access services on localhost.
c. Specifying the gateway IP as the server's IP address with the run command.
d. Running the container in the same subnet as my server.
e. Running the container with the host IP specified and the ports published.
My BACnet server, taken from https://sourceforge.net/projects/bacnet/, always connects to DockerNAT, 10.0.75.1. Any idea why this happens? The server application is not a container but an executable file.
Server IP: 10.0.75.1 (DockerNAT)
Client: container running on the host machine.
From a quick google:
"For Windows containers this component is not used and containers and their ports are only accessible via the NATed IP address."
With respect to BACnet, this is going to put you in a world of hurt. You will have to use a BACnet BBMD with NAT support in your container to achieve this, and your BACnet client will have to register as a BACnet Foreign Device. The BACnet Stack at SourceForge does seem to have some NAT support (the code seems to be there, but I have never tested it in its original form).
So what you are seeing is 'expected', but your solution is going to require that you become much more familiar with BACnet BBMDs than you ever wanted to be. Read the BACnet specification carefully. Good luck.
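If it helps, a rough sketch of the foreign-device direction: as far as I recall, the demo apps in that SourceForge stack read a few environment variables to register with a BBMD. Treat the variable names, the address and the image name below as assumptions and check the bip-init code in your copy of the stack.

    # Register the containerized client as a foreign device with a BBMD
    # (10.0.75.1 and my-bacnet-client are placeholders)
    docker run --rm -p 47808:47808/udp \
      -e BACNET_BBMD_ADDRESS=10.0.75.1 \
      -e BACNET_BBMD_PORT=47808 \
      -e BACNET_IP_PORT=47808 \
      my-bacnet-client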
I have multiple Docker stacks for different web applications running on the same machine.
I'm already using nginx-proxy to access each app with its own host name.
This spares me from publishing a different HTTP port for each app (see the sketch below for what that looks like today).
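For context, the HTTP side currently looks roughly like this; the image and host names are simplified placeholders, and VIRTUAL_HOST is the variable jwilder/nginx-proxy keys on.

    # One reverse proxy listening on port 80, routing by host name
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy
    # Each app only sets VIRTUAL_HOST; no per-app port is published
    docker run -d -e VIRTUAL_HOST=app1.example.local my-app1-image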
Is there any way of doing something similar with other services, like databases?
I'd like to be able to connect to each database with my tools (like MySQL Workbench and such) without explicitly publishing a different port for each database.
EDIT: the macvlan network driver seems to be an option on a Linux host, but it's not available on Windows.
In order to debug and set up a pair of Docker stacks (one a client and the other a server, along with the private services each of them requires) using Docker Compose, I'm running them locally to make sure they're functioning correctly.
They will eventually be communicating across the internet, with an nginx server on the server side acting as a reverse proxy. But for now, I'm pointing the client at the server container's 172.19.0.3:1234 address.
I'm able to curl/ping both the client container and the server container from the host machine, but running an interactive session in the client and trying to curl the server's 172.19.0.3:1234 address just times out.
I feel the 172.x address is being used incorrectly here. Is there some obvious issue with what I've described so far? What is the better approach for what I'm trying to do?
It seems that after doing some searching, I am in a similar situation to this question: Communicating between Docker containers in different networks on the same host.
I've decided to use docker network connect to connect the client to the server's network for my purposes.
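A minimal sketch of what that looks like for my stacks; the network and container names below are placeholders, the real ones show up in docker network ls and docker ps.

    # Attach the client container to the server stack's network
    docker network connect server_default client_app_1
    # The client can then reach the server by service name instead of a hard-coded 172.x address
    docker exec client_app_1 curl http://server:1234/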
I'm trying to use Apache Kafka, i.e. a version usable with Docker (wurstmeister/kafka-docker), in a virtual machine, and to connect to the Kafka broker in the containers through the host system of the VM.
I will describe the setup in more detail:
My host system is Ubuntu 64-bit 14.04.3 LTS (kernel 3.13) running on an ordinary computer. I have a complete and complex structure of various Docker containers interacting with each other. In order not to disturb, or better said, in order to encapsulate this whole structure, it is not an option to run the Docker images directly on the host system. Another reason is the need for different Python libraries on the host, which interfere with the Python library versions required by docker-compose (which is used to start the different Docker images).
Therefore the desired solution is to set up a virtual machine with VirtualBox (guest system: Ubuntu 16.04.1 LTS) and to run the Docker environment entirely inside this VM. This has the distinct advantage that the VM itself can be configured exactly according to the requirements of the Docker structure.
As mentioned above, one of the Docker images provides Kafka and ZooKeeper functionality for communication and messaging. That means the .yaml file which sets up a container running this image forwards all necessary Kafka and ZooKeeper ports to the host system of the Docker environment (which is the guest system of the VirtualBox VM). In order to also make the Docker environment visible on the host system, I forwarded all the ports via the network settings in VirtualBox (Network -> Adapter NAT -> Advanced -> Port Forwarding). The behaviour of the system is as follows:
When I run the Docker environment (including Kafka), I can connect, consume and produce from the VM's Kafka using the recommended standard Kafka shell scripts (producer & consumer API).
When I run a Kafka and ZooKeeper server ON THE VM guest system directly, I can connect from outside the VM (the host) and produce and consume via the producer & consumer API.
When I run Kafka in the Docker environment, I can connect from the host system outside the VM, meaning I can see all topics and get info about topics, while also seeing some debug output from Kafka and ZooKeeper in the containers.
What is unfortunately not possible is to produce or consume any message from the host system to/from the Docker Kafka. The producer API gives me a "Batch expired" exception, and the consumer returns a "ClosedChannelException".
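For reference, by producer & consumer API I mean calls roughly like these; host, port and topic are placeholders, and depending on the Kafka version the console consumer may take --zookeeper instead of --bootstrap-server.

    # Produce from the outer host against the forwarded port
    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
    # Consume from the outer host
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning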
I have done a lot of googling and found a lot of hints on how to solve similar problems. Most of them refer to the advertised.host.name parameter in Kafka's server.properties, which is accessible via KAFKA_ADVERTISED_HOST_NAME in the .yaml file. https://stackoverflow.com/a/35374057/3770602 e.g. refers to that parameter when both of the errors mentioned above occur. Unfortunately none of those scenarios features both Docker AND VM functionality.
Furthermore, modifying this parameter does not seem to have any effect at all. Although I am not very familiar with Docker and Kafka, I can see a problem here: the Kafka consumer and producer would be told to use the local IP of the Docker environment as the broker address, which is 10.0.2.15 in the NAT case, and this IP is not visible from outside the VM. Consequently the solution would probably be a changed setup using bridged mode in the VirtualBox networking. The strange thing there is that a bridged connection (of course) gives the VM its own IP via DHCP, which then leads to a Docker Kafka that is not accessible from either the VM or the host system. This last behaviour seems very odd to me.
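For completeness, this is the kind of configuration I have been experimenting with, reduced to a docker run call. The address 192.168.56.101 only stands in for a bridged or host-only VM address that the outer host could actually reach; the variable names are the ones the wurstmeister image maps into server.properties.

    # The advertised address must be one the *external* client can dial,
    # not the container's or the VM's NAT address.
    docker run -d --name kafka -p 9092:9092 \
      -e KAFKA_ADVERTISED_HOST_NAME=192.168.56.101 \
      -e KAFKA_ADVERTISED_PORT=9092 \
      -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
      wurstmeister/kafka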
To conclude, my question is whether somebody has experience with this or a similar scenario and can tell me how to configure the Docker Kafka, the VirtualBox VM and the host system. I have really tried a lot of different settings for the consumer and producer calls, with no success. Of course Docker, Kafka, or Docker-and-Kafka experts are welcome to answer as well.
If you need additional information, or if you can provide some hints, please let me know.
Thanks in advance