docker container as router to route traffic to all docker containers - docker

I am working on a project which requires me to have 3 containers on 3 different docker networks, and to add another docker container in front of these 3 as a router. (I am trying to simulate a network and capture packets at each scope.) How do I set up the router container so that it is aware of all the other 3 networks and just sits there routing traffic along? I have looked around, but have had no luck getting through this. Any help would be appreciated.
I've tried setting up a router container with just ubuntu and using it as the network for the other containers via --net=router-container, but that shares the router container's network with them, giving them all the same IP, which is essentially not what I need. The setup I am looking for would be something like this.
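Roughly, in commands, the kind of layout I mean looks like the sketch below (all names are placeholders, and this only shows attaching the router to the three networks, not the routing/capture itself, which is the part I'm stuck on):

docker network create netA
docker network create netB
docker network create netC

# the would-be router, attached to all three networks, with forwarding enabled
docker run -d --name router --network netA --cap-add NET_ADMIN \
    --sysctl net.ipv4.ip_forward=1 ubuntu sleep infinity
docker network connect netB router
docker network connect netC router

# one application container per network
docker run -d --name app1 --network netA my-app1-image
docker run -d --name app2 --network netB my-app2-image
docker run -d --name app3 --network netC my-app3-image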

Related

Docker Container Containing Multiple Services

I am trying to build a container containing 3 applications, for example:
Grafana;
Node-RED;
NGINX.
So I will just need to expose one port, for example:
NGINX reverse proxy on port 3001/grafana redirects to Grafana on port 3000, and
NGINX reverse proxy on port 3001/nodered redirects to Node-RED on port 1880.
Does this make sense in your view? Or is this architecture not feasible compared to docker compose?
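For illustration, the routing described above might look roughly like the NGINX server block below (the container names grafana and nodered are assumptions, and both Grafana and Node-RED usually also need their own base/sub-path settings adjusted to be served under a prefix):

server {
    listen 3001;

    location /grafana/ {
        proxy_pass http://grafana:3000/;
    }

    location /nodered/ {
        proxy_pass http://nodered:1880/;
    }
}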
If I understand correctly, your concern is about opening only one port publicly.
For this, you would be better off building 3 separate containers, each with their own service, all on the same docker network. You could wire your services together the way you described within that virtual network instead of within the same container.
Why? Because containers are specifically designed to hold the environment for a single application, in order to provide isolation and reduce compatibility issues, with all the network configuration done at a higher level, outside of the containers.
Having all your services inside the same container thwarts the advantages of containerized applications mentioned above. It's almost like you're not even using containers.
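As a rough docker-compose sketch of that layout (image tags and the config mount are assumptions), only nginx publishes a port; the three services share the compose default network, so nginx can reach grafana:3000 and nodered:1880 by service name:

version: "3"
services:
  grafana:
    image: grafana/grafana
  nodered:
    image: nodered/node-red
  nginx:
    image: nginx
    ports:
      - "3001:3001"
    volumes:
      # a server block like the one sketched under the question
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro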

Can't access local docker container when using VPN

I have a couple docker containers running on my local machine (pgadmin, jupyter notebooks etc) and have them mapped to various ports. I can happily navigate to localhost:10100 to get to the pgadmin web interface, for example. The issue is that when I connect to the work VPN I am unable to get to any locally running containers. I get an "ERR_CONNECTION_RESET" error on chrome.
With the VPN on I've tried:
localhost:10100 (also tried 127.0.0.1)
my-hostname:10100
192.168.0.X:10100 (the wifi interface address)
192.168.19.X:10100 (the VPN TUN interface address)
I can ping any of the above addresses and get a response and can successfully use them when the VPN is disabled. Using PulseVPN, Ubuntu 21.10, and fairly recent docker/docker-compose if that helps.
You can try to run the containers with the host network by adding the flag:
--network host
to the end of each command when you first start the containers.
And if that does not work, you can try it with:
--network none
Turns out there is a combination of issues causing problems. I haven't found a bulletproof solution yet, but here are some breadcrumbs for someone else:
The default docker network subnet was overlapping with my work subnet.
The VPN route was set to have the lowest cost, therefore all traffic was being routed through it.
Changing the default subnet resulted in the containers working for around 5 minutes. Then the low-cost route was discovered and my traffic went through it instead.
My guess is that I have to fiddle with my network routing so that the docker networks are separated from the work VPN. It's been a decade since my CCNA, so I can't remember how to do this offhand...
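A minimal sketch of the subnet change mentioned above, done via /etc/docker/daemon.json (10.200.0.0/16 here is just an assumption of a range that is free on the work side; restart the Docker daemon afterwards, and note this pool only applies to newly created networks, while the default docker0 bridge is set separately via the "bip" key):

{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}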

Open TCP connection to specific node in docker swarm

Question:
How can I access specific containers inside a docker swarm network from outside the network?
I don't need to access arbitrary ports, the exposed container ports are fine, but I need to be able to connect to a specific container, not just any container I am routed to via load balancing.
As in, I can currently do:
curl localhost:8582/service_id
And get something like:
1589697532253.0.8570331623512102
But the result varies, because it is load balanced to a different container each time I make the request. I only need this for debugging, I usually want the load balancing behavior, but when there is an issue with a specific container it is essential that I make requests only to that container.
I can do it within a container inside the network, but it is a lot easier to debug from my local machine, instead of inside a container.
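(For reference, that in-network debugging step looks roughly like this, assuming the service runs on a user-defined overlay created with --attachable; the network name, the 10.0.18.7 task IP, and the container-side port are placeholders for whatever your setup uses:)

docker run --rm --network my_overlay curlimages/curl http://10.0.18.7:<container-port>/service_id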
Environment:
I am not sure if it is relevant, but I am on Windows, running Docker Desktop, engine v19.03.8.
Things I tried:
I tried tunneling into the docker network with WireGuard, however I believe that is a non-starter because my host OS is Windows, and I can't find any WireGuard images that support non-Linux host OSes (and I'm not sure that is even technically possible).
When I run docker network inspect ingress -v I can see there appears to be IPs associated with each container (10.0.0.12, 10.0.0.13) which differ from the IPs on the overlay network (10.0.18.7, 10.0.18.8), but when I try to access my exposed port over any of those IPs, the connection attempt is ignored and does not connect.
I tried adding a specific network route to make sure the packets were going to docker, by forcing all packets in the /24 address range to go through the docker gateway, but that didn't work either (route add -p 10.0.0.0 MASK 255.255.255.0 192.168.8.177 METRIC 1 IF 49).
Any suggestions would be greatly appreciated!

How to connect to containers on docker Overlay network from an external machine

Is there any known solution for enabling an external machine to connect to the containers on a docker swarm overlay network?
The question is legitimate (see the example below), however I do not know of a simple solution for it. I'll propose an offhand possible solution and will test it later and update.
Suppose you have a docker overlay network of many Kafkas running on a couple of nodes (container hosts). All Kafka brokers communicate with each other beautifully.
When a Kafka client needs to access a Kafka broker, it connects to it (say somehow, supposedly even through Swarm's service external port), but then that broker may reply that the data is in another broker, and here is that other broker's IP (on the overlay network)... meaning the Kafka client must be able to access all Kafka brokers (on the overlay network).
You can do this easily if everything is containerized, but what if not?
You can do this with SDN, or with an offhand solution:
A container with two networks serving as a router, with one "leg" on the overlay network and the other L2-bridged to where that other VM or host is, and route through it. You'd have to use a Swarm placement constraint so that it runs where the network from which you want overlay access is available. That should work!
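A rough sketch of that offhand idea, shown with a plain docker run on the chosen node rather than a constrained service to keep it short (all names, subnets and addresses below are invented, and the overlay must have been created with --attachable):

# on the node that can reach the external L2 segment
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 extbridge
docker run -d --name ovl-router --network kafka_overlay --cap-add NET_ADMIN \
    --sysctl net.ipv4.ip_forward=1 alpine sleep infinity
docker network connect extbridge ovl-router

# on the external machine (same L2 segment), route the overlay subnet via the router's leg
ip route add 10.0.18.0/24 via 192.168.1.50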
If someone has another clean/clear solution, I'm very interested too.

Docker host information and cluster

I am setting up a simple cluster using docker on several hosts. Before using docker, the processes were simply started with an argument giving the address of a config server. The first thing each process does is connect to the config server, get the addresses (host and port) of all the other services, and register itself with its host (and several different ports, one for each of the services it provides).
However, it does not seem to be possible to dockerize this workflow. Since a process in a container seems unable to get the address and ports of the host (based on, for example, How to get the IP address of the docker host from inside a docker container), it does not know what to register itself as. Is this really not possible?
If not, are there any alternative ways this sort of setup is intended to be run using docker?
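For what it's worth, a common workaround for this kind of setup (purely illustrative, with invented names, not something from this thread) is to pass the host's address and the published ports into the container explicitly and have the process register whatever it is handed:

# on the docker host, at start time
HOST_ADDR=$(hostname -I | awk '{print $1}')
docker run -d -p 9000:9000 \
    -e ADVERTISE_ADDR="$HOST_ADDR" -e ADVERTISE_PORT=9000 my-service-image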
