Accessing container directly - docker-swarm

After setting up my swarm network with 3 hosts (manager1, worker1, worker2), I created an overlay network:
docker network create --driver=overlay testNet
Then I created a service based on couchbase (couchbase is just an example; any other image exposing a non-stateless web UI has the same issue):
docker service create --name db --network=testNet --publish 8091:8091 couchbase
If I access the web UI on port 8091, everything works fine until I scale the service to 2 (or more).
docker service scale db=2
At this point, the swarm load balancer keeps bouncing requests between the 2 containers, rendering the web UI unusable.
Is there a way to solve this?

I do not personally recommend using Docker Swarm in conjunction with Couchbase Server, as there is an impedance mismatch between the two. It would be more sensible to create a Couchbase container specifically on each server that requires it (and only one per server, to avoid single points of failure!) and then forward the appropriate ports on that server.
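As a rough sketch of that recommendation, assuming the stock couchbase image and the standard admin port (adjust ports and names to your cluster layout), on each host that should run a node:
docker run -d --name couchbase -p 8091:8091 couchbase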
That said, the answer in this case would be to access each node/container individually. You can do this by reaching the container via its private IP: grab the container ID with docker ps and then look for the IP in docker inspect <container id>.
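A minimal sketch of that lookup, assuming the service name db from above (the Go template is just one of several ways to pull the address out of docker inspect):
docker ps --filter name=db        # note the ID of the replica you want
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container id>
The web UI of that specific replica is then reachable at http://<that IP>:8091 from anywhere that can route to that network.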

Related

docker container networking to connect localhost in one container to another

I am using the default bridge network for docker (and yes, I am relatively new to docker). I have two docker containers.
The first container provides a service on port 12345. When creating this container, I did not specify the --publish option because I did not want to expose this port to the outside world.
The second container needs to use the service from the first container. However, the application running in this second container was hardcoded to access the service at 127.0.0.1:12345. Clearly, the second container's localhost is not the same as the first container's. Is there a way to coax docker networking into thinking that localhost in the second container should actually be connected to the port in the first container, without exposing anything to the outside world?
Option N: (this works but may not be the best solution)
One way to force this to behave the way you need is to inject an additional service that binds to the port inside the application container and redirects the traffic outward:
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345
In a quick test here, I was able to confirm that 127.0.0.1:12345 is treated as the remote port 12345.
Things to consider:
The two containers need to be able to reach each other.
It breaks the recommendation of one service per container.
Getting socat into the container (yum / apt-get install socat, or building from source).
Getting it to run automatically on container start/restart (see the sketch below).
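A rough sketch of the last two points, assuming a Debian/Ubuntu-based image, the 172.18.0.2 address from the example above, and a hypothetical application binary /usr/local/bin/your-app:
# at image build time: install socat
apt-get update && apt-get install -y socat
# entrypoint wrapper: start the redirect in the background, then the real app
socat TCP-LISTEN:12345,fork TCP:172.18.0.2:12345 &
exec /usr/local/bin/your-app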

Adding NS record to docker net's DNS server

When you run a docker container inside a docker network (i.e. docker network create $DOCKERNETNAME and then --net=$DOCKERNETNAME when running the container), the network provides an embedded DNS server at 127.0.0.11.
I want to create an NS record inside this DNS server (the one running at 127.0.0.11), so I can have a separate DNS server inside the docker net for some fake domain. How can I do that?
Please note that all this is being done for educational purposes and has no other goal.
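For reference, the setup described above looks roughly like this (names are placeholders); inside the container, /etc/resolv.conf points at the embedded DNS server:
docker network create $DOCKERNETNAME
docker run --rm --net=$DOCKERNETNAME alpine cat /etc/resolv.conf   # shows nameserver 127.0.0.11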

centralized data storage in docker swarm

I have one serious doubt about docker swarm. I have created these docker-machine VMs:
manager1
worker1
worker2
I joined all the workers to the manager and created the service like this:
docker service create --replicas 3 -p 80:80 --name web nginx
I changed the index.html in the container running on manager1.
When I open a URL like http://192.168.99.100 it shows the index.html file that I changed, but the remaining 2 nodes show the default nginx page.
What is the concept of swarm? Is it used only for handling service failures?
How can I set up centralized data storage in docker swarm?
There are a few approaches to ensuring the same app and data are available on all nodes.
Your code (like nginx with your web app) should be built into an image, sent to a registry, and pulled to a Swarm service. That way the same app/code/site is in all the containers of that service.
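A sketch of that workflow (registry, image, and tag names here are placeholders):
# Dockerfile: FROM nginx, then COPY index.html /usr/share/nginx/html/
docker build -t myregistry/web:1.0 .
docker push myregistry/web:1.0
docker service create --replicas 3 -p 80:80 --name web myregistry/web:1.0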
If you need persistent data, like a database, then you should use a plugin volume driver that lets you store the unique data on shared storage.
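A hedged sketch of the second point; <driver> stands for whatever shared-storage volume plugin you have installed, and the volume name and mount path are illustrative:
docker volume create --driver <driver> dbdata
docker service create --name db --mount type=volume,source=dbdata,target=/var/lib/mysql,volume-driver=<driver> mysql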

Docker running same VPN service in different containers and load balance these requests

I am trying to setup an image which basically runs a VPN server inside of docker. Now this VPN server by default listens on port 443. Several clients could connect to this VPN server to access corporate websites. How can I spawn multiple docker containers each running the same VPN server, but mapped to a common port 443 on the host? and load balance the request to these containers? I understand nginx is a reverse proxy, but does this also work for raw tcp/udp requests which is the use case I am trying to implement?
For now, I have one image up and running; I am using -p 443:443 when running this container and routing the incoming requests to it. I want to create several replicas of this container and then load balance the incoming TCP/UDP requests.
Any thoughts would be appreciated.
Your problem is best solved using Docker swarm. You need to create a docker service; a service lets you run multiple replicas of a container.
Swarm also has built-in support for load balancing. By default each service gets a virtual IP and the routing mesh balances connections across its replicas; you can switch to DNS round-robin with --endpoint-mode dnsrr if you prefer.
docker swarm init
docker service create --name <service-name> --replicas 5 -p 443:443 <image-name>
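If the VPN also needs UDP, the long --publish syntax lets you publish both protocols; a sketch with placeholder names:
docker service create --name <service-name> --replicas 5 \
  --publish published=443,target=443,protocol=tcp \
  --publish published=443,target=443,protocol=udp \
  <image-name>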

Inter-container connection via localhost

I would like to set up my containers so that they connect to each other via localhost.
My setup is a main application container and two other containers that it needs to connect to (ActiveMQ and Wiremock).
I already run ActiveMQ and Wiremock in containers with the relevant ports exposed, and the main application runs through IntelliJ and connects to them. However, when I am not developing the main application, I would like to run it in a container for simplicity, but then it cannot connect to the ports exposed by the others.
Setting --net=host doesn't seem to work, nor does creating a network with docker network create <NAME> and assigning it in docker run with --net=<NAME>.
The application already runs in a container in other environments on the host network.
Containers that share a user-defined network (one created with docker network create, or the default network docker-compose creates for a project) can resolve each other by name through Docker's embedded DNS, using the container name as the hostname.
So if you have a container named mq for your ActiveMQ, you would use something like tcp://mq:61616 (or whatever protocol / port you have configured) from your other containers to connect to it.
You shouldn't need --net=host for this; putting all three containers on the same user-defined network and replacing localhost with the container names is enough.
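A minimal sketch, assuming the container names mq and wiremock, a user-defined network called appnet, and placeholder image names:
docker network create appnet
docker run -d --name mq --network appnet <activemq-image>
docker run -d --name wiremock --network appnet <wiremock-image>
docker run -d --name app --network appnet <app-image>
# from inside the app container, tcp://mq:61616 and http://wiremock:<port> resolve by name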

Resources