How can we map a common IP to all worker IPs in Docker Swarm - docker

As we know, in Docker Swarm we create multiple workers and one manager.
The container runs on multiple workers, so we can access it in the browser by entering a worker node's IP and port, like (ip:80), and we can access another worker node by its IP and port. But what if I want to enter one common IP and reach the container? Then if any one of the nodes goes down, my site does not go down; it uses another running worker.
worker1: 192.168.99.100:80
worker2: 192.168.99.100:80
worker3: 192.168.99.100:80
I want one common IP so that if any one node goes down, the site does not go down.

You basically have two main ways of doing this:
You can put an HTTP proxy in front of the Docker swarm. The proxy health-checks the nodes, and if any node goes down it removes that node's IP from rotation until it comes back up (Traefik, Nginx, Caddy, ...). See the first sketch below.
You can use keepalived; with this approach, you point the domain to your virtual VRRP IP, which then "floats" between the nodes. See the second sketch below.
I know a very good ops person, and their company uses keepalived without any issues or complications. In our company we decided to go with the proxy, because we also route other "traffic" over it to different systems (legacy, ...), and since we have VMware's top license with Veeam we can handle real-time replication (in case of a VM going down and such) with that.
So both methods are proven and tested :)
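To illustrate the proxy approach, here is a minimal Nginx sketch, assuming three worker IPs (placeholders; the question lists the same address for all three) and the service published on port 80. Note that open-source Nginx only does passive health checking via max_fails/fail_timeout, while Traefik and Caddy offer active checks:

```nginx
# nginx.conf (sketch): spread traffic over the swarm nodes and skip a
# worker after repeated failures until it recovers
upstream swarm_workers {
    server 192.168.99.101:80 max_fails=3 fail_timeout=30s;  # worker1 (assumed IP)
    server 192.168.99.102:80 max_fails=3 fail_timeout=30s;  # worker2 (assumed IP)
    server 192.168.99.103:80 max_fails=3 fail_timeout=30s;  # worker3 (assumed IP)
}

server {
    listen 80;
    location / {
        proxy_pass http://swarm_workers;
    }
}
```

And a sketch of the keepalived approach, where the one "common IP" is a VRRP virtual IP that floats to a surviving node (the interface name and addresses are assumptions):

```conf
# /etc/keepalived/keepalived.conf (sketch): run on each node; the node
# with the highest priority that is still alive holds the virtual IP
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the other nodes
    interface eth0
    virtual_router_id 51
    priority 100            # lower on the backups
    advert_int 1
    virtual_ipaddress {
        192.168.99.100      # the single IP clients/DNS point at (placeholder)
    }
}
```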

Related

Run two applications on same port on same machine

I had an interview 3 years back, and in one of the design interview rounds this question came up: how can you have two Java applications (deployed on Tomcat) run on the same port? You can use any tools like Docker, etc., but you can't have a separate virtual machine (like VMware or VirtualBox). I was not sure if Docker could be used (I just said maybe we can use two Docker containers, not sure if that would be the right approach). Any ideas if it's possible, and how?
You can't have two programs that use the same port.
To solve that, you can set up a reverse proxy (Nginx, Traefik or the like) that listens on the port and then routes the traffic to the applications based on what the requests look like. The applications would listen on their own ports. So one port each.
You can route on different things, but in your case you might set it up so requests that start with /app1/ go to one application and requests that start with /app2/ go to the other.
Nginx and Traefik both have standard images available that are pretty easy to set up in Docker.
You can't have two processes listen to the same ports on the same IP address on the same machine.
To work around this, as Hans Kilian says, you'll need a reverse proxy.
Alternatively, if the machine's network interfaces are configured with multiple IP addresses, you can assign one to each running server, and then you're free to use the same port on the other IP address(es). This is independent of the actual server that you use, be it Tomcat, Docker, or anything else.
Naturally, configuring the different processes to listen on specific IP addresses depends on the software itself. As you're asking about Tomcat: its connectors (see server.xml) have the configuration; see the sketch below.
I consider reverse proxies more the standard approach found in the wild, but as you were talking about an interview question, this is another option.
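To illustrate the multiple-IP option for Tomcat, here is a hypothetical server.xml fragment (the IP is a placeholder); binding each instance to one address leaves the same port free on the machine's other addresses:

```xml
<!-- server.xml (sketch): bind this Tomcat's HTTP connector to one IP,
     so a second Tomcat can use port 8080 on a different address -->
<Connector port="8080" protocol="HTTP/1.1" address="192.168.1.10" />
```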
The only way to achieve this scenario is to use a reverse proxy that routes requests to the specific Java application based on URL matching and redirect logic.
The apps must be running on different ports; e.g., with Nginx as the proxy, create a server block with a location block for each app, as sketched below.
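A minimal sketch of such an Nginx config, assuming the two Tomcat apps listen on ports 8081 and 8082 (placeholder ports):

```nginx
# nginx.conf (sketch): one listener on port 80, routing by path prefix
server {
    listen 80;

    location /app1/ {
        proxy_pass http://127.0.0.1:8081/;  # first Tomcat (assumed port)
    }

    location /app2/ {
        proxy_pass http://127.0.0.1:8082/;  # second Tomcat (assumed port)
    }
}
```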

Multi-host Docker network with Swarm-mode and without swarm

I am migrating a legacy application deployed on two physical servers [web-app (node1) and DB (node2)].
The following blog post fulfilled my requirement, but I still have some questions:
https://codeblog.dotsandbrackets.com/multi-host-docker-network-without-swarm/#comment-2833
1- For the mentioned scenario, web-app (node1) and DB (node2), we can use the expose-port option and the web app will use that port, so why create an overlay network?
2- By using swarm mode with replicas=1 we can achieve the same, so what advantage do we get by creating an overlay network without swarm mode?
3- If the node on which Consul is installed goes down, our whole application no longer works (correct me if my understanding is wrong).
4- In swarm mode, if the manager node goes down (which also hosts the web app), my understanding is that swarm will launch both containers on an available host? Please correct me if my understanding is not correct.
That article describes an outdated mode of operation for Swarm. What it covers is 'Classic Swarm', which needed an external key-value store (like Consul), but Docker now primarily uses 'Swarm mode', an orchestration capability built into the engine itself. To answer what I think your questions are:
1- I think you're asking: if we can expose a port for a service on a host, why do we need an overlay network? If so, consider what happens when the host goes down and the container gets re-scheduled to another node. The overlay network takes care of that by keeping track of where containers are and routing traffic appropriately.
2- Not sure what you mean by this.
3- If Consul was a key piece of discovery infrastructure, then yes, it would be a single point of failure, so you'd want to run it highly available. This is one of the reasons the dependency on an external key-value store was removed in 'Swarm mode'.
4- Not sure what you mean by this, but maybe you're asking about rebalancing? If so, then yes: if a host (with containers) goes down, those containers will be re-scheduled on another node.
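To make that concrete, here is a minimal 'Swarm mode' sketch (the network, service, and image names are made up) showing how the built-in orchestration replaces the external Consul setup from the article:

```sh
# on the manager: create an overlay network and attach both services to it
docker network create -d overlay app-net
docker service create --name db     --network app-net --replicas 1 my-db-image
docker service create --name webapp --network app-net --replicas 1 my-webapp-image

# the web app reaches the database simply as "db" via Swarm's built-in DNS;
# if the node running a task dies, Swarm re-schedules the task elsewhere and
# the overlay network keeps routing traffic to its new location
```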

Docker Swarm with external Load Balancer - how to get collective healthcheck

I'm hoping there are some Docker Swarm experts out there who have configured a load balancer to front a multi-node Docker Swarm setup. In such a simplified architecture, if the load balancer needs to detect that a manager node is down and stop routing traffic to it, what is the best practice for that? Does Docker Swarm provide a health endpoint (API) that can be tested for each manager node? I'm new to some of this, and there doesn't seem to be a lot out there that describes what I'm looking for. Thanks in advance.
There is the metrics endpoint of the engine, and then the engine API, but I don't think that's what you want for an application load balancer.
What I see most people do is put a load balancer in front of the Swarm nodes they want to handle incoming traffic for specific apps running as services. Since that LB needs to know whether the containers are responding (not just the node's engine health), it should hit the app's health endpoint and take nodes in and out of that app's LB pool based on the app's response.
This is how AWS ELBs work out of the box, for example.
If you had a service published on port 80 in the Swarm, you would set up your ELB to point to the nodes you want to handle incoming traffic, and have it expect a healthy 200/300 return from those nodes. It'll remove nodes from the pool if they return something else or don't respond.
Then you could use a full monitoring solution that checks node health and optionally respond to issues like replacing nodes.
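To sketch the Swarm side of such a setup (the image name and the /health endpoint are assumptions, and the health check assumes curl is available inside the image), you would publish the port and give the service its own container-level healthcheck, so Swarm also stops routing to unhealthy tasks:

```sh
# publish port 80 on every node via the routing mesh; the external LB's
# health check then hits http://<node-ip>/health on each node in its pool
docker service create --name web \
  --publish 80:80 \
  --health-cmd "curl -fs http://localhost/health || exit 1" \
  --health-interval 10s \
  --health-retries 3 \
  my-web-image
```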

Docker swarm load balancing - How to give common name to the service?

I read about the swarm routing mesh.
I created a simple service which uses a Tomcat server and listens on 8080.
docker swarm init - I created a manager node at node1.
docker swarm join --token ... - with the token provided by the manager, I created workers at node2 and node3.
docker service create <image> - I created the service.
docker service scale <service>=5 - I scaled it.
docker service ps <service> - shows 5 instances of my service: 3 running at node1, 1 running at node2, and another one at node3.
My application uses an atomic number which is maintained at the JVM level.
If I hit http://node1:8080/service 25 times, all requests go to node1. How does it balance across nodes?
If I hit http://node2:8080/service, it goes to node2.
Why is it not using round robin?
Doubts:
Is anything wrong in the above steps?
Did I miss something?
I feel I am missing something, like a common service name (http://domain:8080/service) so that swarm would work in round-robin fashion.
I would like to understand only swarm mode; I am not interested in an external load balancer as of now.
How do I see swarm load balancing in action?
Docker does round-robin load balancing per connection to the port. As long as a connection is up, it will continue to go to the same instance.
HTTP allows a connection to be kept alive and reused. Browsers take advantage of this behavior to speed up later requests by leaving connections open. To test the round-robin load balancing, you'd need to either disable that keep-alive setting or switch to a command-line tool like curl or wget.
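For example, a quick way to see the round robin in action is a curl loop; each curl invocation opens a fresh TCP connection, so the routing mesh is free to pick a different task every time (node1 and the /service path are taken from the question):

```sh
# every invocation is a new connection, so requests should rotate
# across the 5 tasks rather than stick to one instance
for i in $(seq 1 25); do
  curl -s http://node1:8080/service
  echo
done
```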

Failing to see how ambassador pattern enhances modularity / simplicity of container architecture in Docker

I fail to see how implementing the ambassador pattern would help us simplify / modularize the design of our container architecture.
Let's say that I have a database container db on host A which is used by a program db-client that sits on host B; they are connected via ambassador containers db-ambassador and db-foreign-ambassador over a network:
[host A (db) --> (db-ambassador)] <- ... -> [host B (db-forgn-ambsdr) --> (db-client)]
Connections between containers on the same machine, e.g. db to db-ambassador, and db-foreign-ambassador to db-client, are made via Docker's --link parameter, while db-ambassador and db-foreign-ambassador talk over the network.
But --link is just a fancy way of inserting IP addresses, ports and other info from one container into another. When a container fails, the container linked to it does not get notified, nor will it know the new IP address of the crashed container when it restarts. In short, if a container that is linked to another dies, the link dies with it.
To consider my example, let's say that db crashed and restarts, and thus gets assigned a different IP. db-ambassador would have to be restarted too, in order to update the link between them... except it shouldn't have to. And if db-ambassador is restarted, its IP would have changed too, and db-foreign-ambassador won't know where to reach it at the new address.
Quoting an article in the Docker's docs about the ambassador pattern,
When you need to rewire your consumer to talk to a different Redis
server, you can just restart the redis-ambassador container that the
consumer is connected to.
This pattern also allows you to transparently move the Redis server to
a different docker host from the consumer.
it seems like this is exactly the problem the pattern is trying to solve, which, as far as my understanding goes, it totally doesn't - not if you consider that --link is only useful as long as the linked container doesn't crash. The option to restart a crashed container on its previous IP would have been a good workaround, if it were supported, at least for a small/medium-sized architecture.
Am I missing something obvious?
Jérôme has some good slides (11-33) on how ambassadors are better than other means of service discovery (i.e. DNS, key-value stores, bind-mounted config files, etc.) in his slide deck "Shipping Applications to Production in Containers with Docker". He also has some suggestions for how to solve the problem I think you're describing; Docker Grand Ambassador in particular looks promising.
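For reference, the wiring from the Docker docs article quoted above looks roughly like the sketch below (the host-A IP 192.168.1.52 is a placeholder; svendowideit/ambassador is the helper image the article uses, and the official redis image stands in for the database):

```sh
# host A: the real service plus an ambassador that publishes it
docker run -d --name redis redis
docker run -d --name redis_ambassador --link redis:redis -p 6379:6379 \
  svendowideit/ambassador

# host B: an ambassador that just forwards to host A's published port
docker run -d --name redis_ambassador --expose 6379 \
  -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador

# host B: the consumer links only to its local ambassador, so re-pointing it
# at a new Redis host means restarting the ambassador, not the consumer
docker run -it --rm --link redis_ambassador:redis redis redis-cli -h redis
```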
