Default docker swarm load distribution strategy within a single service - docker

I am new to docker-swarm and very much interested in knowing the inner workings of how docker-swarm distributes incoming requests for one single service.
For example, I deployed a stack with one single service and 10 replicas across 2 nodes. When I brought the stack up, 5 containers showed up on node1 and the other 5 on node2. Now, if I make 10 HTTP requests to the same service from 10 different browser instances, does each container end up with one request each? If it were round-robin, I would think so. However, that is not the behavior I am observing from the stack I have just deployed.
I brought up the stack with the configuration above and made 10 requests. The load was more distributed than concentrated, but only 7 of the containers received the 10 requests and 3 got none.
This tells me that it is not an evenly distributed round-robin. If not, what algorithm does the Docker services API follow to determine which container serves the next request?
When I searched for the inner workings of Docker Swarm, I ended up at this article: https://docs.docker.com/engine/swarm/ingress/
It is an interesting look into ingress and the routing mesh, but it still does not answer my original question.
Anyone?
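For reference, a rough sketch of reproducing the setup described above from the plain CLI instead of a stack file (the image, port, and node IP are placeholders):

```
# one service with 10 replicas, published through the ingress routing mesh
docker service create --name web --replicas 10 --publish published=8080,target=80 nginx

# fire 10 requests; each curl opens a fresh connection, so the routing mesh's
# connection-level balancing is visible. A browser reusing keep-alive
# connections may keep hitting the same replica instead.
for i in $(seq 1 10); do curl -s -o /dev/null http://<swarm-node-ip>:8080/; done

# see which replicas actually served the requests
docker service logs web
```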

Related

Docker constraints until node going down

I have 5 Docker nodes in a cluster [swarm].
Let's say I constrain NGINX [it is not about NGINX, it is just an example] to be deployed only on Docker node 1.
Can I create that constraint in such a way that if Docker node 1 goes down, the constraint no longer applies?
That is, have the constraint only while the node is reachable, and when it isn't, automatically remove the constraint?
Yes, you can use a placement preference (--placement-pref) to apply a spread strategy to node.hostname=your.node1.hostname, as documented here:
https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-placement-preferences---placement-pref
If the nodes in one category (for example, those with node.labels.datacenter=south) can't handle their fair share of tasks due to constraints or resource limitations, the extra tasks will be assigned to other nodes instead, if possible.
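For illustration, a minimal sketch of what a placement preference looks like on the CLI (the label, node, and service names are just examples, not from the question):

```
# label the nodes you want the scheduler to spread tasks over
docker node update --label-add datacenter=south node1
docker node update --label-add datacenter=north node2

# a preference is "soft": if the labelled nodes are unavailable,
# tasks can still be scheduled elsewhere (unlike a hard --constraint)
docker service create \
  --name nginx \
  --replicas 4 \
  --placement-pref 'spread=node.labels.datacenter' \
  nginx
```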
The downside is that when your node 1 comes back online, the service won't be updated and rebalanced until it is updated again (manually, or because the service goes down).
Additionally, it's not a good design if your service has to be placed on one special node; it should be designed to work anywhere so you can balance the load across all nodes. Regarding NGINX: it's stateless, so you can deploy it to all of your nodes and let the Docker routing mesh do the load balancing. If your service is stateful, then even if it is redeployed to a second node, its data will not be available there and your service is interrupted anyway. So my real answer is that what you ask is possible, but it is not how Docker Swarm is designed to be used and may not be a good idea.
If you have a good reason to stick with your original approach, you can think about putting a load balancer in front of your NGINX (or other service), such as another NGINX or HAProxy, which gives you more control to route requests to a master node and keep the other nodes for backup purposes only, and so on. The design would be a stateless load balancer deployed in global mode, with your core service running behind the LB. This design gives you no downtime when node 1 is down or the service is updating or relocating.
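A rough sketch of that "stateless LB in global mode" layout with the docker CLI (the images and names are placeholders, and the actual HAProxy/NGINX configuration is not shown):

```
# the edge load balancer runs on every node
docker service create \
  --name edge-lb \
  --mode global \
  --publish published=80,target=80 \
  haproxy:latest

# the core service stays pinned to node 1, as in the question
docker service create \
  --name core-nginx \
  --constraint 'node.hostname==node1' \
  nginx:latest
```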

Does it make sense to cluster NodeJs (in order to take advantage of multiple CPUs) if it will be deployed with an orchestration tool like Kubernetes?

Right now I am struggling with debugging a NodeJs application which is clustered and running on Docker. I found this link and this information in it:
Remember, Node.js is still single-threaded in most cases, so even on a single server you'll likely want to spin up multiple container replicas to take advantage of multiple CPUs.
So does that mean clustering a NodeJs app is pointless when it is meant to be deployed on Kubernetes?
EDIT: I should also say that by clustering I mean forking workers with cluster.fork(), and the goal of the application is to build a simple REST API that handles high traffic.
Short answer is yes.
Containers are just mini VMs, and Kubernetes is the orchestration tool that manages all the running containers, checking health, resource allocation, load, etc.
So, if you are running your Node application in a container with an orchestration tool like Kubernetes, then clustering is moot, as each container will be using 1 CPU (or a partial CPU) depending on how you have it configured. Multiple containers essentially just place a new "VM" in rotation, and Kubernetes will direct traffic to each.
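As a hedged sketch of that "one process per container, scale with replicas" approach (the image name and ports are hypothetical):

```
# run the single-threaded Node API as a Deployment
kubectl create deployment node-api --image=registry.example.com/node-api:latest

# give each replica roughly one CPU, then scale out instead of cluster.fork()
kubectl set resources deployment node-api --requests=cpu=1 --limits=cpu=1
kubectl scale deployment node-api --replicas=4

# a Service load-balances across the replicas
kubectl expose deployment node-api --port=80 --target-port=3000
```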
Now, when we talk about clustering Node, that really comes into play when using tools like PM2. Let's say you have a beefy server with 8 CPUs: Node can only use 1 per instance, so tools like PM2 set up a cluster and route traffic across each of the running instances.
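For contrast, this is roughly what PM2-style clustering looks like on a single multi-core host (server.js is a placeholder entry point):

```
# spawn one worker per available CPU core; PM2 round-robins incoming
# connections across the workers on this single host
pm2 start server.js -i max
```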
One thing to keep in mind, though, is that your application needs to be cluster- or container-ready. Nothing should be stored on the ephemeral disk, since with each container restart that data is lost; in a cluster situation there is no guarantee the folders will be available to each running instance, and if you cluster across multiple servers you are asking for trouble :D (this is where an object store like S3 comes into play).

Service mesh with consul and docker swarm EE

I'm new to service mesh with Consul.
I found a lot of documentation about using Consul and Envoy for service mesh in K8S, but I'm not finding much documentation about using them on Docker Swarm (Enterprise Edition).
My question is: is it possible to implement this on Docker Swarm EE? If not, what are the technical reasons that prevent it or make it not recommended?
I wondered the same.
The main problem with Docker Swarm, it seems, is that it lacks the concept of "sidecar" containers. For example, k8s has "pods". I haven't used k8s, but my understanding is that you can group services into a unit called a "pod". This has benefits and really enables the mesh-style architecture: one reason is that services in the same "pod" can all communicate through "localhost" on different port bindings, i.e. the services are "local" to each other. When you want a "companion" service this is what you need, as you know communicating with it is going to be fast since it is essentially co-located with your app.

Now consider Swarm. You can add services to your stack, but you don't necessarily know where they are going to be placed; your "sidecar proxy" service could end up being placed on node 2 whilst your app is on node 1. This is not very efficient, as it means there are now network hops to route traffic between your app and its "sidecar" proxy, which could be on the other side of the data centre but should really be local.

So you start thinking of creative workarounds. What if I use "placement" settings to place my service and the sidecar service on the same node? Well, then you lose the ability for Swarm to place them on a different node if that node goes down, because your placement options have confined them to only one node.

What if you deploy the "sidecar" proxy as a "global" service so that it is available on each node? Then your apps should all be able to communicate with the service via the IP address of whatever node they are on, but how do you configure that IP address per task (container)? I'm exploring that option, but it gives you a single sidecar instance per node (1 instance to potentially serve many services), so this has impacts on how you scale that sidecar.

I think possibly one other solution is to embed these "sidecar" services into your own service's Docker image so that they are truly running locally with your app. However, I haven't seen anyone really advocate that approach, so it's most likely fraught with hurdles to overcome.

Most documentation is for k8s, and there is nothing for Swarm, for these sorts of reasons. If only Swarm could have added this ability in its style of simplicity, it would extend its reach so much.
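To make the placement workaround concrete, here is a rough sketch with the docker CLI; the service names and images are placeholders, and as noted above the hard constraint means Swarm cannot reschedule either service if node1 goes down:

```
# pin the app and its proxy to the same node so they stay co-located
docker service create --name my-app \
  --constraint 'node.hostname==node1' \
  my-app-image:latest

docker service create --name my-app-proxy \
  --constraint 'node.hostname==node1' \
  envoyproxy/envoy:latest

# the alternative discussed above: one proxy instance per node
docker service create --name mesh-proxy --mode global envoyproxy/envoy:latest
```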

Kubernetes scaling pods using custom algorithm

Our cloud application consists of 3 tightly coupled Docker containers: Nginx, Web, and Mongo. Currently we run these containers on a single machine. However, as our user base grows we are looking for a solution to scale. Using Kubernetes we would form a multi-container pod. If we are to replicate, we need to replicate all 3 containers as a unit. Our cloud application is consumed by mobile app users. Our app can only handle approximately 30000 users per worker node, and we intend to place a single pod on a single worker node. Once a mobile device is connected to a worker node, it must continue to use only that machine (unique IP address).
We plan on using Kubernetes to manage the containers. Load balancing doesn't work for our use case, as a mobile device needs to stay tied to a single machine once assigned, and each pod works independently with its own persistent volume. However, we need a way of spinning up new pods on worker nodes if the number of users goes over 30000, and so on.
The idea is that we have some sort of custom scheduler which assigns a mobile device a worker node (domain/IP address) depending on the number of users on that node.
Is Kubernetes a good fit for this design, and how could we implement a custom pod scaling algorithm?
Thanks
Piggy-Backing on the answer of Jonah Benton:
While this is technically possible, your problem is not with Kubernetes, it's with your application! Let me point out the problem:
Our cloud application consists of 3 tightly coupled Docker containers, Nginx, Web, and Mongo.
Here is your first problem: if you can only deploy these three containers together and not independently, you cannot scale one without the others!
While MongoDB can be scaled to handle insane loads, if it's bundled with your web server and web application it won't be able to...
So the first step for you is to break up these three components so they can be managed independently of each other. Next:
Currently we run these containers on a single machine.
While not strictly a problem, I have serious doubts about what it would mean to scale your application and about the challenges that come with scalability!
Once a mobile device is connected to worker node it must continue to only use that machine ( unique IP address )
Now, this IS a problem. You're looking to run an application on Kubernetes, but I do not think you understand the consequences of doing that: Kubernetes orchestrates your resources. This means it will move pods (by killing and recreating them) between nodes (and, if necessary, back onto the same node). It does this fully autonomously (which is awesome and gives you a good night's sleep). If you're relying on clients sticking to a single node's IP, you're going to get up in the middle of the night because Kubernetes tried to correct for a node failure and moved your pod, which is now gone, and your users can't connect anymore. You need to leverage the load-balancing features (Services) in Kubernetes. Only they are able to handle the dynamic changes that happen in Kubernetes clusters.
Using Kubernetes we would form a multi container pod.
And we have another winner: no! You're trying to treat Kubernetes as if it were your on-premises infrastructure! If you keep doing so, you're going to fail and curse Kubernetes in the process!
Now that I have told you some of the things you're thinking about wrong, what kind of person would I be if I did not offer some advice on how to make this work:
In Kubernetes your three applications should not run in one pod! They should run in separate pods:
your webserver's work should be done by an Ingress, and since you're already familiar with nginx, this is probably the ingress controller you are looking for!
your web application should be a simple Deployment, exposed to the Ingress through a Service (a rough sketch follows after this list)
your database should be a separate deployment, which you can either manage manually through a StatefulSet or (more advanced) through an operator, and also expose to the web application through a Service
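A minimal sketch of the web tier split out on its own (names and ports are hypothetical; the database/StatefulSet and the Ingress object are not shown):

```
# the web application as its own Deployment
kubectl create deployment web --image=registry.example.com/web:latest
kubectl scale deployment web --replicas=3

# expose it to the Ingress (and the rest of the cluster) through a Service
kubectl expose deployment web --port=80 --target-port=3000
```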
Feel free to ask if you have any more questions!
Building a custom scheduler and running multiple schedulers at the same time is supported:
https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
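Once a second scheduler is running, pointing a pod at it is just a matter of setting spec.schedulerName. A minimal sketch, where the scheduler name is a placeholder for whatever custom scheduler you deploy:

```
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: my-custom-scheduler   # placeholder: name of your custom scheduler
  containers:
  - name: app
    image: nginx
EOF
```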
That said, to the question of whether Kubernetes is a good fit for this design, my answer is: not really.
K8s can be difficult to operate, with the payoff being the level of automation and resiliency that it provides out of the box for whole classes of workloads.
This workload is not one of those. In order to gain any benefit, you would have to write a scheduler that handles the edge, failure, and error cases this application has (what happens when you lose a node for a short period of time...) in a way that makes sense for k8s. And you would have to come up to speed with normal k8s operations.
With the information provided, I am hard pressed to see why one would use k8s for this workload over just running Docker on some VMs and scripting some of the automation.

In Docker Swarm mode is there any point in replicating a service more than the number of hosts available?

I have been looking into the new Docker Swarm mode that will be available in Docker 1.12. In this Docker Swarm Mode Walkthrough video, they create a simple Nginx service that is composed of a single Nginx container. In the video, they have 4 nodes in the Swarm cluster. During the scaling demonstration, they increase the replication factor to 10, thus creating 10 copies of the Nginx container across all 4 machines in the cluster.
I get that the video is just a demonstration, but in the real world, what is the point of creating more replicas of a container (or service) than there are nodes in the Swarm cluster? It seems pointless, since two containers on the same machine would be sharing that machine's finite computing resources anyway. I don't get what the benefit is.
So my question is, is there any real world benefit to replicating a Docker service or container beyond the number of nodes in the Swarm cluster?
Thanks
It depends on how the application handles threading and multiple requests. A single-threaded application, or a job that only handles one request at a time, may use a fraction of the OS resources and benefit from running multiple instances on a single host. An application that has been tuned to process requests concurrently and which fully utilizes the OS will see no benefit, and will in fact incur the penalty of taking resources away to run multiple instances of the application.
One advantage can be performing live zero-downtime software updates. See the Docker 1.12rc2 Swarm tutorial on rolling updates.
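A hedged sketch of what that looks like with the service update flags (the image names are placeholders):

```
# 10 replicas on a small cluster, updated 2 at a time with a delay in between,
# so there is always capacity serving traffic during the rollout
docker service create --name web --replicas 10 \
  --update-parallelism 2 --update-delay 10s \
  myorg/web:1.0

docker service update --image myorg/web:1.1 web
```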
Say you have RabbitMQ or another queue system with a high data load. You can start more worker containers than you have nodes to handle the high data load on your RabbitMQ.
Hardware resource constraints are not the only thing to consider when replicating your services.
A simple example would be a service that provides security details. The resource consumption of this service is low (read a record from DB/cache and send it out). However, if 20 or 30 requests have to be handled by the same single instance, the requests will queue up.
Yes, there are better ways to implement my example, but I believe it is good enough to illustrate why one might replicate a service on the same host/node.
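As a sketch of that scenario (the names are placeholders), over-replicating such a lightweight service is just a scale operation:

```
# a small blocking service on, say, a 2-node swarm
docker service create --name security-details --replicas 2 myorg/security-details:latest

# handle more concurrent requests by running more replicas than nodes
docker service scale security-details=8
```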
