How can docker service really scale in one machine?

I can understand how scaling is helpful across multiple machines.
But here we have just a single machine (one node). Docker still supports scaling a service to run multiple tasks (each served by one container) like this:
docker service scale serviceName=num_of_replicas
Let's take the example of running a Web API. I really don't see how scaling helps in this case. One machine hosting a Web API can serve at its maximum power, and using multiple containers on it cannot increase that maximum. With the request-handling pipeline of a Web API, one server can handle multiple requests concurrently and independently, as long as the server has enough resources (CPU, RAM). So we don't need multiple (unnecessary) tasks with Docker service scaling in this case.
The only benefit I can see is that Docker service scaling may provide better isolation between tasks (containers), compared with serving all requests from one single server (container).
Could you please let me know some other benefits of scaling a Docker service this way? Is there anything wrong with my assumption above?

Using multiple containers in it cannot help increase that maximum power.
That really depends on the implementation. Some inefficient implementations may use only a single process/thread/CPU core, and scaling helps their performance.
Another benefit: scaling on a single node also helps with high availability. There is always a small, nonzero chance of a non-recoverable error, an out-of-memory issue, etc. that may stop a single container, causing downtime until the orchestration scheduler restarts it. With multiple replicas, the remaining containers keep serving in the meantime.
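As a minimal sketch on a single-node swarm (the service name and image are illustrative): with three replicas, one crashing task still leaves two serving while the scheduler replaces it.
# Run three replicas of a web service on one node:
docker service create --name web --replicas 3 -p 8080:80 nginx
# Later, change the replica count in place:
docker service scale web=5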

Related

Which architecture should I use to preinitialize N Docker containers on one server?

I want to build, on just one server, an application that runs inside Docker containers. The server should always have N active containers running, and if an (N+1)th user arrives, a new container should be launched.
I think that I need this architecture:
Nginx for load balancing
Kubernetes, to orchestrate the containers
Docker, with the app inside each container.
Is this correct? I am not sure if Kubernetes is what I need when I'm only going to use one server.
You probably want to use the horizontal pod autoscaler functionality of Kubernetes. Docs here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
Your approach of using the number of users as a metric will probably work, but you may be better off using generic metrics, such as CPU consumption or memory availability on your node. For that, you define the maximum consumption your containers are allowed to use.
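For instance, a hedged sketch using the horizontal pod autoscaler from the linked docs ("myapp" is a hypothetical Deployment name): Kubernetes will keep between 1 and 10 replicas, adding pods when average CPU utilization exceeds 70% of the requested CPU.
# Autoscale an existing Deployment on CPU utilization:
kubectl autoscale deployment myapp --cpu-percent=70 --min=1 --max=10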

Do multiple processes (pods, in this case) increase the rate of processing?

Assume a VM with 4 cores. I have a Docker image with a web application that provides some REST services, and I am using K8s to deploy this application on that VM. So, in terms of performance, is there any difference between using a single pod on the VM and multiple pods on the same host?
For people who don't know K8s: assume we have an application that provides some REST services. Is there any performance advantage to running multiple instances of such an application, like an increased rate of serving requests?
Personally, I think performance is better when running multiple pods on the same host. I don't know which web server you use, but requests are processed within limited CPU time, even if the server has multiple processes or threads doing the work. Additionally, multiple processes make it more efficient to utilize CPU time while waiting on network I/O. To improve throughput, you should increase the number of processes or instances to scale horizontally, because response times get slower as the load builds up.
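As a rough sketch of what adding instances looks like in K8s (the Deployment name "api" is hypothetical), you can scale out on the same node and let the Service spread requests across the pods:
# Scale an existing Deployment to four replicas:
kubectl scale deployment api --replicas=4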

Efficiency of horizontal scaling when deploying the same container multiple times on a single host

Do we gain efficiency in terms of load handling when the same container (in this case, a container with an Apache server and a PHP application) is deployed 5 or more times (i.e. 5 or more containers are deployed) on the same host or VM?
Here, efficiency means whether the application in such an architecture is able to serve more requests, or serve requests faster.
As far as I am aware, each request launches a new Apache/PHP thread, and if we have 5 containers handling the requests, won't that be inefficient, since the threads launched by Apache will be context-switched out more often?
Scaling an application requires understanding why the application has reached its limit. For this, you need to gather metrics from the application and the host when it is fully loaded. Without testing and gathering metrics, you're only guessing why you're at capacity.
If the application is fully utilizing one or more CPU cores, but not all of them, then it is either not multithreaded or is encountering locks that prevent all the cores from being used. Adding more containers to the host may help scale in this scenario.
Typically, horizontal scaling is done because a single host is using all of some resource, like disk I/O, network bandwidth, memory, or CPU. If you find that the app is using all of one or more of these resources when under heavy load, then you need more hosts, not more containers running on the same host.
This all assumes you haven't configured docker to limit resources on the containers. If you reach your capacity with one container, and have resource limits configured, then the easiest way to get further performance is to remove or reduce those limits.
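As an illustrative sketch (container name, image, and values are hypothetical), limits can be set at run time and later raised without recreating the container:
# Start a container capped at 2 CPUs and 1 GB of memory:
docker run -d --name app --cpus 2 --memory 1g myorg/app
# Raise the limits on the running container:
docker update --cpus 4 --memory 2g --memory-swap 4g app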

How many containers should exist per host in production? How should services be split?

I'm trying to understand the benefits of Docker better and I am not really understanding how it would work in production.
Let's say I have a web frontend, a rest api backend and a db. That makes 3 containers.
Let's say that I want 3 of the frontend, 5 of the backend, and 7 of the DB. (Minor question: does it ever make sense to have fewer DBs than backend servers?)
Now, given the above scenario, if I package them all on the same host then I gain the benefit of efficiently using the resources of the host, but then I am DOA when that machine fails or has a network partition.
If I separate them into one full application (i.e. 1 FE, 1 BE & 1 DB) per host, and put the extra containers on their own hosts, I get some advantages in using resources efficiently, but it seems to me that I still lose significantly when I have a network partition, since it will take down multiple services.
Hence I'm almost leaning toward the conclusion that I should put one container per host, but then that means I am using my resources pretty inefficiently, and then what are the benefits of containers in production? I mean, an OS might be an extra couple of gigs per machine in storage size, but most cloud providers give you a minimum of 10 gigs of storage. And let's face it, a REST API backend or a web frontend is not going to come close to 10 gigs, even including the OS.
So, after all that, I'm trying to figure out whether I'm missing the point of containers. Are the benefits of keeping all containers of an application on one host mostly tied to testing and development?
I know there are benefits from moving containers amongst different providers/machines easily, but for the most part, I don't see that as a huge gain personally since that was doable with images...
Are there any other benefits for containers in production that I am missing? Or are the main benefits for testing and development? (Am I thinking about containers in production wrong)?
Note: The question is very broad and could fill an entire book but I'll shed some light.
Benefits of containers
The exciting part about containers is not their use on a single host, but their use across hosts connected in a large cluster. Do not look at your machines as independent Docker hosts, but as a pool of resources to host your containers.
Containers alone are not ground-breaking (i.e. Docker's CTO stating at the last DockerCon that "nobody cares about containers"), but coupled with state-of-the-art schedulers and container orchestration frameworks, they become a very powerful abstraction for handling production-grade software.
As to the argument that this also applies to virtual machines: yes, it does, but containers have some technical advantages over VMs (see: How is Docker different from a normal virtual machine) that make them convenient to use.
On a Single host
On a single host, the benefits you can get from containers are (amongst many others):
Use as a development environment mimicking the behavior of a real production cluster.
Reproducible builds independent of the host (convenient for sharing)
Testing new software without bloating your machine with packages you won't use daily.
Extending from a single host to a pool of machines (cluster)
When time comes to manage a production cluster, there are two approaches:
Create a couple of Docker hosts and run/connect containers together "manually" through scripts or using solutions like docker-compose. Monitoring the lifetime of your services/containers is your responsibility, and you should be prepared to handle service downtime.
Let a container orchestrator deal with everything and monitor the lifetime of your services to better cope with failures.
There are plenty of container orchestrators: Kubernetes, Swarm, Mesos, Nomad, Cloud Foundry, and probably many others. They power many large-scale companies and infrastructures, like eBay, so they surely found a benefit in using them.
Pick the right replication strategy
A container is best used as a disposable resource, meaning you can stop and restart the DB independently and it shouldn't impact the backend (other than throwing an error because the DB is down). As such, you should be able to handle any kind of network partition, as long as your services are properly replicated across several hosts.
You need to pick a proper replication strategy, to make sure your service stays up and running. You can for example replicate your DB across Cloud provider Availability Zones so that when an entire zone goes down, your data remains available.
Using Kubernetes, for example, you can put each of your containers (1 FE, 1 BE & 1 DB) in a pod. Kubernetes will deal with replicating this pod on many hosts and will monitor that these pods are always up and running; if not, a new pod will be created to cope with the failure.
If you want to mitigate the effect of network partitions, specify node affinities, hinting the scheduler to place containers on the same subset of machines and replicate on an appropriate number of hosts.
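In Swarm, placement preferences are the analogous mechanism. A hedged sketch (the "zone" node label is an assumption you would define yourself):
# Label nodes with the zone they live in:
docker node update --label-add zone=a node1
# Spread the replicas evenly across the labeled zones:
docker service create --name db --replicas 3 --placement-pref spread=node.labels.zone postgres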
How many containers per host?
It really depends on the number of machines you use and the resources they have.
The rule is that you shouldn't bloat a host with too many containers if you don't specify any resource constraints (in terms of CPU or memory). Otherwise, you risk compromising the host and exhausting its resources, which in turn will impact all the other services on the machine. A good replication strategy is important not only at the single-service level, but also to ensure good health for the pool of services sharing a host.
Resource constraints should be set according to the type of your workload: a DB will probably use more resources than your front-end container, so you should size accordingly.
As an example, using Swarm, you can explicitly specify the number of CPUs or the amount of memory you need for a given service (see the docker service documentation). There are many possibilities, and you can also give upper and lower bounds on CPU or memory usage. Depending on the values chosen, the scheduler will pin the service to a machine with the available resources.
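A hedged example (service name, image, and numbers are arbitrary), reserving a floor and capping a ceiling for the service's tasks:
docker service create --name api \
  --reserve-cpu 0.5 --reserve-memory 256M \
  --limit-cpu 2 --limit-memory 1G \
  myorg/api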
Kubernetes works pretty much the same way and you can specify limits for your pods (See documentation).
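The Kubernetes equivalent, as a sketch (assuming an existing Deployment named "api"):
# Set requests (reservations) and limits on all containers of the Deployment:
kubectl set resources deployment api --requests=cpu=500m,memory=256Mi --limits=cpu=2,memory=1Gi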
Mesos has more fine-grained resource management policies, with frameworks for specific workloads (like Hadoop, Spark, and many more) and with over-committing capabilities. Mesos is especially convenient for Big Data workloads.
How should services be split?
It really depends on the orchestration solution:
In Docker Swarm, you would create a service for each component (FE, BE, DB) and set the desired replication number for each service.
In Kubernetes, you can either create a pod encompassing the entire application (FE, BE, DB and the volume attached to the DB) or create separate pods for the FE, BE, DB+volume.
Generally: use one service per type of container. Regarding groups of containers, evaluate whether it is more convenient to scale the entire group as an atomic unit (i.e. a pod) than to manage the containers separately.
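For example, in Swarm that could look like this (names, images, and replica counts are purely illustrative), with each service scaled independently:
docker service create --name fe --replicas 3 myorg/frontend
docker service create --name be --replicas 5 myorg/backend
docker service create --name db --replicas 1 postgres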
Sum up
Containers are better used with an orchestration framework/platform. There are plenty of available solutions to deal with container scheduling and resource management. Pick one that might fit your use case, and learn how to use it. Always pick an appropriate replication strategy, keeping in mind possible failure modes. Specify resource constraints for your containers/services when possible to avoid resource exhaustion which could potentially lead to bringing a host down.
This depends on the type of application you run in your containers. Off the top of my head, I can think of a couple of different ways to look at this:
Is your application disk-space heavy?
Do you need the application to be failsafe across multiple machines?
Can you run multiple instances of different applications on the same host without decreasing their performance?
Do you use software like Kubernetes or Swarm to manage your machines?
I think most of these questions are interesting to answer even without containers. Containers might free you from thinking about single hosts, but you still have to decide and measure the load of your host machines yourself.
Minor question: does it ever make sense to have fewer DBs than backend servers?
Yes.
Consider cases where you issue normal SQL select statements (without many joins) to get data from the database, but your business logic demands heavy computation. In such cases, you might keep your back-end service count high and your database service count low.
It all depends on the use case which is getting solved.
The number of containers per host depends on the design ratio of the host and the workload ratio of the containers. Both are throughput/capacity ratios. In the old days this was called E/B, for execution/bandwidth: execution was CPU, bandwidth was I/O, and solutions were said to be CPU- or I/O-bound.
Today memories are very large, so the critical factor is usually CPU/nest capacity. We describe workloads as CPU-intense or nest-intense. A useful proxy for nest capacity is the size of the highest-level cache, and a useful design-ratio estimator is (clock x cores)/cache. For example, a 3 GHz, 8-core machine with 32 MB of last-level cache has a design ratio of 24/32 = 0.75, while the same machine with 64 MB of cache has 24/64 = 0.375. For the same core count, the machine with the lower design ratio will hold more containers, in part because the machine with more cache scales better and sees less saturation at higher utilization.

In Docker Swarm mode is there any point in replicating a service more than the number of hosts available?

I have been looking into the new Docker Swarm mode that will be available in Docker 1.12. In this Docker Swarm Mode Walkthrough video, they create a simple Nginx service that is composed of a single Nginx container. In the video, they have 4 nodes in the Swarm cluster. During the scaling demonstration, they increase the replication factor to 10, thus creating 10 copies of the Nginx container across all 4 machines in the cluster.
I get that the video is just a demonstration, but in the real world, what is the point of creating more replicas of a container (or service) than there are nodes in the Swarm cluster? It seems pointless, since two containers on the same machine would be sharing that machine's finite computing resources anyway. I don't get what the benefit is.
So my question is, is there any real world benefit to replicating a Docker service or container beyond the number of nodes in the Swarm cluster?
It depends on how the application handles threading and multiple requests. A single-threaded application, or a job that only handles one request at a time, may use a fraction of the OS's resources and benefit from running multiple instances on a single host. An application that's been tuned to process requests concurrently and which fully utilizes the OS will see no benefit, and will in fact incur a penalty from taking away resources to run multiple instances of the application.
One advantage can be performing live, zero-downtime software updates. See the Docker 1.12rc2 Swarm tutorial on rolling updates.
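A hedged sketch of such a rolling update (service name and image tag are illustrative): replace tasks two at a time, pausing 10 seconds between batches, so some replicas are always serving.
docker service update --image nginx:1.25 --update-parallelism 2 --update-delay 10s web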
Say you have RabbitMQ or another queue system with a high data load. You can start more worker containers than you have nodes to handle the high data load on your RabbitMQ.
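Since each worker consumes messages from the queue independently, the replica count can usefully exceed the node count. For example (the service name "worker" is illustrative):
docker service scale worker=12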
Hardware resource constraints are not the only thing one needs to consider when replicating services.
A simple example would be a service that provides security details. The resource consumption of this service will be low (read a record from the DB/cache and send it out). However, if 20 or 30 requests have to be handled by the same single instance, the requests will queue up.
Yes, there are better ways to implement my example, but I believe it is good enough to illustrate why one might replicate a service on the same host/node.
