I am digging into Kubernetes resource restrictions and having a hard time understanding what CPU limits are for. I know Kubernetes passes requests and limits down to the (in my case) Docker runtime.
Example: I have 1 node with 1 CPU and 2 pods with CPU requests: 500m and limits: 800m. In Docker, this results in --cpu-shares=512 (500m -> 0.5 * 1024 = 512) and --cpu-quota=80000 (800m -> 800 * 100). The kube-scheduler places both pods because the sum of requests does not exceed 100% of the node's capacity; in terms of limits, however, the node is overcommitted.
The above allows each container to get 80ms of CPU time per 100ms period (the default). As soon as CPU usage reaches 100%, CPU time is shared between the containers based on their weight, expressed in CPU shares. That would be 50% for each container, given the base value of 1024 and a share of 512 for each. At this point, in my understanding, the limits have no more relevance, because neither container can get its 80ms anymore; they would each get 50ms. So no matter what limits I define, once usage hits the critical 100%, CPU is partitioned by requests anyway.
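To make the conversion concrete, here is a minimal sketch, assuming Docker with the cgroupfs driver on a cgroup v1 host (the nginx image and the cgroup path are illustrative; cgroup v2 lays things out differently). It applies exactly those flags and reads back what the kernel sees:

# run a container with the values from the example above
docker run -d --name pod-a --cpu-shares=512 --cpu-quota=80000 nginx

# inspect the CFS settings Docker wrote into the container's cgroup
CID=$(docker inspect --format '{{.Id}}' pod-a)
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.shares         # 512
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.cfs_quota_us   # 80000
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.cfs_period_us  # 100000 (the default period)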
This makes me wonder: why should I define CPU limits in the first place, and does overcommitment make any difference at all? Requests, on the other hand, are completely understandable in the sense of "how much of a share do I get when everything is in use".
One reason to set CPU limits is that, if you set CPU request == limit and memory request == limit, your pod is assigned a Quality of Service class = Guaranteed, which makes it less likely to be OOMKilled if the node runs out of memory. Here I quote from the Kubernetes doc Configure Quality of Service for Pods:
For a Pod to be given a QoS class of Guaranteed:
Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
Every Container in the Pod must have a CPU limit and a CPU request, and they must be the same.
Another benefit of using the Guaranteed QoS class is that it allows you to lock exclusive CPUs for the pod, which is critical for certain kinds of low-latency programs. Quote from Control CPU Management Policies:
The static CPU management policy allows containers in Guaranteed pods with integer CPU requests access to exclusive CPUs on the node. ... Only containers that are both part of a Guaranteed pod and have integer CPU requests are assigned exclusive CPUs.
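As a hedged illustration of both points (the pod name and image below are placeholders, not taken from the question): a single-container pod whose CPU and memory requests equal its limits, with an integer CPU count, should be classed as Guaranteed and, if the node's kubelet runs the static CPU manager policy, becomes eligible for exclusive CPUs:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "1"
        memory: 256Mi
      limits:
        cpu: "1"
        memory: 256Mi
EOF

# confirm the QoS class Kubernetes assigned
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # Guaranteed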
According to the Motivation for CPU Requests and Limits section of the Assign CPU Resources to Containers and Pods Kubernetes walkthrough:
By having a CPU limit that is greater than the CPU request, you accomplish two things:
The Pod can have bursts of activity where it makes use of CPU resources that happen to be available.
The amount of CPU resources a Pod can use during a burst is limited to some reasonable amount.
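Sketched below (a hedged example; the pod name and image are placeholders) is what that burstable shape looks like, using the numbers from the original question:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: burst-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 500m
      limits:
        cpu: 800m
EOF

This container is guaranteed a 500m share, can opportunistically burst when spare cycles exist, but is never allowed past 800m worth of CPU time in any scheduling period.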
I guess that might leave us wondering why we care about limiting the burst to "some reasonable amount", since the very fact that it can burst seems to suggest there are no other processes contending for CPU at that time. But I find myself dissatisfied with that line of reasoning...
So first off I checked out the command line help for the docker flags you mentioned:
--cpu-quota int Limit CPU CFS (Completely Fair Scheduler) quota
-c, --cpu-shares int CPU shares (relative weight)
The reference to the Linux Completely Fair Scheduler means that, in order to understand the value of a CPU limit/quota, we need to understand how the underlying process-scheduling algorithm works. Makes sense, right? My intuition is that it's not as simple as time-slicing CPU execution according to the CPU shares/requests and allocating whatever is left over at the end of some fixed time slice on a first-come, first-served basis.
I found this old Linux Journal article snippet which seems to be a legit description of how CFS works:
The CFS tries to keep track of the fair share of the CPU that would
have been available to each process in the system. So, CFS runs a fair
clock at a fraction of real CPU clock speed. The fair clock's rate of
increase is calculated by dividing the wall time (in nanoseconds) by
the total number of processes waiting. The resulting value is the
amount of CPU time to which each process is entitled.
As a process waits for the CPU, the scheduler tracks the amount of
time it would have used on the ideal processor. This wait time,
represented by the per-task wait_runtime variable, is used to rank
processes for scheduling and to determine the amount of time the
process is allowed to execute before being preempted. The process with
the longest wait time (that is, with the gravest need of CPU) is
picked by the scheduler and assigned to the CPU. When this process is
running, its wait time decreases, while the time of other waiting
tasks increases (as they were waiting). This essentially means that
after some time, there will be another task with the largest wait time
(in gravest need of the CPU), and the currently running task will be
preempted. Using this principle, CFS tries to be fair to all tasks and
always tries to have a system with zero wait time for each
process—each process has an equal share of the CPU (something an
“ideal, precise, multitasking CPU” would have done).
While I haven't gone as far as to dive into the Linux kernel source to see how this algorithm actually works, I do have some guesses I would like to put forth as to how shares/requests and quotas/limits play into this CFS algorithm.
First off, my intuition leads me to believe that different processes/tasks accumulate wait_runtime at different relative rates based on their assigned CPU shares/requests. Wikipedia claims that CFS is an implementation of weighted fair queuing, and this seems like a reasonable way to achieve a shares/requests-based weighting in an algorithm that tries to minimize wait_runtime for all processes/tasks. I know this doesn't directly speak to the question that was asked, but I want to be sure that my explanation as a whole has a place for both concepts: shares/requests and quotas/limits.
Second, with regard to quotas/limits, I intuit that these apply in situations where a process/task has accumulated a disproportionately large wait_runtime while waiting on I/O. Remember how the description quoted above says CFS prioritizes the process/task with the largest wait_runtime? If there were no quota/limit on a given process/task, then it seems to me that a burst of CPU usage on it would block all other processes/tasks from executing for as long as it takes for its wait_runtime to shrink enough that another task is allowed to preempt it.
So in other words, CPU quotas/limits in Docker/Kubernetes land are a mechanism that allows a given container/pod/process to burst in CPU activity to catch up with other processes after waiting on I/O (rather than CPU), without unfairly blocking other processes from also doing work while it does so.
There is no upper bound with CPU shares alone. If there are free cycles, you are free to use them. A limit is imposed so that one rogue process cannot hold onto the resource forever.
There has to be some fair scheduling; CFS enforces it using the CPU quota and CPU period derived from the limit attribute configured here.
To conclude, this kind of property ensures that when I schedule your task, you get a minimum of 50 milliseconds to finish it. If you need more time, then, provided no one is waiting in the queue, I will let you run a little longer, but never more than 80 milliseconds.
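A hedged way to watch both mechanisms at once on a single-CPU host (busybox and the busy-loop command are just placeholders for any CPU-bound workload):

docker run -d --name a --cpu-shares=512 --cpu-period=100000 --cpu-quota=80000 \
  busybox sh -c 'while true; do :; done'
docker run -d --name b --cpu-shares=512 --cpu-period=100000 --cpu-quota=80000 \
  busybox sh -c 'while true; do :; done'

# with both containers busy, each should hover near 50% (the shares decide)
docker stats --no-stream a b
# stop one and the survivor rises only to about 80%, where its quota caps it
docker stop b && sleep 5 && docker stats --no-stream a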
I think it's correct that, during periods where the Node's CPU is being fully utilized, it's the requests (CPU shares) that will determine how much CPU time each container gets, rather than the limits (which are effectively moot at that point). In that sense, a rogue process can't do unlimited damage (by depriving another of its requests).
However, there are still two broad uses for limits:
If you don't want a container to be able to use more than a fixed amount of CPU even if extra CPU is available on the Node. It might seem weird that you wouldn't want excess CPU to be utilized, but there are use cases for this. Some that I've heard:
You're charging customers for the right to use up to x amount of compute resources (a limit), so you don't want to give them more sometimes for free (which might dissuade them from paying for a higher tier on your service).
You're trying to figure out how a service will perform under load, but this gets complicated/unpredictable, because the performance during your load testing depends on how much spare CPU is lying around that the service is able to utilize (which might be a lot more than the spare CPU that'll actually be on the Node during a real-world high-load situation). This is mentioned here as a big risk.
If the requests on all the containers aren't set especially accurately (as is often the case; devs might set the values upfront and forget to update them as the service evolves, or not even set them very carefully initially). In these cases, things sometimes still function well enough if there's enough slack on the Node; limits can then be useful to prevent a buggy workload from eating all the slack and forcing the other pods back to their incorrectly-set(!) requested amounts.
Related
As the image shows, as the memory capacity increases, the access time also increases.
Does it make sense that access time depends on memory capacity?
No. The images show that technologies with lower cost in $ / GB are slower. Within a certain level (tier of the memory hierarchy), performance is not dependent on size. You can build systems with wider busses and so on to get more bandwidth out of a certain tier, but it's not inherently slower to have more.
Having more disks or larger disks doesn't make disk access slower; their latency is close to constant, determined by the nature of the technology (a rotating platter).
In fact, larger-capacity disks tend to have better bandwidth once they do seek to the right place, because more bits per second are flying under the read / write heads. And with multiple disks you can run RAID to utilize multiple disks in parallel.
Similarly for RAM, having multiple channels of RAM on a big many-core Xeon increases aggregate bandwidth. (But unfortunately hurts latency due to a more complicated interconnect vs. simpler quad-core "client" CPUs: Why is Skylake so much better than Broadwell-E for single-threaded memory throughput?) But that's a sort of secondary effect, and just using RAM with more bits per DIMM doesn't change latency or bandwidth, assuming you use the same number of DIMMs in the same system.
I have some upstream Flask containers and the CPU usage hits 100% when I handle some requests.
The system shows that the containers are using 100% of the CPU.
My questions are:
If I limit the CPU usage on these containers, will they exit (with a zero error code) when they hit their allocated resources? Or put differently, what are the disadvantages of limiting resources on Docker containers?
Which is the better approach for allocating resources to Docker containers (for 6 CPU cores)?
a) Two containers running with default settings (using as many resources as the kernel can provide, presumably).
b) Four containers that can each use only 1 CPU (--cpus='1').
Please let me know if you want me to elaborate more.
Thanks in Advance
Containers (and other Linux processes) that try to use more CPU cycles than they have been allocated will just get throttled: the Linux kernel will schedule other processes instead. Going over your CPU limit has no adverse consequences other than your process running slower.
For example, say your program starts 4 threads and each runs some intensive computation using a full core, but you're running this in a Docker container with --cpus=2. All four threads will run, but the combined program will be limited to 200% CPU, and the overall performance will be similar to if you had only launched 2 threads.
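A hedged way to see this throttling in action (busybox and the backgrounded shell busy-loops stand in for the four computation threads):

docker run -d --name capped --cpus=2 busybox sh -c \
  'for i in 1 2 3 4; do (while true; do :; done) & done; wait'

# docker stats should report the container hovering around 200% CPU,
# even though four loops are runnable; the kernel throttles the rest
docker stats --no-stream capped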
You will usually get better overall system utilization if you don't explicitly limit CPU utilization. If you are running 4 containers, and one of them is running the 4-thread computation job described above but the other three are idle, you will fully use the available system resources if you don't have limits.
If you do have a specific computationally intensive container, you may want to limit its CPU utilization to not starve out other processes. If you only have the one worker container and three Web server containers, consider limiting the worker to 3 or 3.5 CPUs on a 4-core system to guarantee some spare cycles for HTTP traffic. This is a tuning optimization, so look into it only if you're seeing a problem.
Note that CPU and memory work differently. You can't really use "too much" CPU, since if you wait there will always be more CPU cycles, but the kernel rations out what your process is able to run. On the other hand, memory is fixed, and your process will get killed if it goes over a memory limit.
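To see that contrast in a minimal, hedged way (the python:3 image is just a convenient allocator): exceed a memory limit and the process is killed rather than throttled.

docker run --rm -m 64m python:3 python -c "x = bytearray(256 * 1024 * 1024)"
echo $?   # typically 137, i.e. killed by the kernel OOM killer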
I noticed there is an option that allows specifying a machine type.
What is the criteria I should use to decide whether to override the default machine type?
In some experiments I saw that throughput is better with smaller instances, but on the other hand jobs tend to experience more "system" failures when many small instances are used instead of a smaller number of default instances.
Thanks,
G
Dataflow will eventually optimize the machine type for you. In the meantime here are some scenarios I can think of where you might want to change the machine type.
If your ParDo operation needs a lot of memory, you might want to change the machine type to one of the high-memory machines that Google Compute Engine provides.
Optimizing for cost and speed. If your CPU utilization is less than 100% you could probably reduce the cost of your job by picking a machine with fewer CPUs. Alternatively, if you increase the number of machines and reduce the number of CPUs per machine (so total CPUs stays approximately constant) you can make your job run faster but cost approximately the same.
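For example (a hedged sketch; the option name --workerMachineType is from the Java SDK's worker pool options as I remember them, and the project and class names are placeholders, so check the pipeline options of the SDK version you are on), the machine type can be pinned when the job is launched:

mvn compile exec:java -Dexec.mainClass=com.example.MyPipeline \
  -Dexec.args="--runner=DataflowRunner --project=my-project --workerMachineType=n1-highmem-4"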
Can you please elaborate more on what type of system failures you are seeing? A large class of failures (e.g. VM interruptions) are probabilistic, so you would expect to see a larger absolute number of failures as the number of machines increases. However, failures like VM interruptions should be fairly rare, so I'd be surprised if you noticed an increase unless you were using an order of magnitude more VMs.
On the other hand, it's possible you are seeing more failures because of resource contention due to the increased parallelism of using more machines. If that's the case, we'd really like to know about it to see if this is something we can address.
I am confused by a Hadoop namenode memory problem.
When namenode memory usage is higher than a certain percentage (say 75%), reading and writing HDFS files through the Hadoop API will fail (for example, calling open() will throw an exception). What is the reason? Has anyone seen the same thing?
P.S. This time the namenode disk I/O is not high and the CPU is relatively idle.
What determines the namenode's QPS (queries per second)?
Thanks very much!
Since the namenode is basically just an RPC server managing a HashMap with the blocks, you have two major memory problems:
The Java HashMap is quite costly, and its collision resolution (separate chaining) is costly as well, because it stores colliding elements in a linked list.
The RPC server needs threads to handle requests. Hadoop ships with its own RPC framework, and you can configure the thread count with dfs.namenode.service.handler.count for the datanodes (it defaults to 10), or with dfs.namenode.handler.count for other clients, such as MapReduce jobs and JobClients that want to run a job. When a request comes in and the server wants to create a new handler, it may go out of memory (new threads also allocate a good chunk of stack space; maybe you need to increase that).
So these are the reasons why your namenode needs so much memory.
What determines the namenode's QPS (queries per second)?
I haven't benchmarked it yet, so I can't give you very good tips on that. Certainly tune the handler counts to be higher than the number of tasks that can run in parallel, plus speculative execution.
Depending on how you submit your jobs, you have to fine tune the other property as well.
Of course you should always give the namenode enough memory, so it has headroom and does not fall into full garbage-collection cycles.
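A hedged sketch of where those knobs live (the values below are illustrative, not recommendations):

# namenode heap: raise it in hadoop-env.sh so the JVM has headroom before full GC
export HADOOP_NAMENODE_OPTS="-Xmx8g ${HADOOP_NAMENODE_OPTS}"

# RPC handler counts: properties like these go inside <configuration>
# in hdfs-site.xml (printed here only to keep the shell snippet self-contained)
cat <<'EOF'
<property><name>dfs.namenode.handler.count</name><value>64</value></property>
<property><name>dfs.namenode.service.handler.count</name><value>32</value></property>
EOF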
Azure embraces the notion of elastic scaling, and I've been able to achieve this with my Worker Roles. However, when it comes to my Web Roles (e.g. MVC apps), I am not sure what to monitor (or how) to determine when it's a good time to increase (or decrease) the number of running instances. I'm assuming I need to monitor one or many performance counters, but I'm not sure where to start.
Can anyone recommend a best practice for assessing an MVC Web Role instance's load relative to scaling decisions?
This question is a bit open-ended, as monitoring is typically app-specific. Having said that:
Start with simple measurements that you'd look at on a local server, representing KPIs for your app: for instance, network utilization. This TechNet article describes performance counters collected by System Center for Windows Azure, such as:
ASP.NET Applications Requests/sec
Network Interface Bytes Received/sec
Network Interface Bytes Sent/sec
Processor % Processor Time Total
LogicalDisk Free Megabytes
LogicalDisk % Free Space
Memory Available Megabytes
You may also want to watch # of requests queued and request wait time.
Network utilization is interesting, since your NIC provides approx. 100Mbps per core and could end up being a bottleneck even when CPU and other resources are underutilized. You may need to scale out to more instances to handle high-bandwidth scenarios.
Also: I tend to give less importance to CPU utilization, even though it's so easy to measure (and shows up so frequently in examples). Running a CPU at near capacity is a good thing usually, since you're paying for it and might as well use as much as possible.
As far as decreasing: this needs to be handled a bit more carefully. Windows Azure compute is billed by the hour. If, say, you scale out to an extra instance at 11:50 and scale in again at 12:10, you've just incurred two CPU-hours. Also: you don't want to scale out, then take new measurements and decide you can now scale back again (effectively creating a constant pulse of adding and removing instances). To make things easier, consider the Autoscaling Application Block (WASABi), found in the Enterprise Library. It has all the scale rules baked in (such as the ones I just mentioned) and is very straightforward to use.