I'm trying to understand if the following is possible. I have a single Docker container (container_A) running on a host machine that also hosts a non-Docker service (service_B). I would like to set minimum and maximum CPU & memory resource usage for container_A, however I want priority to go to service_B.
If service_B is receiving a lot of requests, I want container_A's resource limits dynamically reduced. If service_B is relatively quiet, I want container_A's limits dynamically increased.
I will probably be using a simple metric such as HOST_CPU_UTILISATION - CONTAINER_UTILISATION to determine if service_B should be getting priority, but what I don't know is if I can dynamically restrict container_A when service_B is busy.
In other words can I implement a rule that states container_A can utilise CPU & Memory that is available up to 80% of total CPU & Memory?
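To make this concrete, I gather `docker update` can change a running container's limits, so something like the following rough sketch is the kind of rule I have in mind (the `service_b_is_busy` check is just a placeholder for whatever metric I end up using, and the numbers assume a 4-core / 8 GB host):

```bash
#!/bin/sh
# Rough sketch only. Assumes a container named container_A on a 4-core / 8 GB host;
# service_b_is_busy is a hypothetical check built from host vs. container utilisation.
if service_b_is_busy; then
    # service_B needs headroom: shrink container_A's ceiling
    docker update --cpus 1 --memory 2g --memory-swap 2g container_A
else
    # service_B is quiet: let container_A use up to ~80% of the host
    docker update --cpus 3.2 --memory 6g --memory-swap 6g container_A
fi
```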
While I'm not currently using Docker Swarm or Kubernetes, I'd consider them if they gave me what I needed.
Related
One of my containers is always busy, and is taking CPU away from other containers (webservers) that need to be responsive and are only active from time to time.
I would like to lower the CPU priority of the CPU-consuming container, so that whenever the other containers need the CPU, it is not clogged.
How do I do this? I have been searching the web for a while now, but I can't find the answer.
I have tried running the container with --entrypoint='nice 10 mybinary', but it turns out --entrypoint only accepts a single binary, not a shell command with arguments.
You can limit CPU resources at the container level. I recommend using --cpu-shares 512 for your case.
https://docs.docker.com/config/containers/resource_constraints/:
Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, and give it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access.
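For example, something like this (image and container names are placeholders, and the values are only illustrative):

```bash
# Busy background worker gets a low weight...
docker run -d --name worker --cpu-shares 512 my-worker-image

# ...while the latency-sensitive webserver keeps the default weight of 1024,
# so under contention it receives roughly twice the CPU time of the worker.
docker run -d --name web --cpu-shares 1024 my-web-image
```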
Setting the CPU shares is the most direct answer to your request, and is typically preferred over adding capabilities to the container, which could be used by a malicious actor inside the container to impact the host. The only reason I can think of to add the SYS_NICE capability to the container is if you have multiple processes inside the container and want to give them different priorities, or need to change a priority while the container is running.
The more traditional solution to noisy neighbors is to configure each container with a limit on how much CPU and memory it is allowed to use. This is an upper bound, so realize there may be idle CPU resources if you set this low and do not have any other tasks available for the CPU to run.
The easiest way to set the limit on containers from the docker run command line is with --cpus which allows you to configure a fractional number of cores to be available to the container. Passing an option like --cpus 2.5 allows the container to use as many as 2.5 cores before the kernel scheduler throttles the process. If you had a 4 core host, that would ensure that at least 1.5 cores are always available to other processes.
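For example (the image name is a placeholder):

```bash
# On a 4-core host, cap the noisy container at 2.5 cores,
# leaving at least 1.5 cores for everything else.
docker run -d --name worker --cpus 2.5 my-worker-image
```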
Related to these limits, with Swarm Mode you can also configure a reservation for CPU (and memory). The reservation is a lower bound: Docker ensures that capacity has not been reserved for any other containers on the node. It is used to select nodes when scheduling containers, and may prevent some containers from being scheduled when there are not enough resources available, rather than packing so many jobs onto a single node that it fails.
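A sketch of what that looks like with `docker service create` (Swarm Mode only; names and numbers are illustrative):

```bash
# Reserve 0.5 cores and 256 MB for each replica, and cap it at 1 core / 512 MB.
# The scheduler will not place a replica on a node that cannot honour the reservation.
docker service create --name web \
  --reserve-cpu 0.5 --reserve-memory 256m \
  --limit-cpu 1 --limit-memory 512m \
  my-web-image
```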
--cpu-shares looks like a good answer, although it's not clear to me how to verify that it's working. I'm also curious what the max value is; the documentation doesn't say.
But, as an alternative for trusted containers, that same document also shows --cap-add=sys_nice that will allow changing process priorities within a container. i.e., if the nice or renice command is available within the container, it should work when you add the sys_nice capability. You'll only want to allow this capability for trusted containers because you don't want untrusted programs changing their own priorities willy nilly.
You can verify by inspecting the NI column for the process in question using top or ps -efl on the host.
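As a sketch, assuming the binary from the question and a placeholder image name, that could look like:

```bash
# Run the container with permission to change priorities, and start the binary
# under nice (the arguments after the image name are passed to the entrypoint):
docker run -d --name worker --cap-add=SYS_NICE \
  --entrypoint nice my-worker-image -n 10 mybinary

# Then confirm from the host: the NI column for the process should read 10.
ps -eo pid,ni,comm | grep mybinary
```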
So I have a worker docker image. I want to spin up a network of 500-50000 nodes to emulate what happens to a private blockchain such as Ethereum at different scales. What would be a recommendation for an open-source tool/library for such a job:
a) one that would make sure that even on a low-endish setup (say one 40-core node) all workers are moved forward in time equally (not in real time)
b) one that would allow (a) in a distributed setting (say 10 low-endish nodes on a single LAN)
In other words, I am not looking for real-time network emulation, so I can wait 10 hours to simulate 1 minute and it would be good enough for me. I thought about Kathara, yet a problem still stands: how do I make sure that, say, 10000 containers are given the same amount of ticks in a round-robin manner?
So how to emulate a complex network of docker workers?
I'm assuming that you will run each node inside of a container. To ensure each container runs with similar CPU access, you can configure CPU reservations and limits on each replica. These numbers get computed down to fractional slices of a core, so on an 8-core system you could give each container 0.01 of a core and run upwards of 800 containers. See the compose documentation on how to set resource constraints. And with swarm mode, you can spread these replicas across multiple nodes sharing a network.
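A minimal compose sketch of that idea, for swarm mode (image name, replica count, and values are illustrative):

```yaml
# Each replica is limited to 1% of a core, so ~800 replicas fit on 8 cores.
version: "3.8"
services:
  worker:
    image: my-worker-image
    deploy:
      replicas: 800
      resources:
        limits:
          cpus: "0.01"
        reservations:
          cpus: "0.01"
```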
That said, I think the advice to run shorter simulations on more hardware is good. You will find a significant portion of the time is spent in context switching between each process, possibly invalidating any measurements you want to take.
You will also encounter scalability issues with docker and the orchestration tool you choose. For example, you'll need to adjust the subnet size for any shared network, which defaults to a /24 with around 253 available IPs. The docker engine itself will likely spend a non-trivial amount of CPU time maintaining the state for all of the running containers.
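For example, a larger subnet can be created up front (the network name and range are illustrative; add `--driver overlay` if the network needs to span swarm nodes):

```bash
# A /16 gives roughly 65k addresses instead of the ~253 in the default /24.
docker network create --subnet 10.10.0.0/16 simnet
```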
Do we get efficiency in terms of load handling when the same container (in this case the container has an Apache server and a PHP application) is deployed 5 or more times (i.e. 5 or more containers are deployed) on the same host or VM?
Here, efficiency would mean whether the application in such an architecture is able to serve more requests, or serve requests faster.
As far as I am aware, each request launches a new apache-php thread, and if we have 5 containers handling the requests, will it be inefficient since the threads launched by apache will be context-switched out more often?
Scaling an application requires understanding why the application has reached its limit. For this, you need to gather metrics from the application and the host while it is fully loaded. Without testing and gathering metrics, you're only guessing why you're at capacity.
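For example, while a load test is running, something as simple as the following gives a first look at where the pressure is:

```bash
# Snapshot of per-container CPU and memory usage under load:
docker stats --no-stream

# Host-level view (overall CPU, memory, and I/O wait) to see which resource saturates first:
top
```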
If the application is fully utilizing one or more CPU cores, but not all of them, then it is either not multi-threaded, or it is encountering locks that prevent all the cores from being used. Adding more containers to the host in this scenario may help scale.
Typically, horizontal scaling is done because a single host is using all of some resource, like disk io, network bandwidth, memory, or cpu. If you find that the app is using all of one or more of these resources when under heavy load, then you need more hosts, not more containers running on the same host.
This all assumes you haven't configured docker to limit resources on the containers. If you reach your capacity with one container, and have resource limits configured, then the easiest way to get further performance is to remove or reduce those limits.
I'm doing an internship focused on Docker and I have to load-balance an application which has a client, a server, and a database. My goal is to dynamically scale the number of server containers according to their CPU usage. For instance, if the CPU usage is over 60%, I add a new container on the fly to divide the CPU usage. My problem is that my simulation does not get the CPU usage higher than 20%; it is a very simple simulation where random users register and go to random pages.
Question: How can I lower the CPU capacity of my server containers using my docker-compose file in order to artificially make the CPU usage go higher? I tried to use the cpu_quota and cpu_shares instructions, but they are not very well documented and I don't know how they work or how they affect my containers.
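For reference, this is the kind of thing I have been trying (compose file format 2.x; service and image names are placeholders, and the values are only guesses):

```yaml
# Cap the server at a quarter of one core so a modest load test pushes its CPU% much higher.
version: "2.4"
services:
  server:
    image: my-server-image
    cpus: 0.25
    # Lower-level equivalent: cpu_quota is the CPU time (in microseconds) the
    # container may use per cpu_period; 25000/100000 is also 0.25 of a core.
    # cpu_quota: 25000
    # cpu_period: 100000
```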
I know you can set a memory restriction per container in docker via run -m <x>, but is it possible to set an aggregate restriction across all containers, rather than each container individually?
For example, if I have 5 containers and 2GB of RAM, is it possible to configure docker so that it can allocate in total no more than 1GB, meaning the sum of memory allocated to containers may not exceed 1GB?
For now, Kubernetes only applies limits at the container level, via the resources: limits parameter, and only for CPU and memory.
You can control how much memory/CPU a pod is using, since you define the pod. So, if you assign a specific maximum usage to each container, the pod will not be able to use more resources than the sum of the individual ones.
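A sketch of what that looks like in a pod spec (names and numbers are illustrative); the pod as a whole can never exceed the sum of the per-container limits:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: my-app-image
      resources:
        limits:
          cpu: "500m"      # half a core
          memory: "512Mi"
    - name: sidecar
      image: my-sidecar-image
      resources:
        limits:
          cpu: "250m"
          memory: "256Mi"
```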
This is not ideal, because you may want to let each container use as much memory as needed, but have the pod not go past a certain threshold. They have an issue opened for what you want here