Restrict memory across all containers - docker

I know you can set a memory restriction per container in Docker via docker run -m <x>, but is it possible to set an aggregate restriction across all containers, rather than on each container individually?
For example, if I have 5 containers and 2GB of RAM, is it possible to configure Docker so that, in total, it can allocate no more than 1GB, meaning the sum of memory allocated to containers may not exceed 1GB?

For now, Kubernetes only supports limits at the container level, via the resources: limits parameter, and only for CPU and memory.
You can still control how much memory/CPU a pod uses in total, since you define the pod: if you assign a specific maximum to each container, the pod will not be able to use more resources than the sum of the individual limits.
This is not ideal, because you may want to let each container use as much memory as it needs while keeping the pod as a whole below a certain threshold. They have an issue open for what you want here
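A minimal sketch along those lines (pod name, container names, images, and sizes here are all placeholders): each container carries its own limit, so the pod as a whole can never use more than their sum, 1Gi in this case.

# Illustrative pod spec: two containers, each capped individually
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: capped-pod
spec:
  containers:
  - name: app-a
    image: nginx
    resources:
      limits:
        cpu: "500m"
        memory: "512Mi"
  - name: app-b
    image: redis
    resources:
      limits:
        cpu: "500m"
        memory: "512Mi"
EOF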

Docker containers and resource restrictions

I'm trying to understand if the following is possible. I have a single Docker container (container_A) running on a host machine that also hosts a non-Docker service (service_B). I would like to set minimum and maximum CPU & memory resource usage for container_A; however, I want priority to go to service_B.
If service_B is receiving a lot of requests, I want container_A's resource limits dynamically reduced. If service_B is relatively quiet, I want container_A's limits dynamically increased.
I will probably be using a simple metric such as HOST_CPU_UTILISATION - CONTAINER_UTILISATION to determine if service_B should be getting priority, but what I don't know is if I can dynamically restrict container_A when service_B is busy.
In other words can I implement a rule that states container_A can utilise CPU & Memory that is available up to 80% of total CPU & Memory?
While I'm not currently using Docker Swarm or k8s, I'd consider them if they gave me what I needed.
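For what it's worth, a rough sketch of the kind of rule described above (the container name, threshold, polling interval, and the 0.5-core floor are all placeholder values), using docker update to change the cap while the container is running and the 1-minute load average as a crude stand-in for service_B being busy:

#!/bin/sh
# Sketch only: throttle container_A whenever overall host load exceeds ~80%
# of the available cores, and lift the cap again when the host is quiet.
CONTAINER=container_A
CORES=$(nproc)
while sleep 10; do
  LOAD=$(cut -d' ' -f1 /proc/loadavg)
  BUSY=$(awk -v l="$LOAD" -v c="$CORES" 'BEGIN {print ((l / c > 0.8) ? 1 : 0)}')
  if [ "$BUSY" -eq 1 ]; then
    docker update --cpus 0.5 "$CONTAINER"         # host is busy: shrink container_A's cap
  else
    docker update --cpus "$CORES" "$CONTAINER"    # host is quiet: give the cores back
  fi
done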

How do I limit container disk usage without evicting?

I'm trying to use Kubernetes on GKE (or EKS) to create Docker containers dynamically for each user and give users shell access to these containers. I want to be able to set a maximum limit on the disk space a container can use (or at least on one of the folders within each container), but implemented in such a way that the pod isn't evicted if the limit is exceeded. Instead, ideally, a user would get an error when trying to write more data to disk than the specified limit (e.g., "Disk quota exceeded").
I'd rather not have to use a Kubernetes volume that's backed by a gcePersistentDisk or an EBS volume to minimize the costs. Is there a way to achieve this with Kubernetes?
Assuming you're using an emptyDir volume on Kubernetes, which is temporary disk space attached to your pod, you can set a size limit for it.
See the answer at https://stackoverflow.com/a/45565438/54929, this question is likely a duplicate.
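A minimal manifest sketch along those lines (pod name, image, mount path, and the 1Gi figure are placeholders), giving each user's scratch directory an emptyDir volume with a sizeLimit:

# Illustrative pod: the user's home directory is an emptyDir capped at 1Gi
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: user-shell
spec:
  containers:
  - name: shell
    image: ubuntu
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /home/user
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi
EOF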

How to set the CPU priority (niceness) of a Docker container?

One of my containers is always busy, and is taking CPU away from other containers (webservers) that need to be responsive and are only active from time to time.
I would like to lower the CPU priority of the CPU-consuming container, so that whenever the other containers need the CPU, it is not clogged.
How do I do this? I have been searching the web for a while now, but I can't find the answer.
I have tried running the container with --entrypoint='nice 10 mybinary', but it turns out --entrypoint can only run binaries, not shell commands.
You can limit CPU resources at the container level. I recommend using --cpu-shares 512 for your case.
https://docs.docker.com/config/containers/resource_constraints/:
Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, and give it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access.
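For example (image and container names below are placeholders), giving the busy worker half the default weight so the web servers win whenever they actually need the CPU:

# Illustrative only: the relative weights matter, not the absolute numbers.
docker run -d --name busy-worker --cpu-shares 512  busy-image
docker run -d --name webserver   --cpu-shares 1024 web-image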
Setting the CPU shares is the most direct answer to your request, and is typically preferred over adding capabilities to the container that could be used by a malicious actor inside the container to impact the host. The only reason I can think of to add the SYS_NICE capability to the container is if you have multiple processes inside the container and want to give them different priorities, or need to change a priority while the container is running.
The more traditional solution to noisy neighbors is to configure each container with a limit on how much CPU and memory it is allowed to use. This is an upper bound, so realize there may be idle CPU resources if you set this low and do not have any other tasks available for the CPU to run.
The easiest way to set the limit on containers from the docker run command line is with --cpus which allows you to configure a fractional number of cores to be available to the container. Passing an option like --cpus 2.5 allows the container to use as many as 2.5 cores before the kernel scheduler throttles the process. If you had a 4 core host, that would ensure that at least 1.5 cores are always available to other processes.
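For instance, on a hypothetical 4-core host (image and container names are placeholders):

# Hard cap: the container can never use more than 2.5 cores, which leaves
# at least 1.5 cores for everything else on the host.
docker run -d --name busy-worker --cpus 2.5 busy-image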
Related to these limits, with Swarm Mode you can also configure a reservation for CPU (and memory). The reservation is a lower bound that Docker ensures is not also reserved by any other containers on the node. It is used when selecting nodes to schedule containers on, and may prevent some containers from being scheduled when there are not enough resources available, rather than packing so many jobs onto a single node that it fails.
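In Swarm Mode that might look something like this (service name, image, and the values are only illustrative):

# Reserve half a core and 256M on whichever node the task is placed,
# and cap the service at 2 cores / 1G.
docker service create --name web \
  --reserve-cpu 0.5 --reserve-memory 256M \
  --limit-cpu 2 --limit-memory 1G \
  web-image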
--cpu-shares looks like a good answer, although it's not clear to me how to verify it's working. I'm also curious what the maximum value is; the documentation doesn't say.
But, as an alternative for trusted containers, that same document also shows --cap-add=sys_nice that will allow changing process priorities within a container. i.e., if the nice or renice command is available within the container, it should work when you add the sys_nice capability. You'll only want to allow this capability for trusted containers because you don't want untrusted programs changing their own priorities willy nilly.
You can verify by inspecting the NI column for the process in question using top or ps -efl on the host.
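For example (the container name is a placeholder, renice must be available in the image, and mybinary is the process from the question):

# Grant SYS_NICE only to a container you trust, then renice from inside it.
docker run -d --name trusted --cap-add=sys_nice busy-image
docker exec trusted renice -n 10 -p 1        # deprioritise PID 1 inside the container
# On the host, confirm the new niceness in the NI column:
ps -eo pid,ni,comm | grep mybinary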

One docker container per node or many containers per big node

We have a little farm of docker containers, spread over several Amazon instances.
Would it make sense to have a few big host instances (in terms of RAM and size) each hosting multiple smaller containers, or one host instance per container, sized according to that container's needs?
EDIT #1
The issue here is that we need to decide up-front. I understand that we can adjust later using various monitoring stats, but we need to make some architecture and infrastructure decisions before the system is in use. Moreover, we do not have control over what content is going to be deployed.
You should read
An Updated Performance Comparison of Virtual Machines and Linux Containers
http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf
and
Resource management in Docker
https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/
You need to check how much memory, CPU, I/O, etc. your containers consume, and then draw your conclusions.
At the very least, you can easily check a few things with docker stats and docker top my_container (see the example after the docs links below).
The associated docs:
https://docs.docker.com/engine/reference/commandline/stats/
https://docs.docker.com/engine/reference/commandline/top/
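For example (my_container is a placeholder name):

# One-shot snapshot of per-container CPU, memory, and block I/O on this host
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.BlockIO}}"
# Processes running inside a given container
docker top my_container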

Can the RAM requested for a container exceed the physical memory of a single host with Docker Swarm?

I'm new to Docker, and I want to build a Docker cluster with docker-swarm.
Reading the link https://docs.docker.com/swarm/scheduler/strategy/ I have a question:
Suppose I have 2 nodes with 2GB of RAM each. What if I run a container that asks for 3GB of RAM? Will it work?
Or is there another method?
Thanks.
If you do not set any user memory constraints at runtime, the process will be able to use as much memory as it wants, eventually swapping to disk like any other process would when there is no free physical memory on the host.
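To make that concrete (the image name and the sizes are just examples): without -m the container can eat all of a 2GB node's RAM and then spill into swap, whereas -m together with --memory-swap caps it explicitly.

# No constraint: the container competes for all host memory, then swap.
docker run -d --name unbounded my-image
# Constrained: at most 1g of RAM and 2g in total including swap.
docker run -d --name bounded -m 1g --memory-swap 2g my-image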
