Limiting a Docker Container to a single cpu core - docker

I'm trying to build a system which runs pieces of code in consistent conditions, and one way I imagine this being possible is to run the various programs in docker containers with the same layout, reserving the same amount of memory, etc. However, I can't seem to figure out how to keep CPU usage consistent.
The closest thing I can seem to find is "cpu shares," which, if I understand the documentation correctly, limit cpu usage relative to what other containers/other processes are running on the system and what's available on the system. They do not seem to be capable of limiting the container to an absolute amount of cpu usage.
Ideally, I'd like to set up docker containers that would be limited to using a single cpu core. Is this at all possible?

If you use a newer version of Docker, you can use the --cpuset-cpus option of docker run to specify the CPU cores you want to allocate:
docker run --cpuset-cpus="0" [...]
If you use an older version of Docker (< 0.9), which uses LXC as the default execution environment, you can use --lxc-conf to configure the allocated CPU cores:
docker run --lxc-conf="lxc.cgroup.cpuset.cpus = 0" [...]
In both of those cases, only the first CPU core will be available to the docker container. Both of these options are documented in the docker help.
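As a quick sanity check (just an illustrative sketch; the ubuntu image and the --rm flag here are arbitrary choices), you can confirm that the container only sees one core, since nproc honours the CPU affinity mask imposed by the cpuset:
docker run --rm --cpuset-cpus="0" ubuntu nproc
This should print 1.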

I've tried to provide a tutorial on container resource allocation:
https://gist.github.com/afolarin/15d12a476e40c173bf5f

Related

Setting CPU and Memory limits globally for all docker containers

There are many examples which talk about setting memory, cpu, etc. using docker run. Is it possible to set these globally, so that every container is created with those values?
There may be other ways, like using AppArmor maybe (I'll check), but the first thing that comes to my mind is this project from a friend:
Docker enforcer
https://github.com/piontec/docker-enforcer
This project is a docker plugin that will kill containers if they don't comply with certain pre-defined policies, such as having strict memory and cpu limits.
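Until something like that is in place, one low-tech workaround is a thin wrapper around docker run that always injects your default limits. This is only a sketch; the script name, limit values, and flag choices are arbitrary examples, not Docker defaults:
#!/bin/sh
# docker-run-limited: hypothetical wrapper that applies site-wide default limits.
# The image, command, and any per-container overrides are passed through via "$@".
exec docker run --memory=512m --cpu-shares=512 "$@"
You would then launch containers with this wrapper instead of calling docker run directly.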

How is the host machine's CPU utilized by docker containers and other applications running on the host?

I am running a micro-service application in a docker container and have to test it using the JMeter tool. So I am running JMeter on my host machine, which has 4 cores. I allocate 2 cores to the container using the --cpus=2 flag when running the container, meaning it can use up to 2 cores as needed while running. I leave the remaining 2 cores for JMeter, other applications, and system usage.
I need clarification on what will happen if JMeter and the other applications need more than 2 cores while the container also fully needs its allocated 2 cores.
Is there any way to allocate 2 cores fully to the container? (Meaning that no other application or the system can use those 2 cores.)
Thank you in advance.
The answer is most probably "no"; the explanation will differ depending on your operating system.
You can try to implement this by playing with CPU affinity; however, CPU is not the only metric you should be looking at, and I would be more concerned about RAM and disk usage.
In general, having the load generator and the application under test on the same physical machine is a very bad idea, because both are very resource intensive. Consider using 2 separate machines for this; otherwise both will suffer from context switches, and you will not be able to monitor the resource usage of JMeter and the application under test using the JMeter PerfMon Plugin.
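If you still want to experiment with the CPU affinity idea on a single machine, a rough sketch is to pin the container and JMeter to disjoint core sets (the image name and test plan path below are placeholders). Keep in mind this only constrains these two processes; other host processes can still be scheduled on cores 0-1 unless you isolate them separately:
docker run --cpuset-cpus="0,1" my-microservice
taskset -c 2,3 jmeter -n -t test-plan.jmx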

Can docker share memory and CPU between containers as needed?

If I am running multiple docker containers with bursty memory and CPU utilization, will they be able to use the full capacity of the host machine? Or will they be limited to their CPU and memory limits of the individual container definitions?
For example:
If I were running 3 containers that burst to 1GB of memory once per day, at disjoint times.
And similarly, if those same containers were instead CPU heavy and burst to 1 CPU unit once per day, at disjoint times.
Could I run those 3 containers on a box with only 1.1GB of memory, or 1.1 CPU unit respectively?
Docker containers are not VMs. They run as isolated processes on top of the host OS kernel, so there is no hypervisor magic behind them.
Processes running inside a container are not much different from host processes from the kernel's point of view; they are just highly isolated.
Memory and CPU scheduling are handled by the host. What you set in the docker settings are CPU shares, which give priority and bounds to certain containers.
So yes, containers with sleeping processes won't consume much cpu/memory, provided the memory used during a processing spike is correctly freed afterwards; otherwise that memory would be swapped out, without much performance impact.
Instantiating a docker container will only consume memory resources. As long as no process is running, you will see zero cpu usage by it.
I would recommend reviewing the cgroups documentation, in particular the docs for cgroups v2, since they are better structured than the v1 docs. See chapter 5 for the cpu and memory controllers: https://www.kernel.org/doc/Documentation/cgroup-v2.txt
If you don't explicitly specify the --memory and --cpu-shares options at container startup, the container will have all the cpu share and memory available on the instance. If no other process is consuming the resources, then the container can use all the cpu and memory available.
In theory you should be able to run the 3 containers on the instance.
Make sure none of the containers ties up the memory or cpu resources.
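As an illustration of the soft-limits idea (the image names and values here are arbitrary), you could give each bursty container a memory reservation and CPU shares instead of hard caps:
docker run -d --memory-reservation=400m --cpu-shares=512 app-a
docker run -d --memory-reservation=400m --cpu-shares=512 app-b
docker run -d --memory-reservation=400m --cpu-shares=512 app-c
Without contention each container can burst to whatever the host has free; under contention the kernel reclaims memory back towards the reservation and CPU time is shared in proportion to the shares.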

Using Docker to load Memory Image?

As far as I understand Docker, it virtualizes a system and loads a certain image, booting it and doing some other stuff. Since I can use different OSes with docker, I think it has to be quite far-reaching in order to provide such an abstraction.
In order to speed up setting up a test environment, is it possible to freeze a docker instance in a certain state (like after initializing the database) and rerun the image from this point?
Docker does not virtualize a system and boot it. Instead of loading its own system kernel into memory, it simply creates encapsulated processes that run on the Linux kernel of the host system. That is, by the way, the reason why a Linux host is required.
There is no virtualization, just process/resource encapsulation. You can find more details about the Docker architecture and its concepts in the documentation.
A "freeze" would be a commit of your base image which you used to run your container. You can get back to that commit at any point in time by using the image id.

Docker metrics reside in different locations for different environments or versions

I need to gather docker metrics like cpu, memory, and I/O, but I noticed that on my Ubuntu 14.04 system the metrics are in a different location than on my CoreOS system:
For example:
The docker cpu metrics in ubuntu are located under:
/sys/fs/cgroup/cpuacct/docker/<dockerLongId>/cpuacct.stat
The docker cpu metrics for CoreOs are located under:
/sys/fs/cgroup/cpuacct/system.slice/docker-<dockerLongId>.scope/cpuacct.stat
Do you have an idea of what would be the best way to support both environments?
There are a few complications here. To start with the CoreOS vs. Ubuntu part: the difference is due to the fact that systemd slices are not used on Ubuntu.
man systemd.slice
In the end, control groups are designed to be configurable; at any given time a process can be reconfigured by moving its PID between different cgroups, so inherently there will be a small amount of unpredictable behavior. Still, those patterns should be stable for processes started by their respective init systems.
The best way to detect which method should be used would be to read /etc/os-release. The purpose of this file is to provide a stable method for determining not only the distro, but the version as well.
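As a complement to branching on /etc/os-release, a minimal shell sketch (the script name is hypothetical) could simply probe both known layouts for a given container id and read whichever one exists:
#!/bin/sh
# Usage: ./container-cpu-stats.sh <full-container-id>
CONTAINER_ID="$1"
for path in \
  "/sys/fs/cgroup/cpuacct/docker/${CONTAINER_ID}/cpuacct.stat" \
  "/sys/fs/cgroup/cpuacct/system.slice/docker-${CONTAINER_ID}.scope/cpuacct.stat"
do
  if [ -f "$path" ]; then
    cat "$path"
    exit 0
  fi
done
echo "cpuacct.stat not found for container ${CONTAINER_ID}" >&2
exit 1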
