A single Docker container works fine for a small number of parallel processes, but when we increase the number of parallel processes to 20-30, process execution slows down. Even though the processes slow down, the container is still utilizing only 30-40% of the CPU.
I have tried the following things to make Docker utilize the CPU properly and keep the processes from slowing down:
I have explicitly allocated CPU and RAM to the Docker container.
I have also increased the number of file descriptors, the number of processes, and the stack size using ulimit.
Even after doing these two things, the container is still not utilizing the CPU properly. I am using docker exec to start multiple processes in a single running container. Is there an efficient way to use a single Docker container for executing multiple processes, or to make the container use 100% of the CPU?
The configuration I am using is:
Server - AWS EC2 t2.2xlarge (8 cores, 32 GB RAM)
Docker version - 18.09.7
OS - Ubuntu 18.04
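For reference, the limits described above were applied with flags roughly like these (the image name, script path, and exact values are just placeholders):
Start the container with an explicit CPU/RAM allocation and raised ulimits:
$ docker run -d --name worker --cpus=8 --memory=28g --ulimit nofile=65535:65535 --ulimit nproc=65535 my-image:latest
Launch each parallel task inside the same running container:
$ docker exec -d worker /app/run_task.sh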
When you run something on a machine, it consumes the following resources: 1. CPU 2. RAM 3. Disk I/O 4. Network bandwidth. If your container is exhausting any one of the resources listed above, the others can still look available. So monitor your system metrics to find the root cause.
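For example, on a Linux host you could watch each of these with commands like the following (the container name is a placeholder):
Per-container CPU, memory, network, and block I/O:
$ docker stats worker
Host-wide CPU and memory:
$ top
Disk I/O per device (from the sysstat package):
$ iostat -x 2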
I'm using KinD to create a local cluster and noticed that the CPU usage stays relatively high, between 40-60% for docker.hyperkit on macOS Catalina 10.15.6. Within Docker for Mac I limited the resources to CPUs: 4 and Memory: 6.00 GB.
My KinD cluster consists of a control plane node and three worker nodes. Is this CPU usage normal for Docker for Mac? Can I check the utilization per container?
Each kind "node" is a Docker container, so you can inspect those in "normal" ways.
Try running kind create cluster to create a single-node cluster. If you run docker stats you will get CPU, memory, and network utilization information; you can also get the same data through the Docker Desktop application, selecting (whale) > Dashboard. This brings up some high-level statistics on the container. Sitting idle on a freshly created cluster, this seems to be consistently using about 30% CPU for me. (So 40-60% CPU for a control-plane node and three workers sounds believable.)
Similarly, since each "node" is a container, you can docker exec -it kind-control-plane bash to get an interactive debugging shell in a node container. Once you're there, you can run top and similar diagnostic commands. On my single node I see the top processes as kube-apiserver (10%), kube-controller (5%), etcd (5%), and kubelet (5%). Again, that seems reasonably normal, though it might be nice if it used less CPU sitting idle.
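To see the utilization per node container: with the default cluster name, the nodes are typically called kind-control-plane, kind-worker, kind-worker2, and so on (names may differ if you renamed the cluster):
$ docker ps --filter name=kind
$ docker stats kind-control-plane kind-worker kind-worker2 kind-worker3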
We have a CentOS machine running Docker with a couple of containers. When running top, I see the process dockerd, which sometimes uses a lot of CPU. Does this CPU utilization include the CPU usage inside the containers?
I am using Docker 17.03-ee.
I have to create Docker containers dynamically, each with a variable amount of memory and a variable number of CPUs, and this hardware should not be shared between the containers.
As an example, let us consider that I have:
a host with 8 GB memory and 4 cores.
Create a Docker container (d1) with 3 GB memory and 1 CPU
Create a Docker container (d2) with 5 GB memory and 3 CPUs
Create a Docker container (d3) with 2 GB memory and 2 CPUs
I have noticed that docker run takes an -m flag with which I can set the memory limit, and it also has --cpuset-cpus with which I can assign specific CPU cores to my container.
I was able to create d1 and d2 by using the above flags. While creating them I observed that I have to take care of core management myself, i.e. I have to maintain the assignment of cores to containers.
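For example, the creation of d1 and d2 looked roughly like this (the image name is a placeholder):
$ docker run -d --name d1 -m 3g --cpuset-cpus=0 my-image
$ docker run -d --name d2 -m 5g --cpuset-cpus=1,2,3 my-image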
Can I just give the number of cores and have the core assignment taken care of automatically?
After creating d1 and d2, since I have used up all the available memory and CPU sets, I should not be allowed to create further containers.
But if I try to create a container d4 with 3 GB memory, I am able to create it successfully.
How can I allocate memory to a specific container without sharing that memory with other containers?
Is there an existing solution that takes care of managing the memory and CPU cores assigned to each container? Or do I have to maintain this information myself and ensure that no new containers are created when all the memory or cores are used up?
Since Docker 1.13 you can use --cpus, which is probably what you're looking for. Pre-1.13 you had to use --cpu-period and --cpu-quota, which --cpus is actually a helper for.
To limit a container to 1 core use --cpus=1.0, for 2 cores --cpus=2.0, etc. Fractions are allowed as well. But note that neither --cpus nor --cpuset-cpus reserves these CPUs. They simply limit how much CPU (or which cores) can be used, but they cannot prevent other containers/processes from using the same resources.
The situation is similar with memory (--memory). You can declare a maximum limit and the container simply cannot allocate more than that limit. But it does not reserve or "pre-allocate" memory in any way.
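For example (the image name and values are illustrative):
Limit a container to 1.5 cores and 3 GB of memory:
$ docker run -d --cpus=1.5 --memory=3g my-image
Or pin it to specific cores instead:
$ docker run -d --cpuset-cpus=0,1 my-image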
If you're looking for a way to enforce that kind of behavior, you should look into container orchestration solutions like Kubernetes, OpenShift, Docker Swarm and so on. These solutions were created specifically to distribute containers across a cluster while respecting total cluster resources. You can then simply specify, for example, that container X should use at most 2 cores and 500 MB of memory and have 2 replicas. And if your cluster does not have enough resources, the container will simply be left pending.
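As an illustration with Docker Swarm, one of the orchestrators mentioned above (the service name, image name, and values are placeholders):
$ docker service create --name web --replicas 2 --reserve-cpu 2 --reserve-memory 500m --limit-cpu 2 --limit-memory 500m my-image
The scheduler only places a replica on a node that still has the reserved CPU and memory available; otherwise the task is left pending.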
Suppose there is a Docker container created with a fixed memory configuration. Can I release some memory from it on the fly? We can do this in LXC with a simple command: lxc-cgroup -n name memory.limit_in_bytes 256M.
You can't release container memory without either:
Restarting the container
Clearing the host's memory
This is because Docker containers are represented as processes on the host system. Therefore you would need to free the memory associated with the container process. As far as I understand, this is difficult, especially since processes may rely on shared memory structures, etc.
If you really need memory released, you can use the following commands to help clear the host's memory, assuming you are on Linux:
Flush data in memory buffers to disk:
$ sync
Clear the OS page cache (this needs root; with sudo the redirection has to run in a root shell):
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
A container on its own doesn't use much memory. Containers are very lightweight by design. Think of one as a fenced-off area on your host, just a bit of tape dividing your container from the rest of the host.
The kernel has to house the cgroups, namespaces, interfaces, devices, etc., but this is negligible.
Docker itself also introduces a small amount of overhead for its management of the container, but that is normally negligible (unless you, say, map a thousand ports into a container).
The main use of memory will be the processes that run in the container. A container won't automatically use 256 MB just because you set that as the upper limit; that is simply the limit beyond which allocations start to fail.
If you want to release used memory, either the process you are running in the container needs to have that ability, or you can restart the process using the memory.
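For example, to see how much memory a container is actually using and then reclaim it by restarting it (the container name is a placeholder):
$ docker stats --no-stream myapp
$ docker restart myapp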
My understanding, based on the fact that Docker is built on LXC, is that Docker containers share various resources of their host operating system. My concern is with CPU cores. Here is a scenario:
a host linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the Docker containers on that host, will they consume CPU/cores as needed, as if they were being run as normal installed applications on that host OS?
b) Will the Docker container consume its own process, and will all of the processing contained in it be stuck to that parent process's CPU core?
c) How can I specify that a Docker container should use a number of cores (4, for example)? I saw there is a -C flag that can point to a core ID, but it appears there is no option to have the container pick N cores at random.
Currently, I don't think Docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly 1, I'm not 100% sure about that).
However, you could tweak the conf file generated for a given container and set something like
cpuset {
    cpuset.cpus="0-3";
}
Things may have changed in the latest (few) versions. Nowadays you can constrain your Docker container with parameters to docker run.
The equivalent of the answer above in the newer Docker versions is:
docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this limits the container to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
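For example, relative weighting with CPU shares (the default weight is 1024; the image names are placeholders):
$ docker run -d --cpu-shares=1024 important-image
$ docker run -d --cpu-shares=512 background-image
Under contention the first container gets roughly twice the CPU time of the second, but when the host is idle either one can still use all cores.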
For more information see https://docs.docker.com/engine/reference/run/
From the Oracle documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.
The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.
As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
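Following that description, limiting a container to a quarter of one CPU would look something like this (the image name is a placeholder):
$ docker run -d --cpu-period=100000 --cpu-quota=25000 my-image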
So if I run all of the Docker containers on that host, will they consume CPU/cores as needed, as if they were being run as normal installed applications on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will the Docker container consume its own process, and will all of the processing contained in it be stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler (CFS) for sharing CPU resources among containers, so containers have configurable access to the CPU.
How can I specify that a Docker container should use a number of cores (4, for example)? I saw there is a -C flag that can point to a core ID, but it appears there is no option to have the container pick N cores at random.
It is highly configurable. There are several CPU options in Docker which you can combine.
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
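For example, the options can be combined; this restricts a container to cores 0-3 but lets it use the equivalent of at most two of them at any time (the image name is a placeholder):
$ docker run -d --cpuset-cpus=0-3 --cpus=2.0 my-image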