I am using Docker 17.03-ee.
I have to create Docker containers dynamically with a variable amount of memory and a variable number of CPUs, and this hardware should not be shared between the containers.
As an example, let us consider that I have:
A host with 8GB memory and 4 cores.
Create a docker container (d1) with 3GB memory and 1 CPU
Create a docker container (d2) with 5GB memory and 3 CPUs
Create a docker container (d3) with 2GB memory and 2 CPUs
I have noticed that docker run takes a -m flag with which I can set the memory limit, and it also has --cpuset-cpus with which I can assign specific CPU cores to my container.
I was able to create d1 and d2 by using the above flags. While creating them I observed that I have to take care of core management myself, i.e. I have to maintain the assignment of cores to containers.
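For reference, the commands I used for d1 and d2 looked roughly like this (the image name "myimage" is just a placeholder):
# "myimage" is a placeholder; the values match the example above
$ docker run -d -m 3g --cpuset-cpus="0" myimage      # d1: 3GB memory, core 0
$ docker run -d -m 5g --cpuset-cpus="1-3" myimage    # d2: 5GB memory, cores 1-3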
Can I just give the number of cores and have the core assignment taken care of automatically?
After creating d1 and d2, as I have used up all the available memory and CPU sets, I should not be allowed to create further containers.
But if I try to create a container d4 with 3GB memory, I am able to create it successfully.
How can I allocate memory to a specific container without sharing that memory with other containers?
Is there any already-built solution which takes care of managing the memory assigned to a container and the CPU cores assigned to a container? Or do I have to maintain this information myself and ensure that I do not create new containers when all the memory or cores are used up?
Since Docker 1.13 you can use --cpus, which is probably what you're looking for. Pre-1.13 you had to use --cpu-period and --cpu-quota, which --cpus is actually a helper for.
To limit a container to only use 1 core: --cpus=1.0, for 2 cores --cpus=2.0, etc. Fractions are allowed as well. But note that neither --cpus nor --cpuset-cpus reserves these CPUs. They simply limit how much of the CPU (or which cores) can be used, but they cannot disallow other containers/processes from using the same resources.
The situation is similar with memory (--memory). You can declare a max limit and the container simply cannot allocate more than that limit. But it does not reserve or "pre-allocate" memory in any way.
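As a minimal sketch (the image name is a placeholder), the d1 container from the question could be started like this; again, these are only caps, nothing is reserved:
$ docker run -d --cpus=1.0 --memory=3g myimage   # limit to 1 core and 3GB, no reservation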
If you're looking for a way to force that kind of behavior, you should look into container orchestration solutions such as Kubernetes, OpenShift, Docker Swarm and the like. These solutions were created specifically to distribute containers across a cluster while respecting total cluster resources. You can then simply specify that, for example, container X should use at most 2 cores and at most 500MB of memory and have 2 replicas. If your cluster does not have enough resources, the container will simply be left pending.
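For example, with Docker Swarm that could look roughly like this (service and image names are placeholders); the --reserve-* flags are what make the scheduler leave tasks pending when the cluster has no room left:
# --limit-* are hard caps per task, --reserve-* is what the scheduler accounts for
$ docker service create --name x --replicas 2 \
    --limit-cpu 2 --limit-memory 500M \
    --reserve-cpu 2 --reserve-memory 500M \
    myimage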
Related
I easily managed to do this on the desktop version of Docker via its preferences, but how can I do this from the console on a remote Linux server?
The limits you are configuring in the Docker Desktop UI are on the embedded Linux VM. All containers run within that VM, giving you an upper limit on the sum of all containers. To replicate this on a remote Linux server, you would set the physical hardware or VM constraints to match your limit.
For individual containers, you can specify the following:
--cpus to set the CPU quota allocated to the container's cgroup. This can be something like 2.5 to allocate up to 2.5 CPU threads to the container. Containers attempting to use more CPU will be throttled.
--memory or -m to set the memory limit in bytes. This is applied to the cgroup the container runs within. Containers attempting to exceed this limit will be killed.
Disk space for containers and images is controlled by the disk space available to /var/lib/docker for the default overlay2 graph driver. You can limit this by placing the directory under a different drive/partition with limited space. For volume mounts, disk space is limited by where the volume mount is sourced, and the default named volumes go back to /var/lib/docker.
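If you want to keep that data on a separate, size-limited partition, one option on reasonably recent Docker versions is pointing the daemon's data-root there (the path below is just an example, and the daemon needs a restart after the change):
$ cat /etc/docker/daemon.json
{
  "data-root": "/mnt/docker-data"
}
$ docker system df   # shows how much space images, containers and volumes currently use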
A single Docker container works well for a small number of parallel processes, but when we increase the number of parallel processes to 20-30, process execution slows down. The processes get slow, yet Docker is utilizing only 30-40% of the CPU.
I have tried the following things to make Docker utilize the CPU properly and not slow down the processes:
I have explicitly allocated CPU and RAM to the Docker container.
I have also increased the number of file descriptors, the number of processes and the stack size using ulimit.
Even after doing these two things, the container is still not utilizing the CPU properly. I am using docker exec to start multiple processes in a single running container. Is there any efficient way to use a single Docker container for executing multiple processes, or to make the container use 100% of the CPU?
The configuration I am using is:
Server - AWS EC2 t2.2xlarge (8 cores, 32 GB RAM)
Docker version - 18.09.7
OS - Ubuntu 18.04
When you run something on a machine it consumes the following resources: 1. CPU, 2. RAM, 3. disk I/O, 4. network bandwidth. If your container is exhausting any one of the resources listed above, it is possible that the other resources are still available. So monitor your system metrics to find the root cause.
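A minimal set of commands to watch those metrics while the 20-30 processes are running (iostat comes from the sysstat package):
$ docker stats    # per-container CPU, memory, network and block I/O
$ top             # overall host CPU usage and load
$ iostat -x 1     # disk utilization and wait times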
We can limit a container's disk bandwidth when creating it by using docker run --device-read-bps. Actually I'm using Kubernetes to create containers, and I want to make every container on my node use only 50MB/s of disk bandwidth.
Is there any way to configure the Docker daemon to do the same thing as docker run --device-read-bps?
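For reference, the plain docker run equivalent I mean is something like this (the device path is just an example for my node's disk):
$ docker run -it --device-read-bps /dev/sda:50mb ubuntu bash   # cap reads from /dev/sda at 50MB/s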
Kubernetes supports CPU and Memory limits, but as far as I know, it does not handle any disk quota or limits at this time.
In PersistentVolumes, you can specify a StorageClass, but that only seems to imply slow or fast disk (i.e. HDD or SSD) and does not have any bandwidth limitation.
So in short I do not think it is possible.
Suppose there is a Docker container created with a fixed memory configuration. Can I release some memory from it on the fly? In LXC we can do this with a simple command: lxc-cgroup -n name memory.limit_in_bytes 256M.
You can't release container memory without either:
Restarting the container
Clearing the host's memory
This is because Docker containers are represented as processes on the host system. Therefore you would need to free the memory associated with the container process. As far as I understand, this is difficult, especially since processes may rely on shared memory structures etc.
If you really need memory released, you can use the following commands to help clear the host's memory, assuming you are on Linux:
Flush data in memory buffers to disk:
$ sync
Clear OS page cache:
$ sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
A container on its own doesn't use much memory. Containers are very lightweight by design. Think of one as a fenced-off area on your host, just a bit of tape dividing your container from the rest of the host.
The kernel has to house the cgroups, namespaces, interfaces, devices etc., but this is negligible.
Docker itself will also introduce a small amount of overhead for its management of the container, but that is normally negligible (unless you, say, map a thousand ports into a container).
The main use of memory will be the processes that run in the container. A container won't use 256MB just because you set that as the upper limit; that is just the ceiling before allocations start failing.
If you want to release used memory, either the process you are running in the container needs to have that ability, or you can restart the process using the memory.
My understanding, based on the fact that Docker is built on LXC, is that Docker containers share various resources with their host operating system. My concern is with CPU cores. Here is a scenario:
a host linux OS has 8 cores
I have to deploy a set of docker containers on the host OS above.
Some of the docker containers that I need to deploy would be better suited to use 2 cores
a) So if I run all of the docker containers on that host, will they consume CPU/cores as needed like if they were being run as normal installed applications on that host OS ?
b) Will the docker container consume its own process and all of the processing that is contained in it will be stuck to that parent process's CPU core ?
c) How can I specify a docker container to use a number of cores ( 4 for example ). I saw there is a -C flag that can point to a core id, but it appears there is no option to specify the container to pick N cores at random.
Currently, I don't think docker provides this level of granularity. It doesn't specify how many cores it allocates in its lxc.conf files, so you will potentially get all cores for each container (or possibly 1, I'm not 100% sure on that).
However, you could tweak the conf file generated for a given container and set something like:
cpuset {
    cpuset.cpus = "0-3";
}
It might be that things have changed in the latest (few) versions. Nowadays you can constrain your Docker container with parameters to docker run.
The equivalent of the above in the newer Docker versions is:
docker run --cpuset-cpus="0-3" ubuntu /bin/echo 'Hello world'
However, this will limit the container to these CPUs, but (please correct me if I am wrong) other containers could also request the same set.
A possibly better way would be to use CPU shares.
For more information see https://docs.docker.com/engine/reference/run/
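A rough sketch of the CPU shares approach (image names are placeholders); shares are relative weights that only matter when containers compete for CPU, they don't pin or reserve cores:
# container a gets twice the weight of container b under contention (default share is 1024)
$ docker run -d --cpu-shares=1024 image_a
$ docker run -d --cpu-shares=512 image_b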
From the Oracle documentation:
To control a container's CPU usage, you can use the --cpu-period and --cpu-quota options with the docker create and docker run commands from version 1.7.0 of Docker onward.
The --cpu-quota option specifies the number of microseconds that a container has access to CPU resources during a period specified by --cpu-period.
As the default value of --cpu-period is 100000, setting the value of --cpu-quota to 25000 limits a container to 25% of the CPU resources. By default, a container can use all available CPU resources, which corresponds to a --cpu-quota value of -1.
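So the 25% example from that quote would look roughly like this (the image name is a placeholder):
$ docker run -d --cpu-period=100000 --cpu-quota=25000 myimage   # 25000/100000 = 25% of one CPU per period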
So if I run all of the docker containers on that host, will they consume CPU/cores as needed like if they were being run as normal installed applications on that host OS?
Yes.
CPU
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles.
Will the docker container consume its own process and all of the processing that is contained in it will be stuck to that parent process's CPU core?
Nope.
Docker uses the Completely Fair Scheduler (CFS) for sharing CPU resources among containers, so containers have configurable access to CPU.
How can I specify a docker container to use a number of cores ( 4 for example ). I saw there is a -C flag that can point to a core id, but it appears there is no option to specify the container to pick N cores at random.
It is highly configurable. There are more CPU options in Docker which you can combine.
--cpus= Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
--cpuset-cpus Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
And more...
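As a small combined sketch (the image name is a placeholder), limiting a container to at most 1.5 CPUs worth of time and only on cores 0-3:
$ docker run -d --cpus=1.5 --cpuset-cpus="0-3" myimage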