I am learning Docker and read that Docker uses cgroups to limit resource usage, but I could not find out how a cgroup actually does the resource control. How does a cgroup control resource usage? What algorithm does it use? I have tried searching the internet but nobody talks about it.
Related
I have a docker container running multiple processes started via docker-compose with
mem_limit: 200m
memswap_limit: 200m
When the memory limit is reached, Docker kills a random process, but I want the entire container (with all of its processes) to be terminated.
Is there a configuration option for that?
Edited / Additional information:
The container runs a multi-threaded python application with each thread running in its own process.
If the container is killed by docker from hitting the cgroup limit, you can inspect the killed container and should see OOMKilled set to true:
$ docker container inspect --format '{{.State.OOMKilled}}' $container_name
false
If it's false like above, then most likely the kernel killed a process based on the OS running out of memory rather than the cgroup limit being reached. You should see that in the kernel logs (/var/log/messages or /var/log/syslog usually). The lines would look like:
[11686.043641] Out of memory: Kill process 2603 (flasherav) score 761 or sacrifice child
[11686.043647] Killed process 2603 (flasherav) total-vm:1498536kB, anon-rss:721784kB, file-rss:4228kB
If the OS is killing processes, that's a sign you either need to reduce the workload on the host, tighten the cgroup limit on the container, or increase the memory available to the host (larger VM or adding RAM to the machine). If you reach the cgroup limit set on the container, docker should terminate the entire container.
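For example, a quick way to tell the two cases apart (substitute your own container name; the log path depends on the distribution):
$ docker container inspect --format 'OOMKilled={{.State.OOMKilled}} ExitCode={{.State.ExitCode}}' $container_name
$ sudo grep -i 'out of memory' /var/log/syslog   # use /var/log/messages on RHEL/CentOS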
I easily managed to do this on the desktop version of Docker via its preferences, but how can I do this from the console on a remote Linux server?
The limits you are configuring in the Docker Desktop UI are on the embedded Linux VM. All containers run within that VM, giving you an upper limit on the sum of all containers. To replicate this on a remote Linux server, you would set the physical hardware or VM constraints to match your limit.
For individual containers, you can specify the following (an example command follows the list):
--cpus to set the CPU quota allocated to the container's cgroup. This can be something like 2.5 to allocate up to 2.5 CPU threads to the container. Containers attempting to use more CPU will be throttled.
--memory or -m to set the memory limit in bytes. This is applied to the cgroup the container runs within. Containers attempting to exceed this limit will be killed.
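For example, a run command combining both flags might look like this (a sketch; the image name and values are placeholders):
$ docker run -d --cpus 2.5 --memory 512m my_image
Exceeding the CPU figure throttles the container; exceeding the memory figure triggers the OOM killer within its cgroup.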
Disk space for containers and images is controlled by the disk space available to /var/lib/docker for the default overlay2 graph driver. You can limit this by placing the directory under a different drive/partition with limited space. For volume mounts, disk space is limited by where the volume mount is sourced, and the default named volumes go back to /var/lib/docker.
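To see how much of that space is currently in use (assuming the default /var/lib/docker location):
$ docker system df
$ df -h /var/lib/docker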
I've been looking all over the web and I can't seem to find a solution to this problem. The storage driver for my docker instance is overlay2 and I need to increase the default storage space for a new container.
The only thing I've been able to find is to use the --storage-opt flag, but per their documentation...
For the overlay2 storage driver, the size option is only available if
the backing fs is xfs and mounted with the pquota mount option
My current backing filesystem is extfs, so this option does not work. Does anyone know how I would go about doing this on my Mac?
The overlay2 driver does not generally apply any space limitations to new containers. A container may use as much storage as it requires, up to the limits of the underlying filesystem (typically, the one backing /var/lib/docker).
As the documentation you quoted notes, the only mechanism by which the overlay2 driver can enforce container storage limits is if you are (a) using XFS and (b) have enabled pquota (project quota) support.
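If you are on such a system, the per-container limit is passed at run time; this is only a sketch (my_image is a placeholder), and the option is rejected on any other backing filesystem:
$ docker run --storage-opt size=20G my_image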
If you're on a Mac, your container storage is limited by the size of the virtual disk attached to the Linux VM on which Docker is actually running. If you're running out of space, you can probably resize the disk to provide additional container storage space.
Does a Docker container get the same bandwidth as the host? Or do we need to configure a min and/or max? I've noticed that we need to override the default RAM (which is 2 GB) and swap space configuration if we need to run CPU-intensive jobs.
Also, do we need to configure the disk space? Or does it by default get as much space as the actual hard disk?
Memory and CPU are controlled by docker using cgroups. If you do not configure these, the container is unrestricted and can use all of the memory and CPU on the docker host. If you run in a VM, which includes all Docker Desktop installs, then you will be limited to that VM's resources.
Disk space is usually limited to the disk space available in /var/lib/docker. For that reason, many people make this a separate mount. If you use devicemapper for docker's graph driver (this has been largely deprecated), it creates preallocated blocks of disk space, and you can control that block size. You can restrict containers by running them with read-only root filesystems and mounting volumes into the container that have limited disk space. I've seen this done with loopback device mounts, but it requires some configuration outside of docker to set up the loopback device. With a VM, you will again be limited by the disk space allocated to that VM.
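As a rough illustration of the read-only approach mentioned above (my_image is a placeholder; note a tmpfs is memory-backed, so its size option caps RAM use for that mount rather than disk):
$ docker run --read-only --tmpfs /tmp:rw,size=64m my_image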
Network bandwidth is by default unlimited. I have seen an interesting project called docker-tc which monitors containers for their labels and updates bandwidth settings for a container using tc (traffic control).
Does a Docker container get the same bandwidth as the host?
Yes. There is no limit imposed on network utilization. You could maybe impose limits using a bridge network.
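Docker itself doesn't expose a bandwidth flag; if you do need a cap, one approach is traffic shaping on the host with tc. A rough sketch against the default docker0 bridge (requires root, affects every container on that bridge, and is configured entirely outside of docker):
$ sudo tc qdisc add dev docker0 root tbf rate 10mbit burst 32kbit latency 400ms
$ sudo tc qdisc del dev docker0 root    # remove the limit again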
Also, do we need to configure the disk space? Or does it by default get as much space as the actual hard disk?
It depends on which storage driver you're using, because each has its own options. For example, devicemapper gives each container 10G by default but can be configured to use more. The recommended driver now is overlay2. To configure a size there, start the daemon with the overlay2.size storage option (note this only works when the backing filesystem is xfs mounted with pquota).
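The daemon-level option would look roughly like this (a sketch; it only takes effect on an xfs backing filesystem mounted with pquota, and in practice it is usually set in the daemon configuration rather than on the command line):
$ dockerd --storage-opt overlay2.size=20G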
This depends somewhat on what your host system is and how old it is.
In all cases network bandwidth isn't explicitly limited or allocated between the host and containers; a container can do as much network I/O as it wants up to the host's limitations.
On current native Linux there isn't a desktop application and docker info will say something like Storage driver: overlay2 (overlay and aufs are good here too). There are no special limitations on memory, CPU, or disk usage; in all cases a container can use up to the full physical host resources, unless limited with a docker run option.
On older native Linux there isn't a desktop application and docker info says Storage driver: devicemapper. (Consider upgrading your host!) All containers and images are stored in a separate filesystem and the size of that is limited (it is included in the docker info output); named volumes and host bind mounts live outside this space. Again, memory and CPU are not intrinsically limited.
Docker Toolbox and Docker for Mac both use virtual machines to provide a Linux kernel to non-Linux hosts. If you see a "memory" slider you are probably using a solution like this. Disk use for containers, images, and named volumes is limited to the VM capacity, along with memory and CPU. Host bind mounts generally get passed through to the host system.
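If you're not sure which of these cases applies, both the storage driver and the host OS show up in the docker info output:
$ docker info | grep -E 'Storage Driver|Operating System'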
CPU usage on our metrics box is intermittently at 100%, causing:
'Internal server error' when rendering Grafana dashboards
The only application running on our machine is Docker with three containers:
cadvisor
graphite
grafana
Machine spec
OS Version: Ubuntu 16.04 LTS
Release: 16.04 (xenial)
Kernel Version: 4.4.0-103-generic
Docker Version: 17.09.0-ce
CPU: 4 cores
Memory: 4096 MB
Memory reservation: unlimited
Network adapter: mgnt
Storage
Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Memory swap limit: 2.00 GB
Here is a snippet from cAdvisor:
The kworker and ksoftirqd processes constantly change status from 'D' to 'R' to 'S'.
Are the machine specs correct for this setup?
How can I get the CPU usage to 'normal' levels?
By default, a Docker container (just like any other process on the host) has access to all memory and CPU resources of the machine.
Docker provides options to limit a container's resource consumption. You can check the documentation dedicated to Limiting a container's resources.
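For example, you could cap the CPU available to each of the three containers, either at docker run time or on the already-running containers (a sketch; the container names are placeholders, and --cpus on docker update assumes a reasonably recent docker version):
$ docker update --cpus 1 cadvisor
$ docker update --cpus 1 graphite
$ docker update --cpus 1 grafana
The same cap can be given at start time with docker run --cpus, as shown earlier.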