I can run docker run with --blkio-weight --blkio-weight-device --device-read-bps --device-read-iops --device-write-bps --device-write-iops.
The docker update command dynamically updates container configuration.
However, I can only update the --blkio-weight arg.
Why does docker update not support the other args, like --blkio-weight-device?
Is there a way to do that, such as modifying the cgroup parameters manually?
Update:
I tried editing the cgroup file /sys/fs/cgroup/blkio/.../blkio.throttle.write_bps_device manually and it works. It seems the blkio parameters can be modified dynamically after all.
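For reference, a minimal sketch of that manual cgroup edit, assuming cgroup v1 with the cgroupfs driver, a container named mycontainer (hypothetical), and /dev/sda with major:minor 8:0 — all of which may differ on your host:

```shell
# Resolve the full container ID (the name "mycontainer" is hypothetical).
CID=$(docker inspect --format '{{.Id}}' mycontainer)

# Throttle writes to /dev/sda (major:minor 8:0, check with `ls -l /dev/sda`)
# to 10 MB/s for this container. Requires root; path assumes the cgroupfs
# driver — under the systemd driver the hierarchy is laid out differently.
echo "8:0 $((10 * 1024 * 1024))" \
  > /sys/fs/cgroup/blkio/docker/$CID/blkio.throttle.write_bps_device

# Verify the limit took effect.
cat /sys/fs/cgroup/blkio/docker/$CID/blkio.throttle.write_bps_device
```

Note that such a hand-edited limit lives only in the kernel: Docker does not know about it, so it is lost when the container restarts.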
Because that is the only blkio-related flag the docker update command supports:
--blkio-weight Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
The docker update command dynamically updates container configuration.
You can use this command to prevent containers from consuming too many
resources from their Docker host. With a single command, you can place
limits on a single container or on many. To specify more than one
container, provide space-separated list of container names or IDs.
These are the supported flags:
--blkio-weight          Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
--cpu-period            Limit CPU CFS (Completely Fair Scheduler) period
--cpu-quota             Limit CPU CFS (Completely Fair Scheduler) quota
--cpu-rt-period         (API 1.25+) Limit the CPU real-time period in microseconds
--cpu-rt-runtime        (API 1.25+) Limit the CPU real-time runtime in microseconds
--cpu-shares, -c        CPU shares (relative weight)
--cpus                  (API 1.29+) Number of CPUs
--cpuset-cpus           CPUs in which to allow execution (0-3, 0,1)
--cpuset-mems           MEMs in which to allow execution (0-3, 0,1)
--kernel-memory         Kernel memory limit
--memory, -m            Memory limit
--memory-reservation    Memory soft limit
--memory-swap           Swap limit equal to memory plus swap: '-1' to enable unlimited swap
--pids-limit            (API 1.40+) Tune container pids limit (set -1 for unlimited)
--restart               Restart policy to apply when a container exits
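For example, the supported flags can be applied to one or several running containers at once (the container names web, db and cache are hypothetical):

```shell
# Raise the memory limit and CPU allowance of a single running container.
docker update --memory 1g --memory-swap 2g --cpus 1.5 web

# Apply the same restart policy to several containers in one command.
docker update --restart unless-stopped web db cache
```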
Related
I want multiple running containers to have mutually exclusive resources with each other. For example, when there are CPU cores from id0 to id63, if 32 CPU cores are allocated to each container, the CPU cores assigned to them are mutually exclusive. In addition, when the host has 16GB of RAM, we want to allocate 8GB to each container so that one container does not affect the memory usage of another container.
Is there a good way to do this?
I think all you need is to limit container resources. That way you can ensure that no container uses more than X cores and/or Y RAM. To limit CPU usage to one core, add --cpus=1.0 to your docker run command. To limit RAM to 8 gigabytes, add -m=8g. Putting it all together:
docker run --rm --cpus=1 -m=8g debian:buster cat /dev/stdout
And if you look at docker stats you will see that memory is limited (there is no indication for CPU, though):
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8d9a33b00950 funny_shirley 0.00% 1MiB / 8GiB 0.10% 6.7kB / 0B 0B / 0B 1
Read more in the docs.
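For the mutual-exclusion part of the question, note that --cpus only caps total CPU time; it does not pin a container to particular cores. A sketch that gives each container a disjoint core set plus a hard memory limit, assuming a 64-core host (the image name myapp is hypothetical):

```shell
# Pin each container to its own half of the cores and cap its memory,
# so neither container can touch the other's CPUs or RAM.
docker run -d --name c1 --cpuset-cpus=0-31  -m 8g myapp
docker run -d --name c2 --cpuset-cpus=32-63 -m 8g myapp
```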
Here's the scenario:
On a Debian GNU/Linux 9 (stretch) VM I have two containers running. The day before yesterday I got a warning from monitoring that memory usage was relatively high. Looking at the VM showed that it was not the containers but the Docker daemon itself that was consuming the memory. htop
After a restart of the service I noticed another increase in memory demand after two days. See graphic.
RAM + Swap overview
Is there a known memory leak for this version?
Docker version
Memory development (container) after 2 days:
Container 1 is unchanged
Container 2 increased from 21.02MiB to 55MiB
Memory development (VM) after 2 days:
The MEM increased on the machine from 273M (after reboot) to 501M
dockerd
- after restart: 1.3% MEM
- 2 days later: 6.0% MEM
Monitor your containers to see if their memory usage changes over time:
> docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
623104d00e43 hq 0.09% 81.16MiB / 15.55GiB 0.51% 6.05kB / 0B 25.5MB / 90.1kB 3
We saw a similar issue and it seems to have been related to the gcplogs logging driver. We saw the problem on docker 19.03.6 and 19.03.9 (the most up-to-date that we can easily use).
Switching back to using a log forwarding container (e.g. logspout) resolved the issue for us.
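If you want to test whether the logging driver is the culprit on your own host, one option (a sketch; the image name my-image is hypothetical) is to run a suspect container with the default json-file driver and a capped log size instead:

```shell
# Override the daemon-wide logging driver for one container only,
# keeping at most 3 rotated files of 10 MB each.
docker run -d \
  --log-driver json-file \
  --log-opt max-size=10m --log-opt max-file=3 \
  my-image
```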
My predecessor created 2 docker containers and linked them together using the --link option.
Now I have 1 live container that I want to continue using and the other is of no use. However, when I try to start one of them, I get
[keith#docker ~]$ sudo docker start ABC
Error response from daemon: Cannot link to a non running container: /XYZ AS /ABC/XYZ
Error: failed to start containers: ABC
No help from here https://forums.docker.com/t/how-can-i-remove-the-link-between-a-deleted-container-and-a-live-container/40431
Thanks in advance!
Docker has an update command which can be used to update the settings of an existing container:
$ docker update --help
Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]
Update configuration of one or more containers
Options:
      --blkio-weight uint16       Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
      --cpu-period int            Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int             Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int         Limit the CPU real-time period in microseconds
      --cpu-rt-runtime int        Limit the CPU real-time runtime in microseconds
  -c, --cpu-shares int            CPU shares (relative weight)
      --cpus decimal              Number of CPUs
      --cpuset-cpus string        CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string        MEMs in which to allow execution (0-3, 0,1)
      --kernel-memory bytes       Kernel memory limit
  -m, --memory bytes              Memory limit
      --memory-reservation bytes  Memory soft limit
      --memory-swap bytes         Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --restart string            Restart policy to apply when a container exits
But as you can see, you can't add or remove a link. You would need to run a new container instead. So, in short, what you are looking for is not possible.
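If you need to keep ABC's current state while dropping the stale link, one workaround is to snapshot the container and recreate it without --link. This is only a sketch: copy the real volumes, ports and environment from `docker inspect ABC` into the new docker run before removing the old container.

```shell
# Freeze the container's filesystem state into a local image.
docker commit ABC abc-snapshot

# Remove the old container that still carries the broken --link.
docker rm ABC

# Recreate it from the snapshot, this time without any --link flag
# (re-add your real -v/-p/-e options here).
docker run -d --name ABC abc-snapshot
```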
I'm trying to limit my container so that it doesn't take up all the RAM on the host. From the Docker docs I understand that --memory limits the RAM and --memory-swap limits (RAM+swap). From the docker-compose docs it looks like the corresponding keys are mem_limit and memswap_limit, so I've constructed the following docker-compose file:
> cat docker-compose.yml
version: "2"
services:
  stress:
    image: progrium/stress
    command: '-m 1 --vm-bytes 15G --vm-hang 0 --timeout 10s'
    mem_limit: 1g
    memswap_limit: 2g
The progrium/stress image just runs stress, which in this case spawns a single thread which requests 15GB RAM and holds on to it for 10 seconds.
I'd expect this to crash, since 15>2. (It does crash if I ask for more RAM than the host has.)
The kernel has cgroups enabled, and docker stats shows that the limit is being recognised:
> docker stats
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
7624a9605c70 0.00% 1024MiB / 1GiB 99.99% 396B / 0B 172kB / 0B 2
So what's going on? How do I actually limit the container?
Update:
Watching free, it looks like the RAM usage is effectively limited (only 1GB of RAM is used) but the swap is not: the container gradually increases swap usage until it has eaten through all of the swap and stress crashes (it takes about 20 seconds to get through 5GB of swap on my machine).
Update 2:
Setting mem_swappiness: 0 causes an immediate crash when requesting more memory than mem_limit, regardless of memswap_limit.
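For anyone testing outside of compose, the equivalent docker run flag is --memory-swappiness (a sketch reusing the same stress image and limits):

```shell
# With swappiness 0 the container cannot push pages to swap, so the
# kernel OOM-kills stress as soon as it exceeds the 1g memory limit.
docker run --rm -m 1g --memory-swap 2g --memory-swappiness 0 \
  progrium/stress -m 1 --vm-bytes 15G --vm-hang 0 --timeout 10s
```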
Running docker info shows WARNING: No swap limit support
According to https://docs.docker.com/engine/installation/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities this is disabled by default ("Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation.") You can enable it by editing the /etc/default/grub file:
Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
then update GRUB with update-grub and reboot.
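A sketch of those steps on a Debian/Ubuntu-style system. The sed expression assumes GRUB_CMDLINE_LINUX starts as shipped; review /etc/default/grub yourself before rebooting.

```shell
# Prepend the two key-value pairs to the existing GRUB_CMDLINE_LINUX value.
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&cgroup_enable=memory swapaccount=1 /' /etc/default/grub

# Regenerate the GRUB config and reboot so the kernel picks up the flags.
sudo update-grub
sudo reboot

# After the reboot, `docker info` should no longer print
# "WARNING: No swap limit support".
```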
I would like to run two containers with the following resource allocation:
Container "C1": reserved cpu1, shared cpu2 with 20 cpu-shares
Container "C2": reserved cpu3, shared cpu2 with 80 cpu-shares
If I run the two containers in this way:
docker run -d --name='C1' --cpu-shares=20 --cpuset-cpus="1,2" progrium/stress --cpu 2
docker run -d --name='C2' --cpu-shares=80 --cpuset-cpus="2,3" progrium/stress --cpu 2
I observe that C1 takes 100% of cpu1 as expected but 50% of cpu2 (instead of 20%), and C2 takes 100% of cpu3 as expected and 50% of cpu2 (instead of 80%).
It looks like the --cpu-shares option is ignored.
Is there a way to obtain the behavior I'm looking for?
The docker run documentation mentions that parameter as:
--cpu-shares=0 CPU shares (relative weight)
And contrib/completion/zsh/_docker#L452 includes:
"($help)--cpu-shares=[CPU shares (relative weight)]:CPU shares:(0 10 100 200 500 800 1000)"
So those values are not %-based.
The OP mentions that --cpu-shares=20/80 works with the following cpuset constraints:
docker run -ti --cpuset-cpus="0,1" C1 # instead of 1,2
docker run -ti --cpuset-cpus="3,4" C2 # instead of 2,3
(those values are validated/checked only since docker 1.9.1 with PR 16159)
Note: there is also CPU quota constraint:
The --cpu-quota flag limits the container’s CPU usage. The default 0 value allows the container to take 100% of a CPU resource (1 CPU).
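So if you need a hard per-container ceiling rather than a relative weight (shares only matter under contention), a quota is one option. A sketch reusing the stress image from the question:

```shell
# 50000us of CPU time per 100000us period = a hard cap of half a CPU
# for this container, regardless of what other containers are doing.
docker run -d --name capped --cpu-period=100000 --cpu-quota=50000 \
  progrium/stress --cpu 1
```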