Mixing cpu-shares and cpuset-cpus in Docker

I would like to run two containers with the following resource allocation:
Container "C1": reserved cpu1, shared cpu2 with 20 cpu-shares
Container "C2": reserved cpu3, shared cpu2 with 80 cpu-shares
If I run the two containers in this way:
docker run -d --name='C1' --cpu-shares=20 --cpuset-cpus="1,2" progrium/stress --cpu 2
docker run -d --name='C2' --cpu-shares=80 --cpuset-cpus="2,3" progrium/stress --cpu 2
I find that C1 takes 100% of cpu1 as expected, but only 50% of cpu2 (instead of 20%); C2 takes 100% of cpu3 as expected, and 50% of cpu2 (instead of 80%).
It looks like the --cpu-shares option is ignored.
Is there a way to obtain the behavior I'm looking for?

The docker run documentation mentions that parameter as:
--cpu-shares=0 CPU shares (relative weight)
And contrib/completion/zsh/_docker#L452 includes:
"($help)--cpu-shares=[CPU shares (relative weight)]:CPU shares:(0 10 100 200 500 800 1000)"
So those values are not %-based.
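The relative weight only shows up when the containers actually contend for the same CPUs. As a minimal sketch (the container names here are mine), pinning both containers to cpu2 alone should make the 20/80 split visible:
docker run -d --name shares20 --cpu-shares=20 --cpuset-cpus="2" progrium/stress --cpu 1
docker run -d --name shares80 --cpu-shares=80 --cpuset-cpus="2" progrium/stress --cpu 1
docker stats shares20 shares80   # under contention, roughly 20% vs 80% of cpu2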
The OP mentions that --cpu-shares=20/80 works with the following cpuset constraints:
docker run -ti --cpuset-cpus="0,1" C1 # instead of 1,2
docker run -ti --cpuset-cpus="3,4" C2 # instead of 2,3
(those values are validated/checked only since docker 1.9.1 with PR 16159)
Note: there is also CPU quota constraint:
The --cpu-quota flag limits the container’s CPU usage. The default 0 value allows the container to take 100% of a CPU resource (1 CPU).
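For instance (the container name and figures are my own, assuming the default 100000µs CFS period), a hard 20% cap on a single CPU would look like:
docker run -d --name capped --cpu-period=100000 --cpu-quota=20000 progrium/stress --cpu 1
docker stats capped   # CPU % should hover around 20%, with or without other load on the host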

Related

What is the real memory available in Docker container?

I've run a mongodb service via docker-compose like this:
version: '2'
services:
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    mem_limit: 4GB
If I run docker stats I can see 4 GB allocated:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
cf3ccbd17464 michal_mongo_1 0.64% 165.2MiB / 4GiB 4.03% 10.9kB / 4.35kB 0B / 483kB 35
But if I run this command, I get the RAM of my laptop, which is 32 GB:
~$ docker exec michal_mongo_1 free -g
total used free shared buff/cache available
Mem: 31 4 17 0 9 24
Swap: 1 0 1
How does mem_limit affect the memory size then?
free (and other utilities like top) will not report correct numbers inside a memory-constrained container, because they gather their information from /proc/meminfo, which is not namespaced.
If you want the actual limit, you must read the entries populated by the cgroup pseudo-filesystem under /sys/fs/cgroup.
For example:
docker run --rm -i --memory=128m busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes
The real-time usage information is available under /sys/fs/cgroup/memory/memory.stat.
You will probably need the resident-set-size (rss), for example (inside the container):
grep -E -e '^rss\s+' /sys/fs/cgroup/memory/memory.stat
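On hosts running cgroup v2 (my addition; the paths above assume cgroup v1), the equivalent files in the unified hierarchy are memory.max and memory.current:
docker run --rm --memory=128m busybox cat /sys/fs/cgroup/memory.max       # 134217728
docker run --rm --memory=128m busybox cat /sys/fs/cgroup/memory.current   # current usage in bytes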
For a more in-depth explanation, see also this article.

Docker container update --memory didn't work as expected

Good morning all,
In the process of trying to train myself in Docker, I'm having trouble.
I created a docker container from a wordpress image, via docker compose.
[root@vps672971 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57bb123aa365 wordpress:latest "docker-entrypoint.s…" 16 hours ago Up 2 0.0.0.0:8001->80/tcp royal-by-jds-wordpress-container
I would like to allocate more memory to this container; however, after executing the following command, the information returned by docker stats is not correct.
docker container update --memory 3GB --memory-swap 4GB royal-by-jds-wordpress-container
docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
57bb123aa365 royal-by-jds-wordpress-container 0.01% 9.895MiB / 1.896GiB 0.51% 2.68kB / 0B 0B / 0B 6
I also tried to query the Docker Engine API to retrieve information about my container, but the limit displayed is not correct either.
curl --unix-socket /var/run/docker.sock http:/v1.21/containers/royal-by-jds-wordpress-container/stats
[...]
"memory_stats":{
"usage":12943360,
"max_usage":12955648,
"stats":{},
"limit":2035564544
},
[...]
It seems that the modification of the memory allocated to the container didn't work.
Anyone have an idea?
Thank you in advance.
Maxence
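A quick cross-check to add here (not part of the original post): the limits the daemon has actually recorded can be read back with docker inspect, in bytes:
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' royal-by-jds-wordpress-container
# 3221225472 4294967296 would confirm that the 3GB/4GB update was applied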

Make docker build --memory-swap=20g use the available swap space?

I have run free -h and see that I have 29G of swap space.
total used free shared buff/cache available
Mem: 15G 6.9G 8.8G 17M 223M 8.9G
Swap: 29G 2.0M 29G
I have also set swappiness to 100.
$ sudo sysctl vm.swappiness=100
vm.swappiness = 100
$ cat /proc/sys/vm/swappiness
100
However, docker build --memory-swap=20g does not appear to use the swap space. This is the output of htop throughout the docker build.
1 [|||||||||||||||| 18.7%]
2 [||||||| 7.3%]
3 [|||||||||||||||||||||| 26.5%]
4 [||||||||||||||| 18.0%]
Mem[||||||||||||||||||||||||||||||||||| 6.47G/15.9G]
Swp[| 2.00M/29.6G]
This is the docker build command:
docker build --build-arg NODE_OPTIONS="--max-old-space-size=325" \
--memory=600m --memory-swap=20g \
--cpu-period=100000 --cpu-quota=50000 \
--no-cache --tag farm_app_image:latest --file Dockerfile .
The docker build appears to be running out of RAM, because the build's internal process (NodeJS) runs out of heap space and crashes. Also, immediately before the crash the memory is maxed:
shaun#DESKTOP-5T629JB:/mnt/c/Users/bigfo$ docker ps -q | xargs docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
66bdf8efb492 charming_maxwell 51.72% 562.2MiB / 600MiB 93.70% 46.8MB / 1.53MB 277MB / 230MB 94
Why is it running out of RAM without using the swap space? How can we make it use the available swap space?
Maybe you should try to run it with the --privileged flag.
docker run -ti --privileged yourimage
But make sure that you know what you are doing.
You should also read docker-tips-privilaged-flag
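As a side check (my own sketch, not part of the answer above; it assumes the host has swap accounting enabled), the same memory/swap ceiling is easier to observe with docker run, where an allocation larger than --memory should spill into swap instead of being OOM-killed:
docker run --rm --memory=600m --memory-swap=20g progrium/stress --vm 1 --vm-bytes 1g --vm-hang 0
# watch htop's Swp line (or docker stats) while this runs; usage beyond 600m should land in swap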

How to remove legacy link between docker containers

My predecessor created 2 docker containers and linked them together using the --link option.
Now I have 1 live container that I want to continue using and the other is of no use. However, when I try to start one of them, I get
[keith#docker ~]$ sudo docker start ABC
Error response from daemon: Cannot link to a non running container: /XYZ AS
/ABC/XYZ
Error: failed to start containers: ABC
No help from here https://forums.docker.com/t/how-can-i-remove-the-link-between-a-deleted-container-and-a-live-container/40431
Thanks in advance!
Docker has an update command which can be used to update the settings of an existing container:
$ docker update --help
Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]
Update configuration of one or more containers
Options:
      --blkio-weight uint16        Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
      --cpu-period int             Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int              Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int          Limit the CPU real-time period in microseconds
      --cpu-rt-runtime int         Limit the CPU real-time runtime in microseconds
  -c, --cpu-shares int             CPU shares (relative weight)
      --cpus decimal               Number of CPUs
      --cpuset-cpus string         CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string         MEMs in which to allow execution (0-3, 0,1)
      --kernel-memory bytes        Kernel memory limit
  -m, --memory bytes               Memory limit
      --memory-reservation bytes   Memory soft limit
      --memory-swap bytes          Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --restart string             Restart policy to apply when a container exits
But as you can see, you can't add or remove a link. You need to run a new container instead. So, in short, what you are looking for is not possible.
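If it helps, a rough sketch of the usual workaround (container names follow the question; any ports, volumes or environment of the original container are assumptions you would have to re-specify yourself):
docker commit ABC abc-snapshot          # snapshot the filesystem of the container that still carries the legacy link
docker rm ABC                           # remove the container whose config references /XYZ
docker run -d --name ABC abc-snapshot   # start a replacement without any --link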

How to check the number of cores used by docker container?

I have been working with Docker for a while now. I have installed Docker and launched a container using
docker run -it --cpuset-cpus=0 ubuntu
When I log into the docker console and run
grep processor /proc/cpuinfo | wc -l
It shows 3, which is the number of cores I have on my host machine.
Any idea how to restrict the resources available to the container, and how to verify the restriction?
The issue has already been raised in #20770. The file /sys/fs/cgroup/cpuset/cpuset.cpus reflects the correct output.
The cpuset-cpus setting is taking effect; it is just not reflected in /proc/cpuinfo.
docker inspect <container_name>
will give the details of the launched container; check the "CpusetCpus" field there and you will find the value you set.
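For a quicker check (my own variant of the same idea), that field can be pulled out directly with a Go-template filter:
docker inspect --format '{{.HostConfig.CpusetCpus}}' <container_name>   # prints e.g. 0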
Containers aren't complete virtual machines. Some kernel resources will still appear as they do on the host.
In this case, --cpuset-cpus=0 modifies the resources the container's cgroup has access to, which is visible in /sys/fs/cgroup/cpuset/cpuset.cpus, not in what the container sees in /proc/cpuinfo.
One way to verify is to run the stress-ng tool in a container:
Using 1 CPU, the container will be pinned to 1 core (1 / 3 cores in use, 100% or 33% depending on what tool you use):
docker run --cpuset-cpus=0 deployable/stress -c 3
This will use 2 cores (2 / 3 cores, 200%/66%):
docker run --cpuset-cpus=0,2 deployable/stress -c 3
This will use 3 cores (3 / 3 cores, 300%/100%):
docker run deployable/stress -c 3
Memory limits are another area that doesn't appear in the kernel's stats:
$ docker run -m 64M busybox free -m
total used free shared buffers cached
Mem: 3443 2500 943 173 261 1858
-/+ buffers/cache: 379 3063
Swap: 1023 0 1023
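A quick way to see the limit that free hides (my sketch, using the same cgroup-v1 path as the earlier answer):
docker run --rm -m 64M busybox cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # prints 67108864 (64 MiB)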
The answer by yamaneks includes the GitHub issue.
It should be in double quotes, --cpuset-cpus=""; for example, --cpuset-cpus="0" means the container makes use of cpu0.
