How to specify the number of CPU a container will use? - docker

I am working on Windows Server 2019 and trying to run a CentOS Docker container on it. I am running the following:
PS C:\Windows\system32> docker run -dit --name=testing23 --cpu-shares=12 raycentos:1.0
6a3ffb86c1d9509a9d80f0de54fc6acf5731ca645ee74e6aabe41d7153b3af70
PS C:\Windows\system32> docker exec -it 6a3ffb86c1d9509a9d80f0de54fc6acf5731ca645ee74e6aabe41d7153b3af70 bash
(app-root) bash-4.2# nproc
2
It still reports only 2 CPUs, not 32. How can we assign more CPUs to the container?

Refer to this page for more details: https://docs.docker.com/config/containers/resource_constraints/#cpu
Note that --cpu-shares is only a relative weight; it does not cap or expose CPUs. You have to pass the values with the proper flags. Try:
--cpus=<value> to set the maximum CPU resources a container can use
--cpuset-cpus=<value> to pin the container to specific CPUs (e.g. --cpuset-cpus=0-11 for the first twelve)

By default, all host CPUs can be used, or you can limit it per container using the --cpuset-cpus parameter.
docker run --cpuset-cpus="0-2" myapplication:latest
That would restrict the container to 3 CPUs (0, 1, and 2). See the docker run docs for more details.
The preferred way to limit CPU usage of a container is with a fractional limit on CPUs:
docker run --cpus 2.5 myapplication:latest
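Note that --cpus throttles CPU time without changing the number of CPUs the container sees, whereas --cpuset-cpus does change what nproc reports. A quick check (a sketch assuming Linux containers and a host with at least three CPUs; the image name is just an example):
$ docker run --rm --cpuset-cpus="0-2" centos:7 nproc
3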

How can you get the docker image size at runtime?

How can you analyze the size of a Docker container at runtime?
I have a Docker image that is 1.5 GB:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
my-image     latest   36ccda75244c   3 weeks ago   1.49GB
Much more space is required on the hard drive while the container is running. How can I get this storage usage displayed? With dive, docker inspect, etc. you only get information about the packed image.
You can use docker stats for that, here's an example:
$ docker run --rm -d nginx
e3c2fd
$ docker stats --all --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" --no-stream e3c2fdc
CONTAINER   CPU %   MEM USAGE / LIMIT
e3c2fdc     0.00%   2.715MiB / 9.489GiB
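Note that docker stats reports CPU and memory rather than disk. If it is disk usage you are after, docker ps -s shows the size of each container's writable layer, and docker system df -v gives a per-container breakdown; a sketch:
$ docker ps -s --format "table {{.Names}}\t{{.Size}}"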

Why is OpenJDK Docker Container ignoring Memory Limits in Kubernetes?

I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The Wildfly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).
If I set a memory limit, this limit is completely ignored by the container when running in Kubernetes. But it is respected when I run it in plain Docker on the same machine:
1. Run Wildfly in Docker with a memory limit of 300M:
$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final
Verify memory usage:
$ docker stats
CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT   MEM %    NET I/O     BLOCK I/O   PIDS
515e549bc01f   java-wildfly-test   0.14%   219MiB / 300MiB     73.00%   906B / 0B   0B / 0B     43
As expected, the container does NOT exceed the memory limit of 300M.
2. Run Wildfly in Kubernetes with a memory limit of 300M:
Now I start the same container within Kubernetes.
$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'"
Verify memory usage:
$ kubectl top pod java-wildfly-test
NAME                CPU(cores)   MEMORY(bytes)
java-wildfly-test   1089m        441Mi
The memory limit of 300M is totally ignored and exceeded immediately.
Why does this happen? Both tests can be performed on the same machine.
Answer
The reason for the high values was incorrect metric data coming from the kube-prometheus project. After uninstalling kube-prometheus and installing the metrics-server instead, all data is displayed correctly by kubectl top, and it now shows the same values as docker stats. I do not know why kube-prometheus computed wrong data; in fact, it was reporting double the actual values for all memory data.
I'm placing this answer as community wiki since it might be helpful for the community: kubectl top was displaying incorrect data. The OP solved the problem by uninstalling the kube-prometheus stack and installing the metrics-server.
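A hedged way to confirm which side is wrong in a case like this is to read the limit and usage straight from the pod's cgroup files and compare them with what the metrics pipeline reports (paths shown for cgroup v1 nodes):
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes
If the limit file shows roughly 300000000 for a 300M limit, the limit is applied, and a higher number from kubectl top points at the metrics source rather than the kubelet.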

How to see memory allotment for Docker Engine?

Setting up a docker instance of Elasticsearch Cluster.
In the instructions it says
Make sure Docker Engine is allotted at least 4GiB of memory
I am SSHing into the host, not using Docker Desktop.
How can I see the resource allotments from the command line?
Reference URL: https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html
I had the same problem with Docker Desktop on Windows 10 while running Linux containers on WSL2.
I found this issue: https://github.com/elastic/elasticsearch-docker/issues/92 and tried to apply similar logic from the solution there.
I entered the WSL instance's terminal with the
wsl -d docker-desktop command.
Then I ran the sysctl -w vm.max_map_count=262144 command to set the 'allotted memory'.
After these steps I could run Elasticsearch's docker-compose example.
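One caveat, based on general sysctl behavior rather than the Elastic docs: a value set with sysctl -w is lost when the VM restarts. On a regular Linux host you would persist it via /etc/sysctl.conf; whether this survives inside the ephemeral docker-desktop distro may vary:
$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p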
I'd like to go about it with just one command:
docker stats --all
This will give output such as the following:
$ docker stats --all
CONTAINER ID   NAME                    CPU%    MEM USAGE/LIMIT    MEM%    NET I/O     BLOCK I/O   PIDS
5f8a1e2c08ac   my-compose_my-nginx_1   0.00%   2.25MiB/1.934GiB   0.11%   1.65kB/0B   7.35MB/0B   2
To modify the limits:
When writing your docker-compose.yml, include the following under the service you want to constrain (if you'd like to set a 4 GiB limit; 4 GiB is 4096 MiB). Note that the resources key lives under deploy:
deploy:
  resources:
    limits:
      memory: 4096M
    reservations:
      memory: 4096M
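For context, a minimal sketch of where that block sits in a compose file; the service name and image tag here are placeholders:
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    deploy:
      resources:
        limits:
          memory: 4096M
Note that deploy.resources is honored by Docker Swarm and by recent docker compose releases; the legacy v2 compose file format used mem_limit instead.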

Docker --cpus use cpu cores or processors to limit usage?

Docker has --cpus to constrain CPU usage for a container.
According to the docs, it will
Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
However, on my machine I see:
# cat /proc/cpuinfo | grep "cpu cores" | tail -n 1
8
# cat /proc/cpuinfo | grep "processor" | wc -l
16
Does it make sense to set --cpus=8 if I want to set a 50% limit for the container? Or would that be 100%?
I don't see a clear answer in either the Docker documentation or the cgroups manual.
I saw a detailed explanation of the differences between physical CPUs, virtual CPUs, and cores here, but it doesn't clarify what I should use for my limits with Docker.
By default, the process in the container runs with no CPU restrictions and can use all available processor time, competing with other processes on the Linux host. Setting --cpus configures the cgroup settings to limit the processes inside that container to that many CPUs' worth of CPU time. The unit is logical processors (the 16 that /proc/cpuinfo lists as "processor" entries), so on your machine --cpus=8 caps the container at 50% of total CPU capacity. This is enforced by the kernel, but the underlying hardware is still visible in /proc/cpuinfo; instead you should look at the cgroup settings:
$ docker run -it --rm --cpus 0.75 busybox sh
/proc # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
75000
/proc # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
Contrast with an unlimited container:
$ docker run -it --rm busybox sh
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
-1
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
/ #
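On hosts that have switched to cgroup v2, the quota and period live in a single cpu.max file instead; a sketch of the same check there:
$ docker run -it --rm --cpus 0.75 busybox cat /sys/fs/cgroup/cpu.max
75000 100000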

Setting absolute limits on CPU for Docker containers

I'm trying to set absolute limits on Docker container CPU usage. The CPU shares concept (docker run -c <shares>) is relative, but I would like to say something like "let this container use at most 20ms of CPU time every 100ms." The closest answer I can find is a hint from the mailing list on using cpu.cfs_quota_us and cpu.cfs_period_us. How does one use these settings with docker run?
I don't have a strict requirement for either LXC-backed Docker (e.g. pre-0.9) or later versions; I just need to see an example of these settings being used. Any links to relevant documentation or helpful blogs are very welcome too. I am currently using Ubuntu 12.04, and under /sys/fs/cgroup/cpu/docker I see these options:
$ ls /sys/fs/cgroup/cpu/docker
cgroup.clone_children cpu.cfs_quota_us cpu.stat
cgroup.event_control cpu.rt_period_us notify_on_release
cgroup.procs cpu.rt_runtime_us tasks
cpu.cfs_period_us cpu.shares
I believe I've gotten this working. I had to restart my Docker daemon with --exec-driver=lxc, as I could not find a way to pass cgroup arguments to libcontainer. This approach worked for me:
# Run with absolute limit
sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=50000" -it ubuntu bash
The necessary CFS docs on bandwidth limiting are here.
I briefly confirmed with sysbench that this does seem to introduce an absolute limit, as shown below:
$ sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=10000" --lxc-conf="lxc.cgroup.cpu.cfs_period_us=50000" -it ubuntu bash
root@302e651c0686:/# sysbench --test=cpu --num-threads=1 run
<snip>
total time: 90.5450s
$ sudo docker run --lxc-conf="lxc.cgroup.cpu.cfs_quota_us=20000" --lxc-conf="lxc.cgroup.cpu.cfs_period_us=50000" -it ubuntu bash
root@302e651c0686:/# sysbench --test=cpu --num-threads=1 run
<snip>
total time: 45.0423s
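For readers on modern Docker, the LXC exec driver is long gone, but the same CFS settings are exposed directly as run flags, so no daemon changes should be needed; a sketch:
# 20ms of CPU time every 100ms, i.e. 20% of one CPU
$ docker run --cpu-period=100000 --cpu-quota=20000 -it ubuntu bash
# shorthand since Docker 1.13: let Docker derive the quota
$ docker run --cpus=0.2 -it ubuntu bash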
