How can I make Docker trigger higher CPU frequencies

OK, so my title may not actually point to a possible solution, but here is my problem.
I am running a Python 3 Jupyter notebook inside a Docker container on my Windows 10 Kaby Lake laptop (2 physical cores, 4 logical cores).
I noticed that while doing heavy computing from there, the CPU usage shown in Task Manager is very low (~15%).
Going into the details for each process, VBoxHeadless.exe actually uses 24% of the processor, which matches the docker stats command reporting 97-100% CPU usage, and therefore makes sense for a single-core operation.
My actual issue is that even though one thread is saturated in terms of CPU time, Windows (I guess) does not decide that it may actually be useful to raise the CPU frequency, so it runs at 1.7GHz (with other apps in High Performance mode, I usually hit the maximum 3.5GHz the machine is capable of).
So, how can I induce the higher clock speeds (nominal 2.7GHz or max 3.5GHz), which would probably double my single-threaded speed, from Docker itself or from inside Windows 10?

You need to configure the docker-machine VM that runs Docker. If you haven't created a custom one, the default machine, named 'default', will only have access to one CPU.
You can check all the configuration for this docker-machine by running:
docker-machine inspect default
You need to purge this default machine and recreate it:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-disk-size "400000" --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
You can check all the available configuration options for the machine by running:
docker-machine create --help
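To verify that the recreated machine actually sees both CPUs, a quick sanity check (assuming the machine is named 'default' and that nproc is available in the VM):
docker-machine ssh default nproc
eval $(docker-machine env default)
docker info | grep CPUs
nproc prints the CPU count inside the VM, and docker info reports a 'CPUs:' line from the daemon's point of view.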

Defining CPU shares can help you, but not quite in the way you might expect.
CPU limits based on shares are a weight between how much processing time one process should get compared to another. If a CPU is idle, a process will use all the available resources. If a second process requires the CPU, the available CPU time is shared based on the weighting.
e.g. the --cpu-shares parameter defines a relative weight (the default is 1024). If one container defines a share of 768 and another a share of 256, then out of the 1024 total the first container holds a 75% weighting and the second 25%.
So below, the first container will be allowed 75% of the CPU time and the second will be limited to 25%.
docker run -d --name p1 --cpuset-cpus 0 --cpu-shares 768 image_name
docker run -d --name p2 --cpuset-cpus 0 --cpu-shares 256 image_name
sleep 5
docker stats --no-stream
docker rm -f p1 p2
It's important to note that a process can have 100% of the share, regardless of its defined weight, if no other processes are running.
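A quick way to confirm this, reusing the hypothetical image_name from the demo above (assuming the image keeps the CPU busy):
docker run -d --name solo --cpuset-cpus 0 --cpu-shares 256 image_name
sleep 5
docker stats --no-stream
docker rm -f solo
Despite its low 256 weight, solo should report close to 100% CPU, because nothing else is competing for core 0.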

Related

Increasing the memory allocation to docker daemon (dockerd) on Linux [duplicate]

I've seen that on Windows and Mac it's very easy to change the RAM containers are given - you just go into the GUI. But how do you do this on Linux, where it's a CLI instead of a GUI?
The Docker docs mention an -m flag, but that flag doesn't give any response (it just prints the entire help output again), so I don't know whether it worked. It also seems specific to containers, whereas I'd like to change the global default.
Lastly, is there a way to check the current default RAM, so I can make sure whatever I do in the end actually worked?
On native Linux, Docker can use all available host memory. It uses lightweight kernel-based isolation that shares resources like CPU cores and memory (and, on modern installations, disk space) with the host through standard kernel mechanisms. There isn't a control or setting to limit or increase this.
On other platforms Docker runs a hidden Linux VM to be able to run a Linux kernel to use these isolation mechanisms, and the Docker Desktop memory control affects the memory allocation for that VM.
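If what you actually want on Linux is a cap for a single container rather than a global default, the per-container flags still apply. A minimal sketch (the image name is a placeholder):
docker run -m 512m --memory-swap 512m your/image
Here -m caps RAM at 512 MiB, and setting --memory-swap to the same value prevents the container from using swap on top of that.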
This is how I "check" the Docker container memory:
Open a Linux command shell, then:
Step 1: Check what containers are running.
docker ps
Step 2: Note down the 'CONTAINER ID' of the container you want to check and issue the following command:
docker container stats <containerID>
eg:
docker container stats c981
This will give an output like:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
c981c9482284 registry 0.00% 4.219MiB / 1.944GiB 0.21% 9.66kB / 0B 0B / 0B 14
The 'MEM USAGE / LIMIT' column gives you the actual memory usage and the default memory allocated.
Note: press Ctrl+C to leave this view and return to the command prompt.
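If you want the number in a script rather than the interactive view, docker stats also accepts a --format template, e.g. with the container ID from above:
docker stats --no-stream --format '{{.MemUsage}}' c981
This prints just the 'MEM USAGE / LIMIT' value and exits.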

Docker CPU and memory too low

I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM are allocated. Is that one of the reasons for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointer will be helpful.
Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your system even if it isn't used.
With docker info you can find out how many resources are available.
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The CPU and memory arguments of docker run limit the resources for a single container; if they are not set, there is no upper limit.
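For reference, the corrected per-container syntax on newer Docker versions would look something like this (a sketch reusing the asker's helloworld image; note that -c 3 above set a tiny CPU-shares weight, not a core count):
docker run --rm --cpus=2 -m 4g -p 32772:8080 -d helloworld
On Docker for Windows/Mac these values still cannot exceed what the VM itself was given in the settings.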

Docker container CPU allocation

I have created a container:
docker run -c=20 -i -t ubuntu:latest /bin/bash
I tried to use the -c flag to control CPU usage and cap it at 50%. When I run md5sum /dev/urandom inside the container, it uses up 100% of a CPU on the host machine.
The -c flag for docker run command modifies the container’s CPU share weighting relative to the weighting of all other running containers.
It does not restrict the container's use of CPU from the host machine.
You can use the --cpu-quota flag to limit CPU usage, for example:
$ docker run -ti --cpu-quota=50000 ubuntu:latest /bin/bash
The --cpu-quota is usually used in conjunction with --cpu-period. Please see more details on the Docker run reference document:
https://docs.docker.com/reference/run/#runtime-constraints-on-resources
It seems that you are running a single container, so this is the expected result.
You might find this blog post helpful.
Every new container will have 1024 shares of CPU by default. This value does not mean anything when speaking of it alone. But if we start two containers and both will use 100% CPU, the CPU time will be divided equally between the two containers because they both have the same CPU shares (for the sake of simplicity I assume that there are no other processes running).
Take a look here, this is apparently what you were looking for:
https://docs.docker.com/engine/reference/run/#cpu-period-constraint
The default CPU CFS (Completely Fair Scheduler) period is 100ms. We can use --cpu-period to set the period of CPUs to limit the container’s CPU usage. And usually --cpu-period should work with --cpu-quota.
Examples:
$ docker run -it --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash
If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms.
period and quota definition:
Within each given "period" (microseconds), a group is allowed to consume only up to "quota" microseconds of CPU time. When the CPU bandwidth consumption of a group exceeds this limit (for that period), the tasks belonging to its hierarchy will be throttled and are not allowed to run again until the next period.
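On newer Docker versions (1.13 and later), the --cpus flag is a shorthand for the same CFS constraint; the following should behave like --cpu-period=100000 --cpu-quota=50000:
$ docker run -it --cpus=0.5 ubuntu:14.04 /bin/bash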

Docker CPU percentage

Is there any way to get the CPU percentage inside the Docker container, not outside of it? docker stats DOCKER_ID shows the percentage, which is exactly what I need, but I need it as a variable. I need to get the CPU percentage inside the container itself and do some operations with it.
I have looked into different things such as cgroups and the Docker REST API, but they do not provide a CPU percentage. A way to get the CPU percentage inside the container, not outside of it, would be perfect. I found one solution someone provided at the link below, which still works outside the container via the REST API; however, I did not really understand how to calculate the percentage from it.
Get Docker Container CPU Usage as Percentage
You can install Google cAdvisor with the Axibase Time-Series Database storage driver. It will collect and store CPU utilization measured both in core units and in percentages.
Screenshots with examples of how CPU is reported are located at the bottom of the page: https://axibase.com/products/axibase-time-series-database/writing-data/docker-cadvisor/
In a centralized configuration, the ATSD container itself can ingest metrics from multiple cAdvisor instances installed on multiple docker hosts.
EDIT 1: One-liner to compute the total CPU usage of all processes running inside the container. Adjust the -d parameter to change the interval between samples and smooth spikes out:
top -b -d 5 -n 2 | awk '$1 == "PID" {block_num++; next} block_num == 2 {sum += $9;} END {print sum}'
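Another approach that works entirely from inside the container is to read the cgroup CPU accounting file twice and divide by the wall-clock interval. A minimal sketch, assuming cgroup v1 with cpuacct mounted at the usual path (the path may vary by setup):
T1=$(cat /sys/fs/cgroup/cpuacct/cpuacct.usage)   # cumulative CPU time in nanoseconds
sleep 1
T2=$(cat /sys/fs/cgroup/cpuacct/cpuacct.usage)
echo "$(( (T2 - T1) / 10000000 ))%"              # percent of one core over the 1-second window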
I have used ctop, which gives a more graphical view than docker stats.
But I found that it was showing a CPU percentage way higher than what top was showing for the system. Basically it reports relative to the root process, and Docker containers run as child processes.
To illustrate with an example
First find the root process under which all the containers run, docker-containerd-shim:
...the Docker architecture is broken into four components: Docker engine, containerd, containerd-shim and runC. The binaries are respectively called docker, docker-containerd, docker-containerd-shim, and docker-runc.
- https://hackernoon.com/docker-containerd-standalone-runtimes-heres-what-you-should-know-b834ef155426
root 1843 1918 0 Aug31 ? 00:00:00 docker-containerd-shim 611bd9... /var/run/docker/libcontainerd/611bd92.... docker-runc
You can see all the containers that are running using the command
pstree -p 1918
Now say that we are interested in seeing the CPU consumption of fluentd.
An easy way to get the child PID of this is:
pstree -p 1918 | grep fluentd
which gives 21670.
Now you can run top -p 21670 to see the CPU share of this child process, and top -p 1918 to see the overall CPU of the parent process.
With cAdvisor collecting to Prometheus and viewed in Grafana, this was the closest and most accurate representation of the actual CPU percentage used by the container, relative to the host machine.
ctop and docker stats give 23% as the CPU percentage. The actual CPU percentage of the Docker parent process is around 2%, while the cAdvisor output in Grafana shows the most 'accurate' value of the container's CPU percentage relative to the host.
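If running the command from the host is acceptable and you just need the value as a shell variable, docker stats takes a --format template:
CPU=$(docker stats --no-stream --format '{{.CPUPerc}}' <containerID>)
This yields something like 23.00%, i.e. the same figure the live docker stats view shows.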

Limit disk size and bandwidth of a Docker container

I have a physical host machine with Ubuntu 14.04 running on it. It has a 100G disk and 100M network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10G disk and 10M network bandwidth.
After going through the official documents and searching the Internet, I still can't find a way to allocate a specified disk size and network bandwidth to a container.
I think this may not be possible in Docker directly; maybe we need to bypass Docker. Does this mean we should use something 'underlying', such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
2) I use the command below to start the Docker daemon and container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
then I use df -h to view the disk usage, it gives the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-longid 9.8G 276M 9.0G 3% /
/dev/mapper/Chris--vg-root 27G 5.5G 20G 22% /etc/hosts
From the above I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: You could tell Docker to use the DeviceMapper storage backend instead of AuFS. This way each container would run on a block device (Devicemapper dm-thin target) limited to 10GB (this is a Docker default, luckily enough it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size with --storage-opt dm.basesize=20G (which will be applied to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
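Putting the two together, the daemon invocation would look something like this (old daemon syntax matching the commands in this thread; 20G is just an example size):
docker -d --storage-driver=devicemapper --storage-opt dm.basesize=20G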
About network bandwidth: you could tell Docker to use LXC under the hood: use the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class:
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom OpenVswitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...
Newer releases have --device-read-bps and --device-write-bps.
You can use:
docker run --device-read-bps=/dev/sda:10mb your/image
More info here:
https://blog.docker.com/2016/02/docker-1-10/
If you have access to the containers you can use tc for bandwidth control within them.
e.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to get a rate of 240 kbit/s, a burst of 300 kbit, and 50 ms of latency.
You also need to pass --cap-add=NET_ADMIN to the docker run command if you are not running the containers as root.
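For example, a hypothetical invocation granting the container the capability that tc needs (image name is a placeholder):
docker run --cap-add=NET_ADMIN your/image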
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.
