Is there a way to pass --cpuset-cpus and --cpuset-mems to docker swarm - docker

I was looking through the documentation and it seems that it is not possible to isolate processes in Docker Swarm to specific cores, like you can with numactl or --cpuset-cpus. With docker run you do it like this (a 16-CPU machine, using the 8 CPUs on the second socket, cores 8-15):
/usr/bin/docker run --detach --name myproc --cpus 8 --cpuset-cpus 8-15 --cpuset-mems 1 -- privateregistry:5000/myimage:v1 -c '/bin/myverycpuintensiveprocess.sh'
And I can confirm the processes do not jump from core to core but stay pinned on CPUs 8-15. They also use memory from socket 1.
From the 'create service' documentation I see that the closest you have is --reserve-cpu and --reserve-memory, but those only control container placement.
Is this level of control simply unavailable in Docker Swarm? I was also looking at K8s, and it seems to have the same limitation.
Thanks,

It is not supported at the moment; there is an open issue on GitHub, and people should vote for the feature.
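Until that lands, one possible workaround, as a sketch only (it assumes you have shell access to the node the task runs on; the node name and filter below are hypothetical), is to pin the service to a known node and then apply the cpuset flags to the underlying container with docker update, which does accept them:
# Constrain the service to a known node so you know where the task runs
docker service create --name myproc --constraint 'node.hostname == node1' privateregistry:5000/myimage:v1
# On node1 itself, pin the task's container after the fact
docker update --cpuset-cpus 8-15 --cpuset-mems 1 $(docker ps -q --filter name=myproc)
Note that Swarm knows nothing about this, so the pinning is lost whenever the task is rescheduled.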

Related

How can I make Docker trigger higher CPU frequencies

OK, so my title may not actually point toward a solution; however, this is my problem.
I am running a Python 3 Jupyter notebook inside a Docker container on my Windows 10 Kaby Lake laptop (2 physical cores, 4 logical cores).
I noticed that while doing heavy computing there, the CPU usage shown in Task Manager is very low (~15%).
Going into the per-process details, VBoxHeadless.exe actually uses 24% of the processor, which matches the docker stats command (it reports 97-100% CPU usage) and therefore makes sense from a single-core point of view.
My actual issue is that even though one thread is saturated in terms of CPU time, Windows (I guess) does not decide that it would be useful to raise the clock speed, so the CPU runs at 1.7GHz (with other apps in high-performance mode, I usually hit the 3.5GHz maximum the machine is capable of).
So how can I induce higher clock speeds (the nominal 2.7GHz, or the 3.5GHz maximum), which would probably roughly double my single-threaded speed, from Docker itself or from inside Windows 10?
You need to configure the Docker machine that runs Docker. If you haven't created a custom one, the default machine named 'default' only has access to one CPU.
You can check the full configuration of this docker-machine by running:
docker-machine inspect default
You need to purge this default machine and recreate it:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-disk-size "400000" --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
You can check all the available configuration options for the machine by running:
docker-machine create --help
Defining CPU shares can help you, but it is not exactly what you are asking for.
CPU limits are based on shares: a share is a relative weight describing how much processing time one process should get compared to another. If the CPU is idle, a process will use all the available resources. If a second process requires the CPU, the available CPU time is divided according to the weighting.
e.g. The --cpu-shares parameter defines a relative weight (the default is 1024). If one container defines a share of 768 and another defines a share of 256, then when both compete for the same CPU the first gets 75% of the CPU time (768 out of 1024 total shares) and the second gets 25%.
In the example below, the first container is allowed 75% of the share and the second is limited to 25%:
docker run -d --name p1 --cpuset-cpus 0 --cpu-shares 768 image_name
docker run -d --name p2 --cpuset-cpus 0 --cpu-shares 256 image_name
sleep 5
docker stats --no-stream
docker rm -f p1 p2
It's important to note that a process can have 100% of the share, whatever its defined weight, if no other processes are running.

Docker CPU and memory too low

I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but the performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM are allocated. Is that one of the reasons for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointer would be helpful. Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (Docker icon in the taskbar).
On Windows, Docker runs in a Hyper-V VM without dynamic memory, so the memory will not be available to your host system even when it isn't used.
With docker info you can find out how many resources are available.
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The CPU and memory arguments of docker run limit the resources for a single container; if they are not set, there is no upper limit.
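As a sketch of both steps (the --cpus flag assumes Docker 1.13 or later; helloworld is just the image name from the question):
# See how many CPUs and how much memory the daemon can hand out
docker info | grep -E 'CPUs|Total Memory'
# Set explicit upper limits: up to 4 CPUs' worth of time and 8 GB of RAM
docker run --rm -d -p 32772:8080 --cpus 4 --memory 8g helloworld
On Docker for Windows/Mac these limits can never exceed what the VM itself was given in the settings.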

What are the Docker RUN params for mimicking IronWorker memory constraints?

In the past I've run into trouble when hosting my workers in a cloud infrastructure because of memory constraints that weren't faithfully reproduced when testing the code locally on my overpowered machine.
IronWorker is one such cloud provider; it limits workers in its multi-tenant infrastructure to 380 MB. Luckily, with their switch to Docker, I can hope to catch problems early by asking my local Docker container to apply an artificial memory limit when testing.
But I'm not sure which parameters from https://docs.docker.com/engine/reference/run/ are the right ones for setting a 380 MB limit ... any advice?
Does the logic from https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/#_example_managing_the_memory_shares_of_a_container still apply?
You'll want to use --memory, for example, based on the node README:
docker run --memory 380M --rm -e "PAYLOAD_FILE=hello.payload.json" -v "$PWD":/worker -w /worker iron/node node hello.js
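One caveat worth adding, as an assumption about how faithful you need the limit to be: with --memory alone the container may still be allowed to swap, so to mimic a hard 380 MB ceiling you can pin --memory-swap to the same value (setting it equal to --memory disables swap):
docker run --memory 380M --memory-swap 380M --rm -e "PAYLOAD_FILE=hello.payload.json" -v "$PWD":/worker -w /worker iron/node node hello.js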

How to restrict Docker containers to run on specific CPU cores or group of CPU cores

I have a 4-core CPU. I want to restrict the 10 containers I am running to only 2 of the cores and leave the rest free.
Is this possible, and how can I do it?
You can achieve this with the cpuset constraint option when running the container.
Example from Docker reference docs:
$ docker run -ti --cpuset-cpus="1,3" ubuntu:14.04 /bin/bash
This means your container can run on CPUs 1 and 3 (CPUs 0 and 2 will not be used).
There are other CPU parameters as well for the Docker run command.
Please see documentation for more details:
https://docs.docker.com/reference/run/#runtime-constraints-on-cpu-and-memory
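For the scenario in the question (10 containers on a 4-core machine, confined to 2 cores), a minimal sketch looks like this; the image and container names are placeholders:
# Pin all 10 containers to cores 0 and 1, leaving cores 2 and 3 free
for i in $(seq 1 10); do
  docker run -d --name "worker$i" --cpuset-cpus="0,1" ubuntu:14.04 sleep infinity
done
Every container then competes for cores 0-1 only; the scheduler never places their threads on cores 2-3.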

Limit disk size and bandwidth of a Docker container

I have a physical host machine running Ubuntu 14.04. It has a 100 GB disk and 100 Mbit of network bandwidth. I installed Docker and launched 10 containers. I would like to limit each container to a maximum of 10 GB of disk and 10 Mbit of network bandwidth.
After going through the official documents and searching the Internet, I still can't find a way to allocate a specified disk size and network bandwidth to a container.
I think this may not be possible in Docker directly; maybe we need to bypass Docker. Does this mean we should use something 'underlying', such as LXC or cgroups? Can anyone give some suggestions?
Edit:
@Mbarthelemy, your suggestion seems to work, but I still have some questions about disk:
1) Is it possible to allocate a different size (such as 20G, 30G, etc.) to each container? You said it is hardcoded in Docker, so it seems impossible.
2) I use the commands below to start the Docker daemon and a container:
docker -d -s devicemapper
docker run -i -t training/webapp /bin/bash
Then I use df -h to view the disk usage; it gives the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/docker-longid 9.8G 276M 9.0G 3% /
/dev/mapper/Chris--vg-root 27G 5.5G 20G 22% /etc/hosts
From the above I think the maximum disk a container can use is still larger than 10G. What do you think?
I don't think this is possible right now using Docker default settings. Here's what I would try.
About disk usage: you could tell Docker to use the devicemapper storage backend instead of AuFS. This way each container would run on a block device (a devicemapper dm-thin target) limited to 10GB (this is a Docker default; luckily enough, it matches your requirement!).
According to this link, it looks like the latest versions of Docker now accept advanced storage backend options. Using the devicemapper backend, you can now change the default container rootfs size with --storage-opt dm.basesize=20G (applied to any newly created container).
To change the storage backend: use the --storage-driver=devicemapper Docker option. Note that your previous containers won't be seen by Docker anymore after the change.
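Putting both pieces together, the daemon invocation would look roughly like this (a sketch; the old docker -d syntax matches the question, and newer versions use dockerd instead):
# Start the daemon with the devicemapper backend and a 20G per-container rootfs
docker -d --storage-driver=devicemapper --storage-opt dm.basesize=20G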
About network bandwidth: you could tell Docker to use LXC under the hood with the -e lxc option.
Then, create your containers with a custom LXC directive to put them into a traffic class:
docker run --lxc-conf="lxc.cgroup.net_cls.classid = 0x00100001" your/image /bin/stuff
Check the official documentation about how to apply bandwidth limits to this class.
I've never tried this myself (my setup uses a custom OpenVswitch bridge and VLANs for networking, so bandwidth limitation is different and somewhat easier), but I think you'll have to create and configure a different class.
Note: the --storage-driver=devicemapper and -e lxc options are for the Docker daemon, not for the Docker client you're using when running docker run ...
Newer Docker releases have --device-read-bps and --device-write-bps.
You can use, for example (the image here is just a placeholder):
docker run -it --device-read-bps /dev/sda:10mb ubuntu /bin/bash
More info here:
https://blog.docker.com/2016/02/docker-1-10/
If you have access to the containers, you can use tc for bandwidth control inside them.
e.g., in your entrypoint script you can add:
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
to cap bandwidth at 240 kbit/s with a 300 kbit burst and 50 ms of latency.
You also need to pass --cap-add=NET_ADMIN to the docker run command if you are not running the containers in privileged mode.
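A minimal entrypoint sketch, assuming the container's interface is eth0 and the image ships tc (the iproute2 package); the script path is hypothetical:
#!/bin/sh
# entrypoint.sh: shape egress traffic, then hand off to the real command
tc qdisc add dev eth0 root tbf rate 240kbit burst 300kbit latency 50ms
exec "$@"
Start the container with something like:
docker run --cap-add=NET_ADMIN --entrypoint /entrypoint.sh your/image /bin/stuff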
1) Is it possible to allocate other size (such as 20G, 30G etc) to each container? You said it is hardcoded in Docker so it seems impossible.
To answer this question, please refer to Resizing Docker containers with the Device Mapper plugin.
