Docker Scheduling Strategy Clarification - docker-swarm

I am new to Docker and was experimenting with running multiple containers on a swarm using the spread strategy. The swarm had 3 nodes, each node already had 2 containers, and I ran 3 more docker run commands so that each node got 1 more container.
I was running the containers without specifying memory (-m) or CPU in the docker run command. Afterwards I ran "docker info" and noticed that Reserved Memory was 0.
I re-ran a container specifying -m, and this time Reserved Memory was not 0.
My question: when running containers without -m, why is Reserved Memory shown as 0, even though the containers spawned without -m are still using memory? For example, if there are 10 containers already running on a single node (i.e. the host memory is effectively full) and I try to start a container on the same host with -m 1G, it may not have space to reserve, so will it fail?

You are correct: even when you don't specify a memory reservation with -m, the containers still use memory.
You see 0 because that is the default when no reservation is specified. The number is not the actual memory consumption on the node; it is a scheduling setting rather than a real-time report, so 0 makes sense.
This was discussed on issue #1819 in the docker swarm project.
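As a minimal sketch of the difference (the image name and the 1 GB reservation are just placeholders), compare a run with and without -m and then check the manager's output again:
# without a reservation: the container still uses memory, but the scheduler books nothing
docker run -d --name no_reservation nginx
# with a reservation: the scheduler subtracts 1 GB from the node's schedulable memory
docker run -d --name with_reservation -m 1g nginx
# Reserved Memory reported by "docker info" only reflects the second container
docker info | grep -i "reserved memory"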

Related

Increase the RAM of a docker container itself (NOT DOCKER)

Hey guys, I am running a Docker container on WSL 2, but the container itself (it runs a QEMU VM via KVM) is limited to 4 GB. I need a lot more than 4 GB, at least 8 GB, to run the things I want to run in the QEMU VM. (There's a reason why I am running QEMU, and no, I cannot go without it.)
I am running Docker Desktop, and if I inspect the container it says the following.
I have edited the .wslconfig file, set the limit to 20GB and the swap file to 1, and I have tried a command with docker run insert_docker_name_here it --memory 8000 -m,
which then says it can't find the container for some reason, even though it appears in Docker Desktop.
I have looked on the internet for an answer, but everything points to the .wslconfig file, --memory, or some vague answer that doesn't help at all. Is there a way I can edit my container and set it to use 8 GB or more?
Please help. I am new to Docker and would appreciate the assistance.
There is no memory limit on Docker containers by default; --memory only specifies the upper limit of memory the container may use. You need to examine how the container is started and remove any limit set there.
WSL 2 also has no memory limit by default and will grab as much memory as it needs; the value in .wslconfig is likewise an upper limit. If you remove all the limits, it should use all available memory.
That leaves QEMU itself. Have you checked what the guest RAM size is? (The -m parameter on the QEMU command line.)
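If you do want an explicit 8 GB cap rather than no limit at all, a minimal sketch would look like this (container and image names are placeholders; the QEMU guest size is still set separately inside the container):
# start the container with an explicit 8 GB memory limit
docker run -d --name qemu_guest --memory 8g your_image_name
# confirm the limit Docker applied, in bytes (0 means no limit)
docker inspect qemu_guest --format '{{.HostConfig.Memory}}'
# inside the container, the guest RAM is still whatever QEMU was given, e.g. -m 8192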

Is there a way to pass --cpuset-cpus and --cpuset-mems to docker swarm

I was looking through the documentation and it seems that it is not possible to pin processes in Docker Swarm to specific cores, the way you can with numactl or --cpuset-cpus. With docker run you do it like this (16-CPU machine, using the 8 CPUs 8-15 on the second socket):
/usr/bin/docker run --detach --name myproc --cpus 8 --cpuset-cpus 8-15 --cpuset-mems 1 -- privateregistry:5000/myimage:v1 -c '/bin/myverycpuintensiveprocess.sh'
And I can confirm the processes do not jump from core to core but stay pinned on CPUS 8-15. Also they will use memory from socket 1 as well.
From the 'docker service create' documentation, the closest options I see are --reserve-cpu and --reserve-memory, but those only control container placement.
Is this level of control simply not available in Docker Swarm? I was also looking at K8s and it seems to have the same limitation.
Thanks,
It is not supported at the moment. There is an open issue on GitHub for it; people should vote for this feature.
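For what it is worth, the closest Swarm-level controls today are the reservation and limit flags on docker service create; a hedged sketch (reusing the image from the question) would be the following, which influences placement and caps usage but does not pin a task to specific cores or NUMA nodes:
docker service create --name myproc \
  --reserve-cpu 8 --limit-cpu 8 \
  --reserve-memory 4g --limit-memory 4g \
  privateregistry:5000/myimage:v1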

Memsql crashing when running out of memory

We are using memsql for real-time analytics. We have created a Kafka connector which continuously consumes from Kafka and pushes to memsql. After some time memsql crashes due to memory issues.
Is there any way to monitor how much memory memsql consumes and how it autoscales?
(From your 'docker' tag, I am guessing that you run memsql in a container.)
Assuming that you have access to the host where your 'memsql' container is running and you know the container you want to monitor, the memory usage information is available from the kernel's cgroups stats on the host.
You could run the following on the Docker host system:
container="your_container_name_or_ID"
# resolve the full container ID and its cgroup parent (if any)
set -- $(docker inspect "$container" --format "{{.Id}} {{.HostConfig.CgroupParent}}")
id="$1"
cg="$2"
# containers started without a custom cgroup parent live under /docker
if [ -z "$cg" ] ; then
  cg=/docker
fi
# peak memory usage (in bytes) of every process in the container's cgroup
cat "/sys/fs/cgroup/memory${cg}/${id}/memory.max_usage_in_bytes"
This will print the peak memory usage of all processes running in the container's cgroup (that is, the initial process started by the container's ENTRYPOINT and any child processes it might have run).
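If you want a live view rather than the peak, Docker can report current usage directly; a one-liner like this (container name is a placeholder) is enough:
# one-shot snapshot of current memory and CPU usage for the container
docker stats --no-stream your_container_name_or_ID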
If you want to check how much memory your memsql nodes are using, run:
select * from information_schema.mv_nodes
There are more details about how memsql uses memory here:
https://help.memsql.com/hc/en-us/articles/115001091386-What-Is-Using-Memory-on-My-Leaves-

How can I make Docker trigger higher CPU frequencies

OK, so my title may not actually point to a possible solution, however this is my problem.
I am running a Python 3 Jupyter notebook inside a Docker container on my Windows 10 Kaby Lake laptop (2 physical cores, 4 virtual cores).
I noticed that while doing heavy computation there, the CPU usage shown in the task monitor is very low (~15%).
Looking at the details for each process, VBoxHeadless.exe actually uses 24% of the processor, which matches the docker stats command reporting 97-100% CPU usage, and therefore makes sense from a single-core point of view.
My actual issue is that even though one thread is saturated in terms of CPU time, Windows (I guess) does not decide that it would be useful to raise the CPU clock, and therefore it runs at 1.7GHz (with other apps in high-performance mode, I usually hit the maximum 3.5GHz the machine is capable of).
So how can I induce higher clock speeds (nominal 2.7GHz or max 3.5GHz), considering they would probably double my single-threaded speed, from Docker itself or inside Windows 10?
You need to configure the Docker machine that runs Docker. If you haven't created a custom one, the default machine named 'default' only has access to one CPU.
You can check all the configuration for this docker-machine by running:
docker-machine inspect default
You need to purge this default machine and recreate it:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-disk-size "400000" --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
You can check all the available configuration options for the machine by running:
docker-machine create --help
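To verify that the recreated machine actually exposes both CPUs, a quick check like this should work (assuming the setup above and a Docker CLI recent enough to support docker info --format):
# point the client at the new machine, then ask the daemon how many CPUs it sees
eval "$(docker-machine env default)"
docker info --format 'CPUs: {{.NCPU}}'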
Defining CPU shares can help you, but not exactly in the way you might expect.
CPU shares are a relative weight that determines how much processing time one container should get compared to another. If the CPU is otherwise idle, a container will use all the available resources; if a second container requires the CPU, the available CPU time is shared according to the weights.
For example, the --cpu-shares parameter sets such a weight (the default is 1024). If one container defines a share of 768 while another defines a share of 256, the first container gets 768/(768+256) = 75% of the CPU time and the second gets 25% when both are busy.
In the example below, the first container will be allowed about 75% of the CPU and the second will be limited to about 25%.
# both containers pinned to CPU 0 so they compete for the same core
docker run -d --name p1 --cpuset-cpus 0 --cpu-shares 768 image_name
docker run -d --name p2 --cpuset-cpus 0 --cpu-shares 256 image_name
# give them a moment to run, then take a one-shot usage snapshot and clean up
sleep 5
docker stats --no-stream
docker rm -f p1 p2
It's important to note that a container can use 100% of the CPU, regardless of its weight, if no other containers are competing for it.

Docker CPU and memory too low

I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM are allocated. Is that one of the reasons for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but with no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointer will be helpful.
Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your system even if it isn't used.
With docker info you can find out how many resources are available.
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The CPU and memory arguments of docker run limit the resources for a single container; if they are not set, there is no upper limit.
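To see what the daemon (or the Docker Desktop VM) can actually hand out to containers, a quick check such as this can help; the --format fields assume a reasonably recent Docker CLI:
# total CPUs and memory the Docker daemon reports as available
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}} bytes'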
