I'm trying to get a breakdown of the memory usage of my pods running on Kubernetes. I can see a pod's memory usage through kubectl top pod, but what I need is a full breakdown of where that memory is used.
My container might download or write new files to disk, so I'd like to see, at any given moment, how much of the used memory is taken up by each file and how much by the running software.
Currently there's no real disk, just tmpfs, so every file consumes the allocated memory resources. That's okay, as long as I can inspect and know how much memory is where.
I couldn't find anything like that. It seems that cAdvisor helps with memory statistics, but it just uses docker/cgroups, which doesn't give the breakdown I described.
A better solution would be to install a metrics server along with Prometheus and Grafana in your cluster. Prometheus scrapes the metrics, which Grafana can then display as graphs. This might be useful.
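For example, a rough sketch of installing those components with Helm (the repository URLs and chart names below are the commonly used community ones, so double-check them against the current docs):
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install metrics-server metrics-server/metrics-server
$ helm install monitoring prometheus-community/kube-prometheus-stack
The kube-prometheus-stack chart bundles Prometheus, Grafana and a set of default dashboards, so pod memory graphs are available without wiring everything up by hand.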
If you want to see what the processes inside the container are consuming, you can exec into the container and monitor them:
$ docker exec -it <container-name> watch ps aux
Moreover, you can check docker stats.
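For a quick per-container summary, docker stats also accepts a format string (the template fields below are the standard ones it supports):
$ docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"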
The following Linux command will summarize the sizes of the directories:
$ du -h
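Since every file in your pod lives on tmpfs and therefore counts against the memory limit, it can also help to look at the tmpfs mounts directly (the pod name and path below are placeholders):
$ kubectl exec <pod-name> -- df -h
$ kubectl exec <pod-name> -- du -sh /tmp/*
df -h shows how full each tmpfs mount is, which is the part of the memory that is attributable to files rather than to running processes.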
Related
I am trying to get my container to run on multiple CPUs.
To achieve this on Windows, I changed .wslconfig to have:
[wsl2]
memory= 8GB
processors=4
Using docker stats, I can see the available RAM reduced from 12.xx to 7.764, so this file has changed the behaviour.
However, if I run my container with the following command:
docker run -d --cpus="4" "CONTAINERNAME"
and then check the stats using docker stats, I still see the container maxing out at 100% CPU. Since the container has more CPUs available, I was expecting it to be able to go above 100% now.
What am I doing wrong?
I wonder why the Docker Desktop Dashboard and the docker stats command show very different memory usage values.
Notice that Memory Usage is the only significant difference; all the other values are pretty close.
We are using memsql for real-time analytics. We have created a Kafka connector which continuously consumes from Kafka and pushes to memsql. After some time, memsql crashes due to memory issues.
Is there any way to monitor how much memory memsql consumes and how it autoscales?
(by your 'docker' tag, I am guessing that you run 'memsql' in a container)
Assuming that you have access to the host where your 'memsql' container is running and you know the container you want to monitor, the memory usage information is available from the kernel's cgroups stats on the host.
You could run the following on the Docker host system:
container="your_container_name_or_ID"
set -- `docker inspect $container --format "{{.Id}} {{.HostConfig.CgroupParent}}"`
id="$1"
cg="$2"
if [ -z "$cg" ] ; then
cg=/docker
fi
cat /sys/fs/cgroup/memory${cg}/${id}/memory.max_usage_in_bytes
This will print the peak memory usage of all processes running in the container's cgroup (that is, the initial process started by the container's ENTRYPOINT and any child processes it might have run).
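If you want a breakdown rather than just the peak, the memory.stat file in the same cgroup directory splits current usage into page cache, anonymous memory (RSS), mapped files and so on; file data held on tmpfs is accounted as page cache/shmem there. The path below assumes cgroup v1, like the snippet above (cgroup v2 exposes a memory.stat with similar but differently named counters):
grep -E '^(cache|shmem|rss|mapped_file|swap) ' /sys/fs/cgroup/memory${cg}/${id}/memory.stat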
If you want to check how much memory your memsql nodes are using, run:
select * from information_schema.mv_nodes
There are more details about how memsql uses memory here:
https://help.memsql.com/hc/en-us/articles/115001091386-What-Is-Using-Memory-on-My-Leaves-
OK, so my title may not actually be linked to a possible solution; however, this is my problem.
I am running a Python 3 Jupyter notebook inside a Docker container on my Windows 10 Kaby Lake (2 physical cores, 4 virtual cores) laptop.
I noticed that while doing heavy computing there, my CPU usage as seen in Task Manager is very low (~15%).
When going into the details for each process, VBoxHeadless.exe actually uses 24% of the processor, which matches the docker stats command showing 97-100% CPU usage, and therefore makes sense from a single-core point of view.
My actual issue is that even though one thread is saturated in terms of CPU time, Windows (I guess) does not decide that it would be useful to raise the CPU clock, and so it runs at 1.7 GHz (with other apps in high-performance mode, I usually hit the maximum 3.5 GHz the computer is capable of).
Therefore, how can I induce higher clock speeds (nominal 2.7 GHz or max 3.5 GHz), considering that they would probably double my single-threaded speed, from Docker itself or from inside Windows 10?
You need to configure the docker machine running Docker. If you haven't created a custom one, the default docker machine, named 'default', will only have access to one CPU.
You can check all the configuration for this docker-machine by running:
docker-machine inspect default
You need to purge this default machine and recreate it:
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-disk-size "400000" --virtualbox-cpu-count "2" --virtualbox-memory "2048" default
You can check all the available configuration options for the machine by running:
docker-machine create --help
Defining CPU shares can help you, but it is not a hard limit.
CPU limits based on shares are a relative weight: they determine how much processing time one process gets compared to another. If a CPU is idle, a process will use all the available resources; if a second process requires the CPU, the available CPU time is shared based on the weighting.
e.g. The --cpu-shares parameter sets a relative weight (the default is 1024). If one container defines a share of 768 while another defines 256, the first container gets roughly 75% of the CPU time (768 out of 1024 total shares) and the second roughly 25%, when both are competing for the same CPU.
Below the first container will be allowed to have 75% of the share. The second container will be limited to 25%.
docker run -d --name p1 --cpuset-cpus 0 --cpu-shares 768 image_name
docker run -d --name p2 --cpuset-cpus 0 --cpu-shares 256 image_name
sleep 5
docker stats --no-stream
docker rm -f p1 p2
It's important to note that a process can use 100% of the CPU, regardless of its defined weight, if no other processes are running.
I am a newbie to the Docker world. I could successfully build and run a container with Tomcat, but performance is very poor. I logged into the running system and found that only 2 CPU cores and 4 GB of RAM are allocated. Is that one of the reasons for the bad performance, and if so, how can I allocate more resources?
I tried the following command, but no luck:
docker run --rm -c 3 -p 32772:8080 --memory=8Gb -d helloworld
Any pointer will be helpful.
Thanks in advance.
Do you use Docker for Windows/Mac? Then you can change it in the settings (Docker icon in the taskbar).
On Windows, Docker runs in Hyper-V without dynamic memory, so the memory will not be available to your system even if it isn't used.
With docker info you can find out how many resources are available.
The bad performance may also be caused by very slow file access on Docker for Mac.
On Linux, Docker has no upper limit by default.
The CPU and memory arguments of docker run limit the resources for one container; if they are not set, there is no upper limit.
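For example, a minimal sketch (the image name, port mapping and limit values are placeholders to adjust for your setup):
docker run -d --cpus="2" --memory="4g" -p 32772:8080 helloworld
--cpus caps how much CPU time the container may use and --memory caps its RAM; on Docker for Windows/Mac both are still bounded by whatever you give the underlying VM in the settings.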