How can I measure a process's memory usage?

I measure a process's memory usage with the following command:
ps -eo size,pid,user,command --sort -size | head -n 3
The process's reported memory keeps increasing, but when I run free, the machine's total memory usage doesn't increase.
Some people use ps with the size keyword, while others use the rss and vsz keywords.
How can I measure a process's actual memory usage?
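For a quick comparison of the different metrics, here is a minimal sketch that inspects a single process (the PID 1234 is just a placeholder for the process you are watching):
$ ps -o pid,rss,vsz,size,comm -p 1234      # resident, virtual, and swappable size side by side
$ grep -E 'VmRSS|VmSize' /proc/1234/status # the kernel's own accounting for the same process
rss (resident set size) counts pages actually held in physical RAM, while vsz counts the entire mapped address space, which is why the two can differ so widely; rss is usually the closer answer to "actual memory usage", though it overcounts pages shared between processes.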

Related

Why Docker Dashboard and docker stats show different memory usage values?

I wonder why the Docker Desktop Dashboard and the docker stats command show very different memory usage values.
Memory usage is the only significant difference; all the other metrics are pretty close.

What unit does the docker run "--memory" option expect?

I'd like to constrain the memory of a Docker container to 1 GB. According to the documentation, we can specify the desired memory limit using the --memory option:
$ docker run --memory <size> ...
However, the documentation does not describe the format or units for the argument anywhere on the page:
--memory, -m    Memory limit
What units should I supply to --memory and other related options like --memory-reservation and --memory-swap? Just bytes?
Classic case of RTFM on my part. The --memory option supports a unit suffix, so we don't need to calculate the exact byte count:
-m, --memory=""
Memory limit (format: <number>[<unit>], where unit = b, k, m or g)
Allows you to constrain the memory available to a container. If the host supports swap memory, then the -m memory setting can be larger than physical RAM. If a limit of 0 is specified (not using -m), the container's memory is not limited. The actual limit may be rounded up to a multiple of the operating system's page size (the value would be very large, that's millions of trillions).
So, to start a container with a 1 GB memory limit as described in the question, both of these commands will work:
$ docker run --memory 1g ...
$ docker run --memory 1073741824 ...
The --memory-reservation and --memory-swap options also support this convention.
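For instance, a sketch combining the three options (the 750m reservation and 2g swap ceiling are arbitrary illustration values):
$ docker run --memory 1g --memory-reservation 750m --memory-swap 2g ...
Here --memory-reservation sets a soft limit below the 1 GB hard cap, and --memory-swap limits memory plus swap combined to 2 GB.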
Taken from the Docker documentation:
Limit a container’s access to memory
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Most of these options take a positive integer, followed by a suffix of b, k, m, g, to indicate bytes, kilobytes, megabytes, or gigabytes.
This page also includes some extra information about memory limits when running docker on Windows.
docker run -m 50m <imageId> <command...>
This is how it should be given. It limits the Docker container to 50 MB of memory; as soon as it tries to use more than that, the container will be shut down (OOM-killed).
However, using free -m on the host you won't see anything related to the container's memory usage; you have to look inside the container to see its allowed memory.
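One way to confirm the limit from inside the container is to read the memory cgroup directly. A minimal sketch, assuming a cgroup v1 host (under cgroup v2 the file is /sys/fs/cgroup/memory.max instead):
$ docker run --rm -m 50m ubuntu cat /sys/fs/cgroup/memory/memory.limit_in_bytes
52428800
52428800 bytes is exactly 50 MiB, confirming the limit took effect even though free on the host reports nothing.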

How memory allocation happens in Docker

I am having a Docker image of virtual size 6.5 GB
REPOSITORY    TAG      IMAGE ID       CREATED             VIRTUAL SIZE
Image_Name    latest   d8dcd701981e   About an hour ago   6.565 GB
but my system has only 4 GB of RAM, yet the container runs at a good speed. I am really confused about how RAM is allocated to Docker containers. Is there any limit to the RAM a container can be allocated, given that in the end a Docker container is just another isolated process running on the operating system?
The virtual size of an image has nothing to do with memory allocation.
If your huge image, once launched as a container, does very little, it won't reserve much memory (or consume much CPU).
For more on memory allocation, see this answer: you can limit the maximum memory allocation at runtime, independently of the image size.
For example:
$ docker run -ti -m 300M --memory-swap -1 ubuntu:14.04 /bin/bash
We set a memory limit and disabled the swap memory limit, which means the processes in the container can use up to 300M of memory and as much swap as they need (if the host supports swap memory).
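To see this for yourself, start a container from a large image with a trivial workload and watch its actual footprint stay small. A sketch, where big_image stands in for the 6.5 GB image from the question:
$ docker run -d --name idle-test -m 300M big_image sleep infinity
$ docker stats --no-stream idle-test   # MEM USAGE will be a few MB, nowhere near the image's 6.5 GB on disk
The virtual size is disk space consumed by the image's layers; RAM is only consumed by whatever processes the container actually runs.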

Docker container CPU allocation

I have created a container:
docker run -c=20 -i -t ubuntu:latest /bin/bash
I tried to use the -c flag to control CPU usage and cap it at 50%. When I run md5sum /dev/urandom inside the container, it uses up 100% of a CPU on the host machine.
The -c flag for the docker run command modifies the container’s CPU share weighting relative to the weighting of all other running containers.
It does not restrict the container's use of CPU from the host machine.
You can use the --cpu-quota flag to limit CPU usage, for example:
$ docker run -ti --cpu-quota=50000 ubuntu:latest /bin/bash
The --cpu-quota flag is usually used in conjunction with --cpu-period. See the Docker run reference for more details:
https://docs.docker.com/reference/run/#runtime-constraints-on-resources
It seems that you are running a single container, so this is the expected result.
You might find this blog post helpful.
Every new container will have 1024 shares of CPU by default. This value does not mean anything when speaking of it alone. But if we start two containers and both will use 100% CPU, the CPU time will be divided equally between the two containers because they both have the same CPU shares (for the sake of simplicity I assume that there are no other processes running).
Take a look here; this is apparently what you were looking for:
https://docs.docker.com/engine/reference/run/#cpu-period-constraint
The default CPU CFS (Completely Fair Scheduler) period is 100ms. We can use --cpu-period to set the period of CPUs to limit the container’s CPU usage. And usually --cpu-period should work with --cpu-quota.
Examples:
$ docker run -it --cpu-period=50000 --cpu-quota=25000 ubuntu:14.04 /bin/bash
If there is 1 CPU, this means the container can get 50% CPU worth of run-time every 50ms.
Definition of period and quota:
Within each given "period" (microseconds), a group is allowed to consume only up to "quota" microseconds of CPU time. When the CPU bandwidth consumption of a group exceeds this limit (for that period), the tasks belonging to its hierarchy will be throttled and are not allowed to run again until the next period.
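You can verify what period and quota a container actually received by reading the CPU cgroup from inside it. A minimal sketch, assuming a cgroup v1 host (under cgroup v2 both values live together in /sys/fs/cgroup/cpu.max):
$ cat /sys/fs/cgroup/cpu/cpu.cfs_period_us   # e.g. 50000 for the example above
$ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # e.g. 25000, i.e. half of each period
A quota of -1 means no bandwidth limit is applied.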

Docker CPU percentage

Is there any way I can get the CPU percentage inside the Docker container, and not outside of it? docker stats DOCKER_ID shows the percentage, which is exactly what I need, but I need it as a variable. I need to get the CPU percentage inside the container itself and do some operations with it.
I have looked into different things such as cgroups and the Docker REST API, but they do not provide a CPU percentage. If there is a way to get the CPU percentage inside the container, and not outside of it, that would be perfect. I found one solution, provided by someone in the link below, which still works outside the container via the REST API; however, I did not really get how to calculate the percentage.
Get Docker Container CPU Usage as Percentage
You can install Google cAdvisor with Axibase Time-Series Database storage driver. It will collect and store CPU utilization measured both in core units as well as in percentages.
Screenshots with examples of how CPU is reported are located at the bottom of the page: https://axibase.com/products/axibase-time-series-database/writing-data/docker-cadvisor/
In a centralized configuration, the ATSD container itself can ingest metrics from multiple cAdvisor instances installed on multiple docker hosts.
EDIT 1: A one-liner to compute the total CPU usage of all processes running inside the container. Adjust the -d parameter to change the interval between samples and smooth spikes out:
top -b -d 5 -n 2 | awk '$1 == "PID" {block_num++; next} block_num == 2 {sum += $9;} END {print sum}'
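Alternatively, you can derive a percentage yourself from the cgroup accounting files, entirely from inside the container. A minimal sketch, assuming cgroup v1 (the file moves and changes format under cgroup v2) and an arbitrary 1-second sampling window:
start=$(cat /sys/fs/cgroup/cpuacct/cpuacct.usage)   # cumulative container CPU time, nanoseconds
sleep 1
end=$(cat /sys/fs/cgroup/cpuacct/cpuacct.usage)
echo $(( (end - start) / 10000000 ))                # percent of one CPU core over the interval
The division by 10000000 turns a nanosecond delta measured over one second into a percentage of a single core; values above 100 mean the container used more than one core.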
I have used ctop, which gives a more graphical view than docker stats.
But I found that it was showing a CPU percentage way higher than what top was showing for the system. Basically, it shows usage relative to the root process, and Docker containers run as child processes of it.
To illustrate with an example
First, find the root process under which all the containers run: docker-containerd-shim.
...the Docker architecture is broken into four components: Docker engine, containerd, containerd-shim and runC. The binaries are respectively called docker, docker-containerd, docker-containerd-shim, and docker-runc.
- https://hackernoon.com/docker-containerd-standalone-runtimes-heres-what-you-should-know-b834ef155426
root 1843 1918 0 Aug31 ? 00:00:00 docker-containerd-shim 611bd9... /var/run/docker/libcontainerd/611bd92.... docker-runc
You can see all the containers that are running using the command
pstree -p 1918
Now say that we are interested in seeing the CPU consumption of fluentd.
An easy way to get the child PID of this is:
pstree -p 1918 | grep fluentd
which gives 21670.
Now you can run top -p 21670 to see the CPU share of this child process, and top -p 1918 to see the overall CPU of the parent process.
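If you would rather not parse pstree output, docker inspect can hand you the container's main PID directly. A sketch, using the fluentd container name from the example above:
pid=$(docker inspect --format '{{.State.Pid}}' fluentd)
top -p "$pid"
This resolves the same child PID without depending on the shape of the process tree.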
With cAdvisor collecting to Prometheus and viewed in Grafana, this was the closest and most accurate representation of the actual CPU percentage used by the container, in relation to the host machine.
cTop and docker stats give 23% as the CPU percentage, while the actual CPU percentage of the Docker parent process is around 2%; the cAdvisor output in Grafana shows the most 'accurate' value of the container's CPU percentage relative to the host.
