So I have a Docker container and its RAM usage grows over time, as if it's leaking memory.
At first I thought my app was leaking memory, but in local tests I saw nothing (the app is written in Rust with axum). Then I checked the process itself and its RAM usage is different from what Docker reports.
docker stats says the container is using 50 MB, but htop says something else.
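For context, this is roughly how the two numbers can be compared side by side (a sketch; the container name "myapp" is a placeholder, and the cgroup path assumes cgroup v1 with the cgroupfs driver):
docker stats --no-stream myapp
PID=$(docker inspect --format '{{.State.Pid}}' myapp)
grep VmRSS /proc/$PID/status                # roughly what htop's RES column shows
CID=$(docker inspect --format '{{.Id}}' myapp)
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/docker/$CID/memory.stat   # the cgroup's own breakdown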
After some research, I found out that kubectl logs -f <pod> reads logs from files, i.e., the .log files to which the Docker containers running inside the pods have written their logs. In my case, the Docker container is an application that I have written. Now, say I have disabled logging in my application, expecting that the RAM usage on the system will go down.
With logging enabled in my application, I kept track of CPU and MEM usage
Commands used:
a. top | grep dockerd
b. top | grep containerd-shim
I also kept track of CPU and MEM usage with logging disabled.
But I didn't find any difference. Could anyone explain what is happening here internally?
The simple explanation:
Logging doesn't use a lot of RAM. As you said, the logs are written to disk as soon as they are processed, so they are not stored in memory beyond the short-lived variables used per log entry.
Logging doesn't use a lot of CPU cycles either. A CPU is typically around five orders of magnitude (roughly 10^5 times) faster than a spinning disk (less for SSDs), so the CPU can get a lot of other work done while a log entry is being written to disk. In other words, you'll barely notice any difference in CPU usage when disabling the logs.
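If you want to double-check that yourself, one-shot samples are easier to read than grepping interactive top; a rough sketch (the process names are the ones from the question, and newer Docker releases may name the shim containerd-shim-runc-v2):
ps -C dockerd -o pid,rss,%cpu,cmd            # RSS in kB for the Docker daemon
ps -C containerd-shim -o pid,rss,%cpu,cmd    # and for the container shims
top -b -n 1 | grep -E 'dockerd|containerd-shim'   # batch-mode top, one snapshot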
I have a flask web server running with docker-compose.
When the container first starts, it uses around 200 MB of memory, but after some use it goes up to 1 GB (according to docker stats).
However, once it reaches high consumption, the container's memory usage does not decrease even when idle, and it eventually hits the limit, causing dead uWSGI workers and stopped processes.
Can someone explain what happens behind the curtain and how to have the container release unused memory?
Looks like this is a bug and it is being tracked here:
https://github.com/kubernetes/kubernetes/issues/70179
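While that issue is open, one workaround is to cap the container so it cannot grow to the host limit; a hedged sketch (the image/container names and the 1g value are placeholders, and note this bounds the usage rather than making the container release memory):
docker run -m 1g --memory-swap 1g my-flask-image    # start with a hard memory cap
docker update --memory 1g --memory-swap 1g web      # or adjust an already running container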
I'm working on a project where I divided the application into multiple Docker images, and I'm running around 5 containers, each with its own image, following the "one process per container" rule.
For that I'm using a BeagleBone Black, which has only 480 MB of memory. Sometimes, when the application has been running for a while, it crashes due to an out-of-memory exception.
So I was wondering: if I make the images smaller, would they consume less memory? How is memory allocated for each container?
What if I group some images/containers into a single running container with more than one process? Would that use less memory?
When a process is killed with an OOM exception, it is not related to the Docker image size; it is about the amount of memory the process is trying to use.
You can specify some memory limits on each container when you run them.
For example, this will limit your container to 100 MB of memory:
docker run -m 100M busybox
However, if your applications exceed their allocated memory they will be killed with an OOM exception. The problem you are having is likely because the applications you are running have a minimum memory requirement that is higher than what your BeagleBone Black can provide.
Grouping processes into one container will not help; they will still use the same amount of memory.
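To see which container is actually the heavy consumer, and whether the kernel really OOM-killed it, something like this can help (a sketch; "mycontainer" is a placeholder name):
docker stats --no-stream                                                          # memory per running container
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' mycontainer    # true plus exit code 137 usually means an OOM kill
dmesg | grep -i 'killed process'                                                  # kernel-side evidence of the OOM killer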
We're running Docker containers of NiFi 1.6.0 in production and have come across a memory leak.
Once started, the app runs just fine; however, after a period of 4-5 days, the memory consumption on the host keeps increasing. When checked in the NiFi cluster UI, the JVM heap usage is hardly around 30%, but the memory at the OS level goes to 80-90%.
On running the docker stats command, we found that the NiFi Docker container is consuming the memory.
After collecting the JMX metrics, we found that the RSS memory keeps growing. What could be the potential cause of this? In the JVM tab of the cluster dialog, young GC also seems to be happening in a timely manner, with old GC counts shown as 0.
How do we go about identifying what's causing the RSS memory to grow?
You need to try and replicate that in a non-Docker environment, because with Docker the reported memory is known to rise.
As I explained in "Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container", docker has some bugs (like issue 10824 and issue 15020) which prevent an accurate report of the memory consumed by a Java process within a Docker container.
That is why a plugin like signalfx/docker-collectd-plugin proposed (two weeks ago, in its PR -- Pull Request -- 35) to "deduct the cache figure from the memory usage percentage metric":
Currently the calculation for memory usage of a container/cgroup being returned to SignalFX includes the Linux page cache.
This is generally considered to be incorrect, and may lead people to chase phantom memory leaks in their application.
For a demonstration on why the current calculation is incorrect, you can run the following to see how I/O usage influences the overall memory usage in a cgroup:
docker run --rm -ti alpine
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
dd if=/dev/zero of=/tmp/myfile bs=1M count=100
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You should see that the usage_in_bytes value rises by 100MB just from creating a 100MB file. That file hasn't been loaded into anonymous memory by an application, but because it's now in the page cache, the container's memory usage appears to be higher.
Deducting the cache figure in memory.stat from the usage_in_bytes shows that the genuine use of anonymous memory hasn't risen.
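A minimal sketch of that subtraction, run inside the same container (cgroup v1 paths, as in the commands above):
CACHE=$(awk '$1=="cache" {print $2}' /sys/fs/cgroup/memory/memory.stat)
USAGE=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
echo $((USAGE - CACHE))    # usage excluding the page cache, in bytes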
The SignalFx metric now differs from what is seen when you run docker stats, which uses the calculation I had here.
It seems like knowing the page cache use for a container could be useful (though I am struggling to think of when), but knowing it as part of an overall percentage usage of the cgroup isn't useful, since it then disguises your actual RSS memory use.
In a garbage-collected application with a max heap size as large as, or larger than, the cgroup memory limit (e.g. the -Xmx parameter for Java, or .NET Core in server mode), the tendency will be for the percentage to get close to 100% and then just hover there, assuming the runtime can see the cgroup memory limit properly.
If you are using the Smart Agent, I would recommend using the docker-container-stats monitor (to which I will make the same modification to exclude cache memory).
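Separately, to see how much of that RSS the JVM itself accounts for, the NMT data mentioned above can be queried at runtime; a rough sketch (the PID is a placeholder, and NMT has to be enabled when the JVM starts):
# start the JVM with: -XX:NativeMemoryTracking=summary
jcmd <pid> VM.native_memory summary    # committed memory the JVM knows about
grep VmRSS /proc/<pid>/status          # the RSS the OS reports, for comparison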
Yes, NiFi in Docker has memory issues; it shoots up after a while and restarts on its own. The non-Docker setup, on the other hand, works absolutely fine.
Details:
Docker:
Run it with a 3 GB heap size, and immediately after startup it consumes around 2 GB. Run some processors and the machine's fan spins heavily; it restarts after a while.
Non-Docker:
Run it with a 3 GB heap size and it takes 900 MB and runs smoothly (observed via jconsole).
I'm using the "gci" container-optimised VM image running on GCP.
My program has a spike in disk reads, and I think in RAM usage, and then crashes.
The problem is I cannot see RAM usage, only disk and CPU.
I cannot install any utilities on the "gci" VM; I can only run tools inside a Debian-based container, "toolbox".
How do I record RAM usage?
There are several commands in Linux that can be used to check RAM, for example vmstat, top, free, and /proc/meminfo. See this link: https://www.linux.com/blog/5-commands-check-memory-usage-linux
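If the goal is to record RAM usage over time rather than just look at it once, a small loop inside the toolbox container is usually enough; a rough sketch (the interval and output path are arbitrary):
while true; do
  date >> /tmp/mem.log
  free -m >> /tmp/mem.log    # host-wide memory, since /proc/meminfo is not namespaced
  sleep 10
done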