Steps to reproduce
Tell us about your environment:
Puppeteer version: 1.6.1
Platform / OS version: linux
URLs (if applicable):
Node.js version: 8
What steps will reproduce the problem?
We deployed Docker on Linux and run a health check that takes a screenshot every minute. The problem is that the container's cache memory keeps increasing even though we have disabled nearly all caching; the RSS does not increase.
This is the relevant part of the code:
const browser = await puppeteer.launch({ args: ['--no-sandbox', '--disable-dev-shm-usage', '--media-cache-size=1', '--disk-cache-size=1', '--disable-application-cache', '--disable-session-storage', '--user-data-dir=/dev/null'] });
const page = await browser.newPage(); // the page used below comes from the launched browser
await page.setCacheEnabled(false);
But if we execute "# sync; echo 2 > /proc/sys/vm/drop_caches" to drop dentries and inodes, the cache memory decreases rapidly. Since we have disabled Chrome's caching, we don't know what makes the cache memory grow.
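For anyone checking this on their own host, free shows the same effect: the buff/cache column is the page cache that keeps growing, and the drop_caches command above empties it without touching the processes' RSS:
free -m                                    # note the buff/cache column vs. available
sync; echo 2 > /proc/sys/vm/drop_caches    # the command quoted above (needs root)
free -m                                    # buff/cache shrinks; the processes' RSS is unchanged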
I think BMitch is right. Linux grows the disk cache into unused RAM, and docker stats includes that disk cache. The reason docker stats keeps growing is that the disk cache is growing, and EC2 monitors the docker stats figure. So I think this is an EC2 monitoring problem. Thanks, guys.
Related
Using Docker Desktop (19.03.13) with 6 containers on Windows 10, with 16GB of RAM.
In docker stats each container consumes 20-500 MB; all together they consume ~1GB.
But in Task Manager, Docker eats ~10GB and crashes from lack of system memory.
How can I check what consumes so much memory in Docker?
And how can I prevent this?
Try creating a .wslconfig file at the root of your user folder C:\Users\<my-user> to adjust how much memory and how many processors Docker will use.
This is the content of the .wslconfig file.
[wsl2]
memory=2GB # Limits VM memory in WSL 2 up to 2GB
processors=2 # Makes the WSL 2 VM use two virtual processors
Then, restart the computer. You will find that the Vmmem process only takes the amount of RAM you defined previously.
You can learn more here.
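If a full reboot is inconvenient, shutting down the WSL VM from an elevated prompt is usually enough for the new limits to be picked up the next time Docker Desktop starts:
wsl --shutdown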
I guess you are using the new WSL 2 based engine. Try switching the Docker engine back to Hyper-V by opening Docker settings -> General -> unchecking "Use the WSL 2 based engine".
To explain:
I noticed this started happening to me when the WSL 2 engine was introduced; I switched to it right away since it was the new engine, and the memory issues started arising then.
Restarting/closing Docker did not free the memory, and I noticed in Task Manager that Vmmem was the process eating all the memory, so I had to force close it (which caused Docker to stop working).
The last thing I did was switch the Docker engine back to Hyper-V, which solved my high memory usage.
If you are using WSL 2, set the memory limit in .wslconfig to half of your RAM. I don't know why, but I had the same problem with 8GB of RAM.
This is my .wslconfig
[wsl2]
memory=4GB # I have 8GB RAM
processors=2
And the result was good: the consumption is reasonable now. At the moment I have Docker running with 8 images.
Although this problem is already marked as SOLVED,
there is still another reason for this in recently updated versions.
You might have allocated too many resources to Docker's hyperkit VM.
Go to Settings -> Resources -> Advanced
and check whether you have given it too many resources there.
My Docker now takes less than 2% CPU.
After updating .wslconfig to be:
[wsl2]
memory=8GB
swap=2000
processors=4
... and then restarting Docker, the CPU consumption was still over 80% and there were 5 Docker Desktop processes (each taking 17-18%) in Windows Task Manager. I reset Docker to factory defaults and the CPU was still pegged at 80% or more.
I then deleted the .docker folder (on Windows the path is %USERPROFILE%/.docker) as suggested by jmichalek-fp. I took care to do a Shift-DEL so as not to move it to the recycle bin, because I remember that in the past recycled items were still found by processes that held a link to the file.
After Factory Reset, then increasing .wslconfig resources, then deleting .docker folder and then restarting Docker, it is now running only one Docker Desktop process, and, with a NodeJs app running in it, it is consuming between 0.5% and 2% CPU.
I found "delete .docker folder" in this github issue: https://github.com/docker/for-win/issues/12266
As far as I know, docker stats does not show RAM reservations. Try putting RAM limits on the containers using the -m flag. There is some information on how to control resources with Docker here:
https://docs.docker.com/config/containers/resource_constraints/?spm=a2c41.12663380.0.0.59ed566dAqUZPu
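For example, a hard limit of 512MB on a single container would look roughly like this (the image and container names are placeholders):
docker run -d -m 512m --memory-swap 512m --name mycontainer myimage:latest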
I am guessing on Windows there is something similar to what exists on MacOS.
Open your docker app and go to the dashboard
Click any container
Click Stats
You will get information regarding your CPU, RAM usage, disk read & write, and network usage.
When I had memory issues, which used to happen frequently, I would set up alias scripts that I could chain together to stop/kill/restart the containers and do whatever setup I needed.
There is no preventing Docker from behaving the way it behaves unless you want to start contributing and making pull requests. This isn't an uncommon issue. Docker is free; I recommend working around its shortcomings.
I am running my applications on a bare-metal Kubernetes cluster that uses Ubuntu 18.04. For a long time I had problems with cached memory: some of my components were filling the cache, and although the used memory was around 1% of the machine, the cache was around 90%, so kubelet was evicting all the pods on that machine.
Recently I also faced disk pressure, which was caused by the log files (at /var/lib/docker/containers/*/*-json.log) of pods running on my machines. After I activated Docker's log rotation by adding:
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
to /etc/docker/daemon.json, I noticed an interesting side effect. As you can see from the chart below, at the same time that I added the log rotation, the cache memory also disappeared. My question is: what is the relation between the Docker log files and the cache memory?
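For anyone reproducing this: options in /etc/docker/daemon.json only take effect after a daemon restart, and only for containers created afterwards. The per-container log sizes can be checked like this (a systemd host is assumed):
sudo systemctl restart docker                         # apply the new daemon.json to newly created containers
sudo du -sh /var/lib/docker/containers/*/*-json.log   # per-container json log sizes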
Linux caches disk access to RAM to speed up future read requests, this is expected and desirable behavior. When applications need more RAM, this disk cache may be pruned. Also, if large files are deleted, I'd expect their cache would also be removed.
The issue here is whether you count this disk cache when looking at available memory. Typically you don't, since applications can use that memory when needed. But some tools like Kubernetes appear to count it when evicting pods: https://github.com/kubernetes/kubernetes/issues/43916
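Inside a container you can see how much of the reported figure is page cache rather than anonymous memory (cgroup v1 paths assumed):
grep -E '^(rss|cache) ' /sys/fs/cgroup/memory/memory.stat   # rss = anonymous memory, cache = page cache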
I am working on a Java service that basically creates files in a network file system to store data. It runs in a k8s cluster on Ubuntu 18.04 LTS.
When we began to limit the memory in Kubernetes (limits: memory: 3Gi), the pods began to be OOMKilled by Kubernetes.
At the beginning we thought it was a memory leak in the Java process, but analyzing more deeply we noticed that the problem is the kernel memory.
We validated that by looking at the file /sys/fs/cgroup/memory/memory.kmem.usage_in_bytes
We isolated the case to only creating files (without Java) with the dd command, like this:
for i in {1..50000}; do dd if=/dev/urandom bs=4096 count=1 of=file$i; done
And with the dd command we saw that the same thing happened (the kernel memory grew until OOM).
After k8s restarted the pod, this is what a describe pod showed:
Last State: Terminated
Reason: OOMKilled
Exit Code: 143
Creating files causes the kernel memory to grow, and deleting those files causes it to decrease. But our service stores data, so it creates a lot of files continuously, until the pod is OOMKilled and restarted.
We tested limiting the kernel memory using a stand-alone Docker container with the --kernel-memory parameter and it worked as expected: the kernel memory grew to the limit and did not rise any further. But we did not find any way to do that in a Kubernetes cluster.
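For reference, the stand-alone test that behaved as expected was along these lines (the image and the limit values are placeholders; --kernel-memory is deprecated in newer Docker releases):
docker run --rm -it -m 3g --kernel-memory 512m ubuntu:18.04 bash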
Is there a way to limit the kernel memory in a K8S environment?
Why does creating files cause the kernel memory to grow, and why is it not released?
Thanks for all this info, it was very useful!
For my app, I solved this by creating a side container that runs a cron job every 5 minutes with the following command:
echo 3 > /proc/sys/vm/drop_caches
(note that you need the side container to run in privileged mode)
It works nicely and has the advantage of being predictable: every 5 minutes, your memory cache will be cleared.
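A minimal sketch of that side container's job, written as a plain loop rather than cron for brevity (the container must run privileged, as noted above; the 5-minute interval is the one mentioned):
while true; do
  sync                                # flush dirty pages first
  echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
  sleep 300                           # repeat every 5 minutes
done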
We're running Docker containers of NiFi 1.6.0 in production and have come across a memory leak.
Once started, the app runs just fine; however, after a period of 4-5 days, the memory consumption on the host keeps increasing. When checked in the NiFi cluster UI, the JVM heap used is hardly around 30%, but the memory at the OS level goes to 80-90%.
On running the docker stats command, we found that the NiFi Docker container is consuming the memory.
After collecting the JMX metrics, we found that the RSS memory keeps growing. What could be the potential cause of this? In the JVM tab of the cluster dialog, young GC also seems to be happening in a timely manner, with old GC counts shown as 0.
How do we go about identifying what's causing the RSS memory to grow?
You need to replicate that in a non-Docker environment, because with Docker the reported memory is known to rise.
As I explained in "Difference between Resident Set Size (RSS) and Java total committed memory (NMT) for a JVM running in Docker container", docker has some bugs (like issue 10824 and issue 15020) which prevent an accurate report of the memory consumed by a Java process within a Docker container.
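If you want to compare those two figures yourself, Native Memory Tracking (the NMT mentioned in that answer) can be enabled and queried with standard JDK tools; the flags and jcmd command are standard, but the jar name and PID below are placeholders (NiFi normally passes extra JVM arguments through its bootstrap.conf):
java -XX:NativeMemoryTracking=summary -jar app.jar   # enable NMT (small overhead)
jcmd <pid> VM.native_memory summary                  # committed-memory view to compare with the RSS shown by docker stats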
That is why a plugin like signalfx/docker-collectd-plugin mentions (two weeks ago) in its PR (Pull Request) 35 that it will "deduct the cache figure from the memory usage percentage metric":
Currently the calculation for memory usage of a container/cgroup being returned to SignalFX includes the Linux page cache.
This is generally considered to be incorrect, and may lead people to chase phantom memory leaks in their application.
For a demonstration on why the current calculation is incorrect, you can run the following to see how I/O usage influences the overall memory usage in a cgroup:
docker run --rm -ti alpine
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
dd if=/dev/zero of=/tmp/myfile bs=1M count=100
cat /sys/fs/cgroup/memory/memory.stat
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You should see that the usage_in_bytes value rises by 100MB just from creating a 100MB file. That file hasn't been loaded into anonymous memory by an application, but because it's now in the page cache, the container memory usage appears to be higher.
Deducting the cache figure in memory.stat from the usage_in_bytes shows that the genuine use of anonymous memory hasn't risen.
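In shell form, that deduction is roughly the following (the same cgroup v1 paths as in the commands above):
usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
cache=$(awk '/^cache /{print $2}' /sys/fs/cgroup/memory/memory.stat)
echo $(( usage - cache ))   # approximates the genuine (anonymous) memory use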
The signalFX metric now differs from what is seen when you run docker stats which uses the calculation I have here.
It seems like knowing the page cache use for a container could be useful (though I am struggling to think of when), but knowing it as part of an overall percentage usage of the cgroup isn't useful, since it then disguises your actual RSS memory use.
In a garbage collected application with a max heap size as large as, or larger than, the cgroup memory limit (e.g. the -Xmx parameter for Java, or .NET Core in server mode), the tendency will be for the percentage to get close to 100% and then just hover there, assuming the runtime can see the cgroup memory limit properly.
If you are using the Smart Agent, I would recommend using the docker-container-stats monitor (to which I will make the same modification to exclude cache memory).
Yes, the NiFi Docker container has memory issues; it shoots up after a while and restarts on its own. On the other hand, the non-Docker deployment works absolutely fine.
Details:
Docker:
Run it with a 3GB heap size and immediately after startup it consumes around 2GB. Run some processors and the machine's fan runs heavily, and it restarts after a while.
Non-Docker:
Run it with a 3GB heap size and it takes 900MB and runs smoothly (per jconsole).
I have created some services in Spring Boot. I have 11 fat jars and I deploy them in Docker containers. My concern is that every jar is consuming between 1 and 1.5 GB of RAM without any load. I check the RAM by running:
docker stats containername
At first I thought it was the Java base image, and I tried changing to one that uses Alpine, but nothing changed, so I think the problem is my jar. Is there a way to change the amount of RAM that the jar is using? Or is this behavior normal because every jar has an embedded Tomcat? Or would it be better to put some jars together and deploy them as a WAR, using only one Tomcat for a group of "jars"? Can someone share their experience?
Thanks in advance.
This is how Java behaves in general. The JVM takes as much memory as you give it, and it will perform a process called Garbage collection (What is the garbage collector in Java) to free up space once it decides it should do so.
However, if you don't tell your JVM how much memory it can use, it will use the system defaults, which depend on your system's memory and the number of cores you have. You can verify this using the following command (How is the default Java heap size determined):
java -XX:+PrintFlagsFinal -version | grep HeapSize
On my machine, that's an initial heap memory of 256MiB and a maximum heap size of 4GiB. However, that doesn't mean that your application needs it.
A good way of measuring your memory is by using a monitoring tool like jvisualvm. Additionally, you could use actuator's /health endpoint to see the heap memory usage as well.
Your heap memory usage will normally have a sawtooth pattern (Why a sawtooth shaped graph), where the memory is gradually being used, and eventually freed by the garbage collector.
The memory that is left over after a garbage collection is usually objects that cannot be destroyed because they're still in use. You could see this as your working memory. Now, to configure your -Xmx you'll have to see how your application behaves after trying it out:
Configure it below your normal memory usage and your application will go out of memory, throwing an OutOfMemoryError.
Configure it too low but above your minimal memory usage, and you will see a huge performance hit, due to the garbage collector continuously having to free memory.
Configure it too high and you'll reserve memory you won't need in most of the cases, so wasting too much resources.
From the screenshot above, you can see that my application reserves about 1GiB of memory for heap usage, while it only uses about 30MiB after a garbage collection. That means that it has a way too high -Xmx value, so we could change it to different values and see how the application behaves.
People often prefer to work in powers of 2 (even though there is no limitation, as seen in jvm heap setting pattern). In my case, I need to go with at least 30MiB, since that's the amount of memory my application uses at all times. So that means I could try -Xmx32m, see how it performs, and adjust if it goes out of memory or performs worse.
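A minimal way to try that out and watch the resulting sawtooth from the command line (the jar name is a placeholder, and jstat is just one option; the answer above uses jvisualvm instead):
java -Xmx32m -jar application.jar &   # cap the heap at the value being tested
jstat -gcutil $! 5000                 # print heap/GC utilisation for that PID every 5 seconds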
You can set the memory usage of a Docker container using -e JAVA_OPTS="-Xmx64M -Xms64M".
Dockerfile:
FROM openjdk:8-jre-alpine
# Note: a Dockerfile VOLUME cannot map a host path; bind-mount ./mysql at run time instead
VOLUME /var/lib/mysql
ADD /build/libs/application.jar app.jar
ENTRYPOINT exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar
image run:
docker run -d --name container-name -p 9100:9100 -e JAVA_OPTS="-Xmx512M -Xms512M" imagename:tag
Here I set 512MB of memory usage. You can set 1GB or whatever your requirement is. After running with this, check your memory usage; it will be at most 512MB.
After taking a look at the openjdk Docker Hub image documentation, it seems that you can control the default heap size by setting -XX:MaxRAM=...:
RAM limit is supported by Windows Server containers, but currently JVM cannot detect it. To prevent excessive memory allocations, the -XX:MaxRAM=... option must be specified with a value that is not bigger than a container's RAM limit.
From the oracle docs:
Default Heap Size: Unless the initial and maximum heap sizes are specified on the command line, they are calculated based on the amount of memory on the machine.
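To see what heap the JVM actually derives from a given MaxRAM value inside a container, something like this works (the image is the one used in the Dockerfile above, and 512m is just an example value):
docker run --rm openjdk:8-jre-alpine java -XX:MaxRAM=512m -XX:+PrintFlagsFinal -version | grep -i maxheapsize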