Understanding JVM Memory in a Container Environment

I'm running a regular JVM application in containers on GCP.
The container_memory_working_set_bytes metric returns 4GB, while sum(jvm_memory_bytes_used) returns 2GB.
I'm trying to understand which processes are using the remaining 2GB.
In theory, what can consume this memory? How can I investigate it via Prometheus, or from a Linux shell via kubectl exec?

According to these docs, jvm_memory_bytes_used covers only heap memory.
The JVM can consume much more than that, and this has already been asked and answered several times.
One of the best answers that I know of is here: Java using much more memory than heap size (or size correctly Docker memory limit)

I managed to find the difference: it was the JVM committed memory, which can be found via jvm_memory_bytes_committed.
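For anyone who lands here with the same gap to explain, here is a minimal sketch of how to break the difference down from a shell. It assumes a pod named my-app with the JVM running as PID 1 (both placeholders) and a JDK, not just a JRE, in the image; Native Memory Tracking must be enabled on the JVM for the jcmd breakdown to work.

```bash
# Start the JVM with Native Memory Tracking enabled (adds a small overhead):
#   java -XX:NativeMemoryTracking=summary -jar app.jar

# Per-region native memory breakdown (heap, metaspace, threads, code cache, GC...):
kubectl exec -it my-app -- jcmd 1 VM.native_memory summary

# The cgroup accounting that container_memory_working_set_bytes is roughly derived from:
kubectl exec -it my-app -- cat /sys/fs/cgroup/memory/memory.stat
```

On the Prometheus side, comparing sum(jvm_memory_bytes_committed) with sum(jvm_memory_bytes_used) shows how much the JVM has reserved from the OS beyond what it is actively using.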

Related

Memory usage of keycloak docker container

When we start the Keycloak container, it uses almost 700 MB of memory right away. I was not able to find more details on how and where it is using this much memory. I have a couple of questions below.
Is there a way to find more details about which processes are taking more memory inside the container? I was looking into the file /sys/fs/cgroup/memory/memory.stat inside the container, which didn't give much info.
Is it normal for the Keycloak container to use this much memory? Or do we need to do any tweaking in the configuration file for better performance?
I would also appreciate it if anyone has more findings that can be leveraged to improve the overall performance of the application.
Keycloak is a Java app, so you need to understand the Java/JVM memory footprint first: What is the memory footprint of the JVM and how can I minimize it?
If you want to analyze Java memory usage, then Java VisualVM is a good starting point.
700MB for Keycloak memory is normal. There is an initiative to move Keycloak to Quarkus (https://www.keycloak.org/2020/12/first-keycloak-x-release.adoc), which will also reduce the memory footprint - it is still in preview, not generally available.
In theory you can switch to a different runtime (e.g. GraalVM), but then you may have different issues - it isn't an officially supported setup.
IMHO, optimizing your Keycloak memory usage would be over-engineering; it is a Java app.
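That said, if you want to dig in anyway, here is a rough sketch of what you can run inside the container. The container name keycloak is a placeholder, ps must be available in the image, and the jcmd commands require a JDK.

```bash
# Per-process resident memory inside the container (RSS, in KB):
docker exec -it keycloak ps -eo pid,rss,cmd --sort=-rss | head

# Heap occupancy of the Java process (replace <pid> with the PID from ps):
docker exec -it keycloak jcmd <pid> GC.heap_info

# Full native-memory breakdown; requires the JVM to be started with
# -XX:NativeMemoryTracking=summary:
docker exec -it keycloak jcmd <pid> VM.native_memory summary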

Why is using host memory recommended by docker

I am investigating a topic, which I will call “Docker swarm and memory management”.
This article states that Docker does not recommend using swap memory, but I can't find (by googling) anywhere that the disadvantages of using swap memory in a Docker context are explained.
Can a kind soul enlighten me? :-)
It is normal to disable swap memory for ALL applications or services that are used in production.
Swap is based on using the hard disk as a substitute when RAM is full. This may seem beneficial, but RAM bandwidth ranges from about 2.1 GB/s for the oldest modules to 25.6 GB/s for the newest, whereas hard drives average around 135 MB/s for HDDs and about 1.2 GB/s for newer M.2 SSDs.
As we can see, we would greatly slow down the service if we were using swap.
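If you want to make sure a particular container never touches swap, Docker exposes flags for this. A minimal sketch, where my-image is a placeholder:

```bash
# Setting --memory-swap equal to --memory gives the container zero swap:
docker run --memory=512m --memory-swap=512m my-image

# Alternatively, discourage swapping without forbidding it outright:
docker run --memory=512m --memory-swappiness=0 my-image

# On Kubernetes nodes, the kubelet traditionally requires swap to be disabled:
swapoff -a
```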

docker for mac memory usage in com.docker.hyperkit

I'm running Docker Desktop Community 2.1.0.3 on macOS Mojave. I've got 8GB of memory allocated to Docker, which already seems like a lot (that's half my RAM). Somehow, even after exiting and then starting Docker for Mac again - which means no containers are running - Docker is already exceeding the memory allocation by 1GB.
What is the expected memory usage for Docker with no containers running? Is there a memory leak in Docker for Mac or Docker's hyperkit?
As @GabLeRoux has shared in a comment, the "Real Memory" usage is much lower than what you see in the "Memory" column in Activity Monitor.
This document thoroughly explains memory usage on macOS with Docker Desktop; the information below is excerpted from there.
To see the "Real Memory" used by Docker, right-click the column names in Activity Monitor and select "Real Memory". The value in this column is what's currently physically allocated to com.docker.hyperkit.
Alternate answer: I reduced the number of CPUs and the amount of memory Docker is allowed to use in the Docker Resources preferences. My computer is running faster and quieter now.
I just put this in place, so time will tell if this solution works for me. Before, it was making my computer max out on memory. Now it's significantly reduced.
Thank you for the note on Real Memory. I added that column to my Activity Monitor.
UPDATE: It's been a few days now, and my computer runs well below its maximum memory, and my fan runs at a minimum if at all.
I think you shouldn't be using swap while RAM is not full, for SSD health and speed.

Excessive memory use pyspark

I have set up a JupyterHub and configured a pyspark kernel for it. When I open a pyspark notebook (under username Jeroen), two processes are added: a Python process and a Java process. The Java process is assigned 12g of virtual memory (see image). When running a test script on a range of 1B numbers, it grows to 22g. Is that something to worry about when we work on this server with multiple users? And if it is, how can I prevent Java from allocating so much memory?
You don't need to worry about virtual memory usage; resident memory is much more important here (the RES column).
You can control the size of the JVM heap using the --driver-memory option passed to Spark (if you use the pyspark kernel on JupyterHub, you can find it in the environment under the PYSPARK_SUBMIT_ARGS key). This is not exactly the memory limit for your application (there are other memory regions on the JVM), but it is very close.
So, when you have a multi-user setup, you should teach your users to set an appropriate driver memory (the minimum they need for their processing) and to shut down notebooks after they finish work.
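For example, a sketch of capping the driver heap through PYSPARK_SUBMIT_ARGS; the 2g value and the kernel path are just examples for your own setup:

```bash
# Cap the driver JVM heap at 2 GB before the kernel launches
# ("pyspark-shell" must stay at the end of the string):
export PYSPARK_SUBMIT_ARGS="--driver-memory 2g pyspark-shell"

# Or bake it into the kernel definition, e.g.:
#   ~/.local/share/jupyter/kernels/pyspark/kernel.json
#   "env": { "PYSPARK_SUBMIT_ARGS": "--driver-memory 2g pyspark-shell" }
```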

How much free memory does Redis need to run?

I'm pretty sure at this stage that Redis needs a certain amount of free memory on the OS in order to run. In the past few weeks, I've seen Redis (on Linux) run out of memory with a couple of gigabytes of RAM still free, and on Windows it refuses to start when a lot of memory is in use on the system but a bunch is still free, as in the screenshot below.
The error on Windows gives a hint as to why this is happening (although I'm not assuming it's the same on Linux). However, my question is more generic: how much free memory does Redis need in order to operate?
Redis requires between 2x and 3x the size of your data in RAM. The maxheap flag is Windows-specific.
According to the Redis FAQ, without a specific Linux configuration it might need 2x the memory of your dataset. From the document:
Short answer: echo 1 > /proc/sys/vm/overcommit_memory :)
With this configuration, the forked process (responsible for saving the dataset to disk) will be able to share memory pages more easily with the original process, so it won't need that much memory.
You can read more about this here: https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram
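For reference, a minimal sketch of applying that setting both immediately and persistently; run as root, and note that /etc/sysctl.conf is the conventional location:

```bash
# Apply immediately:
echo 1 > /proc/sys/vm/overcommit_memory

# Persist across reboots:
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
sysctl -p
```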
