Memory usage of Keycloak Docker container

When we start the Keycloak container, it uses almost 700 MB of memory right away. I was not able to find more details on how and where it is using this much memory. I have a couple of questions below.
Is there a way to find more details about which processes are taking more memory inside the container? I was looking at the file /sys/fs/cgroup/memory/memory.stat inside the container, which didn't give much info.
Is it normal for the Keycloak container to use this much memory, or do we need to do any tweaking in the configuration for better performance?
I would also appreciate it if anyone has findings that can be leveraged to improve the overall performance of the application.

Keycloak is a Java app, so you need to understand the Java/JVM memory footprint first: What is the memory footprint of the JVM and how can I minimize it?
If you want to analyze Java memory usage, then Java VisualVM is a good starting point.
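Before reaching for a profiler, a process-level view is often enough to see where the memory goes. A minimal sketch from the host; the container name keycloak is an assumption, and ps may not be installed in every image:
docker stats --no-stream keycloak   # container-level memory and CPU usage
docker top keycloak                 # processes inside the container, as seen by the host
docker exec keycloak ps aux         # per-process detail, if ps is available in the image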
700 MB of memory for Keycloak is normal. There is an initiative to move Keycloak to Quarkus (https://www.keycloak.org/2020/12/first-keycloak-x-release.adoc), which will also reduce the memory footprint, but it is still in preview and not generally available.
In theory you can switch to a different runtime (e.g. GraalVM), but then you may have different issues, as it isn't an officially supported setup.
IMHO, trying to optimize Keycloak's memory usage is over-engineering; it is a Java app.
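If you still want to cap the footprint, the usual knob is the JVM heap rather than Docker itself. A rough sketch, assuming an image that reads the JAVA_OPTS environment variable (the legacy jboss/keycloak image does; check your image's documentation, and note the values below are illustrative, not recommendations):
# Overriding JAVA_OPTS replaces the image's default JVM options, so keep any defaults you need
docker run -d --name keycloak \
  -e JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m" \
  -p 8080:8080 \
  jboss/keycloak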

Related

Why is using host memory recommended by docker

I am investigating a topic, which I will call “Docker swarm and memory management”.
This article states that Docker does not recommend using swap memory, but I can't find (by googling) a place where the disadvantages of using swap memory in a Docker context are explained.
Can a kind soul enlighten me? :-)
It is normal to disable swap memory for applications or services that are used in production.
Swap memory uses the hard disk as a substitute when RAM is full. This may seem beneficial, but RAM bandwidth ranges from about 2.1 GB/s for the oldest modules to 25.6 GB/s for the newest, whereas an HDD averages around 135 MB/s and newer M.2 SSDs reach about 1.2 GB/s.
As we can see, we would greatly slow down the service if we were using swap.
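In the Docker context specifically, the way to keep a container off swap is to set its swap limit equal to its memory limit. A minimal sketch; the image name and the 512m value are just placeholders:
# --memory-swap equal to --memory means the container gets RAM only, no swap
docker run -d --memory=512m --memory-swap=512m nginx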

How to calculate the docker limits

I created my Docker container (a Python Flask app).
How can I calculate what limits to set for memory and CPU?
Are there tools that run performance tests on Docker with different limits and then advise what the best limits are?
With an application already running inside of a container, you can use docker stats to see the current utilization of CPU and memory. While there is little harm in setting CPU limits too low (it will just slow down the app, but it will still run), be careful to keep memory limits above the worst-case scenario. When apps attempt to exceed their memory limit, they will be killed and usually restarted by a restart policy/orchestration tool. If the limit is set too low, you may find your app in a restart loop.
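For example (my-flask-app and the numbers below are placeholders; pick the limits from what docker stats shows under realistic load):
# Observe current usage of the running container
docker stats --no-stream my-flask-app
# Re-run with explicit limits and a restart policy once you know the worst case
docker run -d --name my-flask-app \
  --memory=256m --cpus=0.5 \
  --restart=on-failure \
  my-flask-image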
This is more about the consumption of your specific Flask application; you can probably use the resource module in Python to measure it.
More information here and here.

How much resources to allocate to docker

I have been playing around with Docker for a few months now, and we are now ready to run a few production containers, which got me researching the infrastructure.
It led me to the question: how many resources do I need to allocate to Docker, and how much should be left for the OS?
For example, my server has 8 cores and 16 GB of RAM. How much of that should I allocate to Docker? I obviously want to allocate the maximum possible, but at what point would the performance of the server itself start to degrade?
Your question is hard to answer, and here's why: "docker" itself doesn't really require much in the way of resources. On the other hand, the applications that you run using docker will have their own requirements.
For example, if you're hosting a multi-terabyte database in a Docker container, you're going to require more memory (and probably a lot more storage) than you would for, say, a single WordPress site.
If you're hosting some sort of video transcoding pipeline in Docker, you might end up consuming a lot more of your available CPU.
The only resource that Docker really consumes on its own is the storage space for images and volumes, and again, how much space you need is entirely dependent on how you're using Docker.
It all depends on exactly what you plan on doing with your system.

Digital Ocean server memory usage above 50%

I am deploying a Flask-based website on a Digital Ocean server. The deployed website is mainly static pages, config files, and JSON files.
This morning I found that the memory usage had exceeded 51%. Here is the snapshot.
The server has 512 MB of memory. Would someone please instruct me on how to lower the memory usage? Thanks so much!
Update: I've used the "top" command in the shell as suggested. Here is the snapshot; does it mean that the server itself has eaten up that memory?
The memory issue is not related to my application.
I just received the answer from Digital Ocean. Here it is:
Hi there!
Thank you for contacting us! We can help with any memory issues you're having!
Since the Droplet is set up with only 512MB of RAM, once the system and any installed services start, it doesn't take much to push it past 50%. As a result, I don't think what you're seeing is necessarily abnormal under the circumstances. This leaves a few options: the Droplet can be resized and made larger to provide more memory (see https://www.digitalocean.com/community/tutorials/how-to-resize-your-droplets-on-digitalocean), you can add swap space to use part of the Droplet's file system as RAM (see https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04), or you can review the applications and services running on the Droplet and attempt to optimize them to reduce memory use.
We hope this is helpful! Please let us know if there is anything else we can do!
Regards,
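For the swap option mentioned in the reply above, the linked tutorial boils down to roughly the following on Ubuntu; a sketch assuming a 1 GB swap file, so size it to your droplet:
sudo fallocate -l 1G /swapfile     # create the swap file
sudo chmod 600 /swapfile           # restrict permissions
sudo mkswap /swapfile              # format it as swap
sudo swapon /swapfile              # enable it for the current boot
# To keep it across reboots, add this line to /etc/fstab:
#   /swapfile none swap sw 0 0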
I am assuming you are running a Linux server. If so, you can use the top command. It shows you all of the running processes and the system resources they are using. You would then be able to optimize from there.
I found out the cause! Linux borrows unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine! If your application, or any other process, needs more memory, Linux will automatically clear the cache and give the memory to your application. Linux does this to speed up the system for you.
If, however, you find yourself needing to clear some RAM quickly to work around another issue, like a VM misbehaving, you can force Linux to nondestructively drop caches using:
echo 3 | sudo tee /proc/sys/vm/drop_caches
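You can see this behaviour with free: page-cache memory shows up under buff/cache, while the available column is what actually matters for your application:
free -h
# 'buff/cache' is memory Linux is borrowing for disk caching
# 'available' estimates how much memory applications can still claim without swapping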

How much free memory does Redis need to run?

I'm pretty sure at this stage that Redis needs a certain amount of free memory on the OS in order to run. In the past few weeks, I've seen Redis (Linux) run out of memory with a couple of gigabytes of RAM still free, and on Windows, it refuses to start when you are using a lot of memory on the system but still have a bunch left free, as in the screenshot below.
The error on Windows gives a hint as to why this is happening (although I'm not assuming it's the same on Linux). However, my question is more generic. How much free memory does Redis need in order to operate?
Redis requires between 2x and 3x the size of your data in RAM. The maxheap flag is Windows-specific.
According to the Redis FAQ, without a specific Linux configuration it might need 2x the memory of your dataset. From the document:
Short answer: echo 1 > /proc/sys/vm/overcommit_memory :)
With this configuration, the forked process (responsible for saving the dataset to disk) will be able to share memory pages more easily with the original process, so it won't need that much memory.
You can read more about this here: https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram
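Note that the echo above only lasts until reboot; to apply the setting immediately and persist it, standard sysctl usage looks like this:
sudo sysctl -w vm.overcommit_memory=1                              # apply now
echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf     # persist across reboots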
