Why is using host memory recommended by Docker

I am investigating a topic which I will call “Docker swarm and memory management”.
It states in this article here that Docker does not recommend using swap memory, but I can’t find (by googling) anywhere that explains the disadvantages of using swap memory in a Docker context.
Can a kind soul enlighten me? :-)

It is normal to disable swap for applications and services that run in production.
Swap works by using the disk as a substitute when RAM is full. That may sound beneficial, but RAM transfers data at roughly 2.1 GB/s for the oldest modules up to 25.6 GB/s for the newest, whereas an HDD averages about 135 MB/s and a newer M.2 SSD about 1.2 GB/s.
As you can see, a service that starts paging to swap slows down dramatically.
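If you want to enforce this per container rather than relying on the host's swap configuration, one option for a standalone container (a minimal sketch; the image name and sizes are only illustrative, and Swarm services configure limits differently) is to set --memory-swap equal to --memory, which gives the container no swap allowance:

docker run -d --memory=512m --memory-swap=512m nginx

With both values equal, the container is capped at 512 MB of RAM and cannot spill over into swap; if it exceeds the limit it is OOM-killed rather than silently slowing down.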

Related

Memory usage of keycloak docker container

When we start the Keycloak container, it uses almost 700 MB of memory right away. I was not able to find more details on how and where it is using this much memory. I have a couple of questions below.
Is there a way to find more details about which processes are taking
more memory inside the container? I was looking into the file
/sys/fs/cgroup/memory/memory.stat inside the container, which didn't give much info.
Is it normal for the Keycloak container to use this much memory, or do we need
to do any tweaking in the configuration file for better performance?
I would also appreciate it if anyone has more findings that can be leveraged to improve the overall performance of the application.
Keycloak is a Java app, so you need to understand the Java/Java VM memory footprint first: What is the memory footprint of the JVM and how can I minimize it?
If you want to analyze Java memory usage, then Java VisualVM is a good starting point.
700 MB of memory for Keycloak is normal. There is an initiative to move Keycloak to Quarkus (https://www.keycloak.org/2020/12/first-keycloak-x-release.adoc), which will also reduce the memory footprint, but it is still a preview and not generally available.
In theory you can switch to a different runtime (e.g. GraalVM), but then you may have different issues - it isn't an officially supported setup.
IMHO it would be over-engineering to try to optimize Keycloak's memory usage; it is a Java app.
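If you still want to see where the memory goes, a quick starting point (a sketch; the container name keycloak is a placeholder, and it assumes the image ships a procps-style ps) is to compare Docker's cgroup-level view with the per-process view inside the container:

docker stats --no-stream keycloak
docker exec keycloak ps aux --sort=-rss | head

docker stats reports the same cgroup accounting that backs memory.stat, while ps sorted by resident set size shows which process holds the memory - for Keycloak it will essentially be the single JVM, which is why JVM-level tools like VisualVM are the next step.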

How much resources to allocate to docker

I have been playing around with Docker for a few months now, and we are now ready to run a few production containers, which got me researching the infrastructure.
That led me to the question: how many resources do I need to allocate to Docker, and how much should be left for the OS?
E.g. my server has 8 cores and 16 GB of RAM. How much of that should I allocate to Docker? I obviously want to allocate the maximum possible, but at what point would the performance of the server itself degrade?
Your question is hard to answer, and here's why: "docker" itself doesn't really require much in the way of resources. On the other hand, the applications that you run using docker will have their own requirements.
For example, if you're hosting a multi-terabyte database in a docker container, you're going to require more memory (and probably a lot more storage) than you would for, say, a single wordpress site.
If you're hosting some sort of video transcoding pipeline in Docker, you might end up consuming a lot more of your available CPU.
The only resource that Docker really consumes on its own is the storage space for images and volumes...and again, how much space you need is entirely dependent on how you're using Docker.
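As a quick check of that consumption, Docker can report how much space its images, containers, and volumes are using:

docker system df

Adding -v to the command gives a per-image and per-volume breakdown.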
It all depends on exactly what you plan on doing with your system.
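In practice you don't allocate a share of the machine to Docker up front; you cap individual containers and leave the remainder to the OS. A hedged sketch with purely illustrative numbers for an 8-core/16 GB host (my-app is a placeholder image):

docker run -d --cpus="6" --memory="12g" my-app

That leaves roughly two cores and 4 GB for the OS and the Docker daemon, but as the answer says, the right split depends entirely on what the containers actually do.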

Redis RSS 2.7GB and increasing. Used memory is only 40MB. why?

The Redis version is 3.2. Used memory is showing as around 43 MB, while the used RSS is about 2.7 GB and increasing. I am not able to understand why this is so.
The number of keys is also not that high:
# Keyspace
db0:keys=4613,expires=62,avg_ttl=368943811
INFO memory
# Memory
used_memory:45837920
used_memory_human:43.71M
used_memory_rss:2903416832
used_memory_rss_human:2.70G
used_memory_peak:2831823048
used_memory_peak_human:2.64G
total_system_memory:3887792128
total_system_memory_human:3.62G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:63.34
mem_allocator:jemalloc-3.6.0
free -h
                     total       used       free     shared    buffers     cached
Mem:                  3.6G       3.2G       429M       152K       125M        92M
-/+ buffers/cache:                3.0G       647M
Swap:                   0B         0B         0B
Restarting the process is not an option on the live production system. I need a way to resolve this memory usage.
Even though the current usage is only 43M, at some point the usage was much higher:
used_memory_peak:2831823048
used_memory_peak_human:2.64G
so it isn’t terribly surprising that your RSS footprint is so high. It’s possible that, even though Redis isn’t using the memory anymore, the allocator just hasn’t released the memory back to the OS yet. Redis v4 has a MEMORY PURGE command to tell the allocator to release the memory it isn’t using, but unfortunately that isn’t available to you on v3.2.
It’s also possible you have an issue with fragmentation. If the memory you’re still using is fragmented across many of the pages that were part of the large allocation, then you are actually using all of those pages. There is an experimental memory defragmenter in v4, but again, that doesn’t really help you.
You said that restarting the server wasn’t an option, but if that is only because you can’t suffer any downtime, you could consider bringing up a slave node, replicating to it, and promoting it to master (see the sketch below). This would fix both the fragmentation and unreleased-memory issues.
But another question is whether the large RSS footprint is a problem for you. It could be slowing Redis down a bit, but have you determined that this is a problem in your system?
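A rough sketch of that replica-promotion approach, with purely illustrative hostnames and the default port (Redis 3.2 still uses the SLAVEOF command; check persistence settings and client failover for your own setup):

# on a fresh node, make it a replica of the current master and let it sync
redis-cli -h new-node slaveof old-master 6379
# wait for "master_link_status:up" in INFO replication, then promote it
redis-cli -h new-node slaveof no one
# finally, repoint your clients or proxy at new-node and retire the old process

The new process starts with a freshly built, unfragmented heap, so both the high RSS and the fragmentation ratio reset.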

Digital Ocean server memory usage above 50%

I am deploying a Flask-based website on a Digital Ocean server. The deployed site consists mainly of static pages, config files, and JSON files.
This morning I found the memory usage has exceeded 51%. Here is the snapshot.
My droplet has 512 MB of memory. Would someone please instruct me how to lower the memory usage? Thanks so much!
Update: I've used the "top" command in the shell as suggested. Here is the snapshot; does it mean that the server itself is eating up that memory?
The memory issue is not related to my application.
I just received the answer from Digital Ocean. Here it is:
Hi there!
Thank you for contacting us! We can help with any memory issues you're having!
Since the Droplet is set up with only 512MB of RAM, once the system and any installed services start, it doesn't take much to push it past 50%. As a result, I don't think what you're seeing is necessarily abnormal under the circumstances. This leaves a few options: the Droplet can be resized and made larger to provide more memory (see https://www.digitalocean.com/community/tutorials/how-to-resize-your-droplets-on-digitalocean), you can add swap space to use part of the Droplet's file system as RAM (see https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04), or you can review the applications and services running on the Droplet and attempt to optimize them to reduce memory use.
We hope this is helpful! Please let us know if there is anything else we can do!
Regards,
I am assuming you are running a Linux server. If so, you can use the top command. It shows you all of the running processes and the system resources they are using. You would then be able to optimize from there.
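If sorting interactively inside top is awkward, an equivalent one-off view (assuming a procps-based Linux, as on most distributions) is:

ps aux --sort=-%mem | head -n 10

That lists the ten processes using the most memory, which makes it easy to tell whether the Flask app, the web server, or something else on the droplet is the real consumer.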
I found out the cause! Linux borrows unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine! If your application, or any other process, needs more memory, Linux will automatically clear the cache and give that memory to your application. Linux does this to speed up the system for you.
If, however, you find yourself needing to clear some RAM quickly to work around another issue, like a VM misbehaving, you can force Linux to nondestructively drop its caches using:
echo 3 | sudo tee /proc/sys/vm/drop_caches
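To see how much memory is genuinely in use versus merely borrowed for caching, the usual check (plain procps free; nothing droplet-specific) is:

free -m

On older systems look at the "-/+ buffers/cache" row, on newer ones at the "available" column; if that figure is healthy, the high "used" number is mostly cache and nothing needs to be cleared.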

How much free memory does Redis need to run?

I'm pretty sure at this stage that Redis needs a certain amount of free memory on the OS in order to run. In the past few weeks, I've seen Redis (Linux) run out of memory with a couple of gigabytes of RAM still free, and on Windows, it refuses to start when you are using a lot of memory on the system but still have a bunch left free, as in the screenshot below.
The error on Windows gives a hint as to why this is happening (although I'm not assuming it's the same on Linux). However, my question is more generic. How much free memory does Redis need in order to operate?
Redis can require between 2x and 3x the size of your data in RAM. The maxheap flag is Windows-specific.
According to the Redis FAQ, without a specific Linux configuration it might need 2x the memory of your dataset. From the document:
Short answer: echo 1 > /proc/sys/vm/overcommit_memory :)
With this configuration, the forked process (responsible for saving the dataset to disk) will be able to share memory pages more easily with the original process, so it won't need that much memory.
You can read more about this here: https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram
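For completeness, that setting can be applied immediately and made persistent across reboots with standard sysctl usage (nothing Redis-specific about the mechanism):

sudo sysctl vm.overcommit_memory=1
echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf

With overcommit enabled, the child process forked for background saves relies on copy-on-write pages instead of needing its own full copy, which is why Redis stops demanding up to twice its dataset size in free memory.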
