Redis memory stats: what is the difference between these three?

When you run a Redis process on a machine, redis-cli is available, so you can get some information about the process:
❯ redis-cli
127.0.0.1:6379> info memory
The output contains three values I'm interested in: used_memory_rss, used_memory_peak, and maxmemory. As far as I know, used_memory_rss is the actual memory that Redis is consuming. I also understood that once Redis takes memory, it doesn't release (free) it back to the OS (there is no GC) unless you restart the process.
Then how is it possible that used_memory_peak is bigger than used_memory_rss?
And what is maxmemory?

maxmemory is the value of the corresponding configuration directive; it is well described in the INFO command documentation, along with the memory optimization tips.
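As a quick illustration (the values shown here are the defaults, not taken from the question), you can read that directive straight from a running instance:
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "0"
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
A value of 0 means no limit is configured, so Redis will keep allocating until the OS runs out of memory.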
Regarding Redis never releasing memory: what makes you think so? The documentation says something different:
Redis will not always free up (return) memory to the OS when keys are removed... This happens because the underlying allocator can't easily release the memory. For example often most of the removed keys were allocated in the same pages as the other keys that still exist.
"Not always" is not the same as "never" :) Whether it releases memory or not strongly depends on what allocator is being used and what data access patterns you have.
For example, there is the MEMORY PURGE command (it works only with jemalloc), which can trigger some memory to be released to the OS:
127.0.0.1:6379> info memory
# Memory
used_memory:1312328
used_memory_human:1.25M
used_memory_rss:7118848
used_memory_rss_human:6.79M
...
127.0.0.1:6379> memory purge
OK
127.0.0.1:6379> info memory
# Memory
used_memory:1312328
used_memory_human:1.25M
used_memory_rss:6041600
used_memory_rss_human:5.76M
...
(note that used_memory_rss was slightly reduced, which means it can go below peak usage under certain lucky circumstances)

Related

Redis RSS is 2.7GB and increasing. Used memory is only 40MB. Why?

Redis version is 3.2. Used memory shows as around 43MB, while used RSS is about 2.7G and increasing. I'm not able to understand why this is so.
The number of keys is also not that high:
# Keyspace
db0:keys=4613,expires=62,avg_ttl=368943811
INFO memory
# Memory
used_memory:45837920
used_memory_human:43.71M
used_memory_rss:2903416832
used_memory_rss_human:2.70G
used_memory_peak:2831823048
used_memory_peak_human:2.64G
total_system_memory:3887792128
total_system_memory_human:3.62G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
mem_fragmentation_ratio:63.34
mem_allocator:jemalloc-3.6.0
free -h
                     total       used       free     shared    buffers     cached
Mem:                  3.6G       3.2G       429M       152K       125M        92M
-/+ buffers/cache:                3.0G       647M
Swap:                   0B         0B         0B
Restarting the process is not an option on the live production system. I need a way to resolve this memory usage.
Even though the current usage is only 43M, at some point the usage was much higher:
used_memory_peak:2831823048
used_memory_peak_human:2.64G
so it isn't terribly surprising that your RSS footprint is so high. It's possible that, even though Redis isn't using the memory anymore, the allocator just hasn't released the memory back to the OS yet. Redis v4 has a MEMORY PURGE command to tell the allocator to release the memory it isn't using, but unfortunately that isn't available to you on v3.2.
It’s also possible you have an issue with fragmentation. If the memory you’re still using is fragmented across many of the pages that were part of the large allocation, then you are actually using all of those pages. There is an experimental memory defragmenter in v4, but again, that doesn’t really help you.
You said that restarting the server wasn't an option, but if that is only because you can't suffer any downtime, you could consider bringing up a slave node, replicating to it, and then promoting it to master. This would fix both the fragmentation and the unreleased-memory issues.
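A rough sketch of that promotion flow, assuming plain master/replica replication (the hostnames are placeholders, and on 3.2 the command is still SLAVEOF rather than REPLICAOF):
redis-cli -h new-node slaveof old-master 6379    # start replicating the full dataset
redis-cli -h new-node info replication           # wait until master_link_status:up
redis-cli -h new-node slaveof no one             # promote the replica to a standalone master
After that you would repoint clients at new-node and retire the old process.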
But another question is whether the large RSS footprint is a problem for you. It could be slowing Redis down a bit, but have you determined that this is a problem in your system?

How much free memory does Redis need to run?

I'm pretty sure at this stage that Redis needs a certain amount of free memory on the OS in order to run. In the past few weeks, I've seen Redis (Linux) run out of memory with a couple of gigabytes of RAM still free, and on Windows, it refuses to start when you are using a lot of memory on the system but still have a bunch left free, as in the screenshot below.
The error on Windows gives a hint as to why this is happening (although I'm not assuming it's the same on Linux). However, my question is more generic. How much free memory does Redis need in order to operate?
Redis requires between 2x and 3x the size of your data in RAM. The maxheap flag is Windows-specific.
According to the Redis FAQ, without a specific Linux configuration it might need 2x the memory of your dataset. From the document:
Short answer: echo 1 > /proc/sys/vm/overcommit_memory :)
With this configuration, the forked process (responsible for saving the dataset to disk) will be able to share memory pages more easily with the original process, so it won't need that much memory.
You can read more about this here: https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram
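A minimal sketch of applying that setting and keeping it across reboots (assuming a standard sysctl setup; adjust the file path for your distribution):
sudo sysctl vm.overcommit_memory=1                              # apply immediately
echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf  # persist for the next boot
With overcommit enabled, the child process forked for a background save shares pages copy-on-write with the parent, so only the pages written during the save cost extra memory.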

Choosing redis maxmemory size and BGSAVE memory usage

I am trying to find out what a safe setting for 'maxmemory' would be in the following situation:
write-heavy application
8GB RAM
let's assume other processes take up about 1GB
this means that the redis process' memory usage may never exceed 7GB
memory usage doubles on every BGSAVE event, because:
In the redis docs the following is said about the memory usage increasing on BGSAVE events:
If you are using Redis in a very write-heavy application, while saving an RDB file on disk or rewriting the AOF log Redis may use up to 2 times the memory normally used.
the maxmemory limit is roughly compared to 'used_memory' from redis-cli INFO (as is explained here) and does not take other memory used by redis into account
Am I correct that this means that the maxmemory setting should, in this situation, be set no higher than (8GB - 1GB) / 2 = 3.5GB?
If so, I will create a pull request for the redis docs to reflect this more clearly.
I would recommend a limit of 3GB in this case. Yes, the docs are pretty much correct, and running a BGSAVE will double the memory requirements for a short time. However, I prefer to reserve 2GB of memory for the system, or, for a persisting master, to cap Redis at no more than 40% of total memory.
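As a hedged sketch, that recommendation could be expressed in redis.conf roughly like this (the eviction policy line is an assumption; a pure data store might keep the default noeviction instead):
maxmemory 3gb
maxmemory-policy allkeys-lru    # assumption: evict least-recently-used keys at the limit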
You indicate you have a very write-heavy application. In this case I would highly recommend having a second server do the save operations. I've found that during heavy writes combined with a BGSAVE, the response time to clients can get high. It isn't Redis per se causing it, but the responsiveness of the server itself; this is especially true for virtual machines. Under this setup you would use the second server to replicate from the primary and save to disk, while the first remains responsive.
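A hedged sketch of that split, with placeholder hostnames (slaveof applies to pre-5.0 versions; newer releases call it replicaof):
# on the write-heavy master (redis.conf): disable RDB snapshots
save ""
# on the persisting replica (redis.conf): replicate from the master and snapshot there
slaveof master-host 6379
save 900 1
save 300 10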

ejabberd: Memory difference between erlang and Linux process

I am running an ejabberd 2.1.10 server on Linux (Erlang R14B03).
I am creating XMPP connections in batches using a tool and sending messages randomly.
ejabberd is accepting most of the connections.
Even though the number of connections keeps increasing, the value of erlang:memory(total) stays within a range.
But if I check the memory usage of the ejabberd process with the top command, I can see that its memory usage is increasing continuously, and the gap between erlang:memory(total) and the memory reported by top keeps growing.
Please let me know the reason for this difference. Is it a memory leak? Is there any way I can debug this issue?
If it is not a leak, what is the additional memory (the difference between erlang:memory and top) used for?
A memory leak in either the Erlang VM itself or in the non-Erlang parts of ejabberd would have the effect you describe.
ejabberd contains some NIFs - there are 10 ".c" files in ejabberd-2.1.10.
Was your ejabberd configured with "--enable-nif"?
If so, try comparing with a version built using "--disable-nif", to see if it has different memory usage behaviour.
Other possibilities for debugging include using Valgrind for detecting and locating the leak. (I haven't tried using it on the Erlang VM; there may be a number of false positives, but with a bit of luck the leak will stand out, either by size or by source.)
A final note: the Erlang process's heap may have become fragmented. The gaps between allocations would count towards the OS process's size; it doesn't look like they are included in erlang:memory(total).

Redis - monitoring memory usage

I am currently testing insertion of keys into a Redis database (running locally).
I have more than 5 million keys and only 4GB of RAM, so at some point I hit the RAM limit and swap starts filling up (and my PC grinds to a halt)...
My problem: how can I monitor memory usage on the machine hosting the Redis database, so that I can be alerted and stop inserting keys before it is too late?
Thanks.
Memory is a critical resource for Redis performance. Used memory is the total number of bytes allocated by Redis through its allocator (standard libc, jemalloc, or an alternative allocator such as tcmalloc).
You can collect all memory utilization metrics for a Redis instance by running "info memory".
127.0.0.1:6379> info memory
# Memory
used_memory:1007280
used_memory_human:983.67K
used_memory_rss:2002944
used_memory_rss_human:1.91M
used_memory_peak:1008128
used_memory_peak_human:984.50K
Sometimes, when Redis is configured with no max memory limit, memory usage eventually reaches system memory and the server starts throwing "Out of Memory" errors. At other times, Redis is configured with a max memory limit but a noeviction policy, which causes the server not to evict any keys and thus blocks all writes until memory is freed. The solution to both problems is to configure Redis with a max memory limit and an eviction policy; the server then starts evicting keys according to that policy as memory usage approaches the limit.
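A minimal sketch of that configuration from redis-cli (the 3gb limit and the allkeys-lru policy are illustrative assumptions, not values tied to this question):
127.0.0.1:6379> config set maxmemory 3gb
OK
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
OK
127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "3221225472"
The same two directives can be put in redis.conf so they survive a restart.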
Memory RSS (Resident Set Size) is the number of bytes that the operating system has allocated to Redis. If the ratio of used_memory_rss to used_memory is greater than ~1.5, it signifies memory fragmentation. Fragmented memory can be recovered by restarting the server.
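For example, with the INFO output above: used_memory_rss / used_memory = 2002944 / 1007280 ≈ 1.99, which is above the ~1.5 threshold (though at dataset sizes this small the ratio is often dominated by allocator overhead rather than real fragmentation).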
Concerning memory usage, I'd advise you to look at the redis.io FAQ and this article about using Redis as an LRU cache.
You can either cap the memory usage via the maxmemory configuration setting, in which case once the memory limit is reached all write requests will fail with an error, or you could set the maxmemory-policy to allkeys-lru, for example, to start overwriting the least recently used data on the server with stuff you currently need, etc. For most use cases you have enough flexibility to handle such problems through proper config.
My advice is to keep things simple and manage this issue through configuration of the redis server rather than introducing additional complexity through os-level monitoring or the like.
There is a good Unix utility named vmstat. It is like top, but for the command line, so you can check memory usage and take action before your system grinds to a halt. You can also use ps v PID to get this info about a specific process; Redis's PID can be retrieved with: pidof redis-server
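A quick sketch of those commands (assuming the process is named redis-server on your machine):
vmstat -s | head -n 5            # system-wide memory summary, one figure per line
ps v $(pidof redis-server)       # virtual-memory stats for the Redis process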
