Check the size of all keys in bytes in a Redis server (memory)

How do I check the memory used by all the keys in redis-server?
I started redis-server with the appendonly yes option. To check the size of an individual key I am using the following commands:
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> memory usage foo
(integer) 50
How do I get the memory usage of all the existing keys in Redis?

You can find this information using the following command:
INFO memory
This returns detailed information about memory usage, and in particular the value:
used_memory_dataset: the size in bytes of the dataset
You also get interesting values from the MEMORY STATS command, including the dataset size described above (dataset.bytes) and the total number of keys (keys.count).
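If you want a per-key breakdown rather than just these totals, a rough sketch from the shell (assuming Redis 4.0+ for MEMORY USAGE, and key names without embedded spaces or newlines) is to scan the keyspace and query each key. This is fine on small databases but slow if you have millions of keys:
# List every key with its memory usage in bytes, largest first (top 20 shown).
redis-cli --scan | while read -r key; do
    printf '%s\t%s\n' "$key" "$(redis-cli memory usage "$key")"
done | sort -k2 -rn | head -20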
Also, and probably my favorite approach: when I work with Redis I use Redis Insight to look at all the metrics during development.
Regards
Tug

Related

Is it necessary to increase VM memory for a Redis instance?

I'm currently using a 4-CPU, 8 GB memory virtual machine in GCP, and I'm also using the RediSearch Docker container.
I have 47.5 million hash keys and I estimate they take over 35 GB. So if I import all of my data with redis-cli in the VM, will it really need more than 35 GB of memory?
Also, I already tried importing 7.5 million keys and memory utilization is about 70%.
If your cache needs 35 GB, then your cache will need 35 GB.
The values you gave are consistent: if 47.5M keys use 35 GB, then 7.5M will use about 5.5 GB (which is also roughly the 70% of 8 GB you observed).
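As a quick sanity check of that extrapolation (plain linear scaling, nothing Redis-specific):
# 7.5M out of 47.5M keys, at roughly 35 GB total
echo "scale=2; 7.5 * 35 / 47.5" | bc
5.52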
If you don't want to change the VM's specs, you can add swap space at the OS level so that part of the dataset can spill over to the VM's cold storage.
Note that you have to be careful using swap; depending on the hardware it can be a pretty bad idea. Using anything but NVMe drives is bad (even SSDs), as you can see here:
Benchmark with SSDs
Benchmark with NVMes

Redis Memory Optimization suggestions

I have a Redis master and 2 slaves. All 3 are currently on the same Unix server. The memory used by the 3 instances is approximately 3.5 GB, 3 GB, and 3 GB. There are about 275,000 keys in the Redis DB. About 4,000 are hashes. One set has 100,000 values. One list has 275,000 entries; it is a list of hashes and sets. The server has a total of 16 GB of memory, of which 9.5 GB is currently used. Persistence is currently off; the RDB file is written once a day by a forced background save. Please provide any suggestions for optimization. The max-ziplist configuration is currently at its defaults.
Optimizing Hashes
First, let's look at the hashes. Two important questions - how many elements are in each hash, and what is the largest value in those hashes? A hash uses the memory-efficient ziplist representation if the following condition is met:
len(hash) < hash-max-ziplist-entries && length-of-largest-field(hash) < hash-max-ziplist-value
You should increase these two settings in redis.conf based on your data, but don't increase them to more than 3-4 times the defaults.
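To check which representation a given hash actually uses, ask for its encoding (user:1001 is just a placeholder key name); a hash within the limits reports ziplist, a converted one reports hashtable, and the limits can also be raised at runtime:
127.0.0.1:6379> object encoding user:1001
"ziplist"
127.0.0.1:6379> config set hash-max-ziplist-entries 2048
OK
127.0.0.1:6379> config set hash-max-ziplist-value 10000
OK
Keep in mind that a hash already converted to hashtable is not converted back just because the limits were raised; it is re-encoded only when the key is rewritten, for example after an RDB reload or a restart.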
Optimizing Sets
A set with 100,000 elements cannot be optimized, unless you provide additional details on your use case. Some general strategies though -
Maybe use a HyperLogLog - Are you using the set to count unique elements? If the only commands you run are sadd and scard, maybe you should switch to a HyperLogLog (see the example below).
Maybe use a Bloom filter - Are you using the set to check for the existence of a member? If the only commands you run are sadd and sismember, maybe you should implement a Bloom filter and use it instead of the set.
How big is each element? - Set members should be small. If you are storing big objects, you are perhaps doing something incorrect.
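To make the HyperLogLog suggestion concrete, here is what the switch looks like at the command level (visitors and visitors:hll are placeholder key names; PFCOUNT is an estimate, typically accurate to within about 1%, and the HyperLogLog itself never grows beyond roughly 12 KB):
127.0.0.1:6379> sadd visitors user:1 user:2 user:3
(integer) 3
127.0.0.1:6379> scard visitors
(integer) 3
127.0.0.1:6379> pfadd visitors:hll user:1 user:2 user:3
(integer) 1
127.0.0.1:6379> pfcount visitors:hll
(integer) 3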
Optimizing Lists
A single list with 275,000 elements seems wrong. It is going to be slow to access elements in the middle of the list. Are you sure a list is the right data structure for your use case?
Change list-compress-depth to 1 or higher. Read about this setting in redis.conf - there are tradeoffs. But for a list of 275000 elements, you certainly want to enable compression.
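The compression depth can be set in redis.conf or changed at runtime; with a depth of 1, every quicklist node except the head and the tail is compressed:
127.0.0.1:6379> config set list-compress-depth 1
OK
127.0.0.1:6379> config get list-compress-depth
1) "list-compress-depth"
2) "1"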
Tools
Use the open source redis-rdb-tools to analyze your data set (disclaimer: I am the author of this tool). It will tell you how much memory each key is taking and help you decide where to concentrate your efforts.
You can also refer to this memory optimization cheat sheet.
What else?
You have provided very little details on your use case. The best savings come from picking the right data structure for your use case. I'd encourage you to update your question with more details on what you are storing within the hash / list / set.
We applied the following configuration and it helped reduce the memory footprint by 40%:
list-max-ziplist-entries 2048
list-max-ziplist-value 10000
list-compress-depth 1
set-max-intset-entries 2048
hash-max-ziplist-entries 2048
hash-max-ziplist-value 10000
Also, we increased the RAM on the Linux server and that helped us with the Redis memory issues.

How to determine the size of a single key in Redis

How can I find the size (in KB) of a particular key in Redis?
I'm aware of the INFO memory command, but it gives the combined size of the Redis instance, not that of a single key.
I know this is an old question but, just for the record, Redis implemented a MEMORY USAGE <key> command in version 4.0.0.
The output is the number of bytes required to store the key in RAM.
Reference: https://redis.io/commands/memory-usage
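For aggregate types (hashes, lists, sets, sorted sets) the command samples only a few nested elements by default; passing SAMPLES 0 makes it measure every element, which is more accurate but slower on big keys (mylist is a placeholder key and the byte counts below are illustrative):
127.0.0.1:6379> memory usage mylist
(integer) 4096
127.0.0.1:6379> memory usage mylist samples 0
(integer) 4480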
You currently (v2.8.23 & v3.0.5) can't.
The serializedlength from DEBUG OBJECT (as suggested by @Kumar) is not indicative of the value's true size in RAM - Redis employs multiple "tricks" to save RAM, and on top of that you also need to account for the data structure's overhead (and perhaps some of Redis' global dictionary as well).
The good news is that there has been talk on the topic in the OSS project and it is likely that in the future memory introspection will be greatly improved.
Note: I started (and stopped for the time being) a series on the topic - here's the 1st part: https://redislabs.com/blog/redis-ram-ramifications-i
DEBUG OBJECT <key> reveals something like the serializedlength of the key, which was in fact what I was looking for... For a whole database you need to aggregate the values for KEYS *, which shouldn't be too difficult with a scripting language of your choice... The bad thing is that redis.io doesn't really have a lot of information about DEBUG OBJECT.
Why not try
APPEND {your-key} ""
This will append nothing to the existing value but return the current length.
If you just want to get the length of a string key's value: STRLEN
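Continuing the foo/bar example from the first question, both of these return the length of the stored string in bytes, which is not the same as the RAM footprint reported by MEMORY USAGE:
127.0.0.1:6379> set foo bar
OK
127.0.0.1:6379> append foo ""
(integer) 3
127.0.0.1:6379> strlen foo
(integer) 3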

How to evaluate memory footprint of redis key-value in rails app?

I want to store some users' info in Redis, one key per user. The data type used is key-value.
For example:
$redis.set("user_info:12345", #{some data})
Is there any way to evaluate the memory footprint?
I think both the Redis key and the value consume memory; how can I find out how much memory is consumed?
As of v4, we have the MEMORY USAGE command that does a much better job, reflecting the entire RAM consumption of key names, their values and all associated overheads of the internal data structures.
$redis.memory :usage, key_name
DEBUG OBJECT output is not a reliable way to measure the memory consumption of a key in Redis - the serializedlength field is the number of bytes needed to persist the object, not the actual footprint in memory, which includes various administrative overheads on top of the data itself.
To illustrate this, I created a set of 2.5 million integers and compared the output of each:
> r.memory :usage, 'testkey'
=> 132003825
> r.debug :object, 'testkey'
=> "Value at:0x7fe739e09a00 refcount:1 encoding:hashtable serializedlength:12404474 lru:729393 lru_seconds_idle:68"
MEMORY USAGE reports 132,003,825 bytes (126 MiB)
DEBUG OBJECT reports only 12,404,474 bytes (12 MiB) 👎
You can get the serialized length of a key's value with the DEBUG OBJECT command:
$redis.set("hello", "world")
$redis.debug("object", "hello")
# => "Value at:0x7f86f350a8d0 refcount:1 encoding:raw serializedlength:6 lru:2421685 lru_seconds_idle:13"
And if you want to extract that number, you can use this regex: /serializedlength:(\d+)/
size = $redis.debug("object", "hello").match(/serializedlength:(\d+)/)[1].to_i
# => 6
If you need information on the global used and available memory on the server, use $redis.info(:memory).

sidekiq-pro batches don't appear to release redis memory after batches complete

We are using Sidekiq Pro 1.7.3 and Sidekiq 3.1.4, Ruby 2.0, Rails 4.0.5 on Heroku with the RedisGreen add-on with 1.75 GB of memory.
We run a lot of Sidekiq batch jobs, probably around 2 million jobs a day. What we've noticed is that Redis memory steadily increases over the course of a week. I would have expected that when the queues are empty and no workers are busy, Redis would have low memory usage, but it appears to stay high. I'm forced to do a FLUSHDB pretty much every week or so because we approach our Redis memory limit.
I've had a series of exchanges with RedisGreen and they suggested I reach out to the Sidekiq community. Here are some stats from RedisGreen:
Here's a quick summary of RAM use across your database:
The vast majority of keys in your database are simple values taking up 2 bytes each.
200MB is being consumed by "queue:low", the contents of your low-priority sidekiq queue.
The next largest key is "dead", which occupies about 14MB.
And:
We just ran an analysis of your database - here is a summary of what we found in 23129 keys:
18448 strings with 1048468 bytes (79.76% of keys, avg size 56.83)
6 lists with 41642 items (00.03% of keys, avg size 6940.33)
4660 sets with 3325721 members (20.15% of keys, avg size 713.67)
8 hashes with 58 fields (00.03% of keys, avg size 7.25)
7 zsets with 1459 members (00.03% of keys, avg size 208.43)
It appears that you have quite a lot of memory occupied by sets. For example - each of these sets has more than 10,000 members and occupies nearly 300 KB:
b-3819647d4385b54b-jids
b-3b68a011a2bc55bf-jids
b-5eaa0cd3a4e13d99-jids
b-78604305f73e44ba-jids
b-e823c15161b02bde-jids
These look like Sidekiq Pro "batches". It seems like some of your batches are getting filled up with very large numbers of jobs, which is causing the additional memory usage that we've been seeing.
Let me know if that sounds like it might be the issue.
Don't be afraid to open a Sidekiq issue or email prosupport # sidekiq.org directly.
Sidekiq Pro Batches have a default expiration of 3 days. If you set the Batch's expires_in setting longer, the data will sit in Redis longer. Unlike jobs, batches do not disappear from Redis once they are complete. They need to expire over time. This means you need enough memory in Redis to hold N days of Batches, usually not a problem for most people, but if you have a busy Sidekiq installation and are creating lots of batches, you might notice elevated memory usage.
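A quick way to check whether a given batch is actually scheduled to expire (rather than sitting in Redis forever) is to look at the TTL of one of the b-*-jids keys from the analysis above; a positive number of seconds means it will be evicted on its own, while -1 means no expiry is set (the value shown here is illustrative):
127.0.0.1:6379> ttl b-3819647d4385b54b-jids
(integer) 259200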
