Redis consumes max memory on large DEL & HMSET operations - ruby-on-rails

Issue: After a scheduled clear and rebuild of specific Redis keys, the worker dynos don't release memory (until the dyno restarts).
I am experiencing an issue where my Heroku worker dynos hit 95%-100% memory usage during a delete and rebuild of about 4,000 keys. I have a scheduled rebuild that starts every day at 4:00am. Based on the logs, the DEL of the keys plus the rebuild takes about ~1,490 seconds.
Jun 29 04:01:41 app app/worker.2: 4 TID-...io8w RedisWorker JID-...cd2a7 INFO: start
Jun 29 04:06:28 app app/worker.1: 4 TID-...mtks RedisWorker JID-...bf170 INFO: start
Jun 29 04:26:32 app app/worker.1: 4 TID-...mtks RedisWorker JID-...bf170 INFO: done: 1203.71 sec
Jun 29 04:26:33 app app/worker.2: 4 TID-...io8w RedisWorker JID-...cd2a7 INFO: done: 1490.938 sec
Memory hovers at max usage until the dyno restarts (which is scheduled) or we deploy. Example image: Heroku Memory Usage
At a high level, this is what gets triggered at 4am:
def full_clear
  RedisWorker.delete_keys("key1*")
  RedisWorker.delete_keys("key2*")
  RedisWorker.delete_keys("key3*")
  self.build
  return true
end

def build
  ... rebuilds keys based on models ...
  return true
end

def self.delete_keys(regex)
  $redis.scan_each(match: regex) do |key|
    $redis.del(key)
  end
end
What I have researched so far, and my thoughts:
After Redis DEL is invoked, why isn't the memory released?
Could there be a better implementation of finding all matching keys and doing a batch delete? (See the sketch after this list.)
I am using the defaults for Puma; would configuring Puma + Sidekiq to better match our resources be the best first step? Deploying Rails Applications with the Puma Web Server. After a restart the memory is only about 30%-40% until the next full rebuild (even during high usage of HMSETs).
I noticed that my ObjectSpace counts are considerably lower after the dyno gets restarted, for the rest of the day until the next scheduled full_rebuild.
Any thoughts on how I can figure out what's causing the dynos to hang on to memory? It seems isolated to Sidekiq / the worker dynos used to rebuild Redis.
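For the batch-delete question, a minimal sketch assuming the same global $redis client (redis-rb's scan_each iterates with non-blocking SCAN; note that match takes a glob pattern, not a regex):

def self.delete_keys(pattern)
  # Collect keys in slices and issue one DEL per slice instead of one per key.
  $redis.scan_each(match: pattern, count: 1000).each_slice(500) do |keys|
    $redis.del(*keys)
    # On Redis 4+ (and a redis-rb version that supports it), $redis.unlink(*keys)
    # frees the memory asynchronously instead of blocking the server.
  end
end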

Solution:
I installed New Relic to see if there was potential memory bloat. One of our most-used function calls was doing an N+1 query. We fixed the N+1 query and watched our 60k calls in New Relic drop to ~5k.
GC also wasn't collecting because it wasn't hitting our threshold. There may be room for GC tuning later on, but for now our immediate issue has been resolved.
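For illustration only (the actual models aren't shown here), the usual shape of an N+1 fix is eager loading the association; Article and author are hypothetical names:

# Before: one query for the articles, plus one query per article for its author
Article.all.each { |article| article.author.name }

# After: includes loads the authors up front, so the loop triggers no extra queries
Article.includes(:author).each { |article| article.author.name }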
I also reached out to Heroku for their thoughts and this is what was discussed:
Memory usage on the dyno is managed by the Ruby VM, and it is likely that you're keeping too much information in memory during the key rebuild. You should look into freeing the memory used to generate the key-values after the data has been added to Redis.
Spending time fixing your N+1 queries will definitely help!
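A sketch of what that advice might look like in the rebuild step (the model, key, and field names are assumptions): write each batch to Redis inside the block so the Ruby objects backing it become eligible for GC as soon as the block returns.

SourceModel.find_in_batches(batch_size: 500) do |batch|
  batch.each do |record|
    $redis.hmset("key1:#{record.id}", "name", record.name, "score", record.score)
  end
  # nothing from this batch is referenced after the block, so GC can reclaim it
end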

Related

Restart Heroku dynos when their RAM is exceeded

I have a memory leak problem with my server (which is written in Ruby on Rails).
I want to implement a temporary solution that restarts the dynos automatically when their memory usage exceeds a limit. What is the best way to do this? And is it risky?
There is a great solution for it if you're using Puma as a server.
https://github.com/schneems/puma_worker_killer
You can restart your server when RAM usage exceeds some threshold. For example:
PumaWorkerKiller.config do |config|
  config.ram = 1024 # mb
  config.frequency = 5 # seconds
  config.percent_usage = 0.98
  config.rolling_restart_frequency = 12 * 3600 # 12 hours in seconds
end
PumaWorkerKiller.start
Also, to prevent data corruption and other funny issues in your DB, I would suggest making sure you are covered with atomic transactions.
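For the atomic-transactions point, a minimal ActiveRecord sketch (the model names are hypothetical): if a worker is killed partway through, the whole block rolls back instead of leaving half-applied writes.

ActiveRecord::Base.transaction do
  order.update!(status: "paid")
  Payment.create!(order: order, amount: order.total)
end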

sidekiq-pro batches don't appear to release redis memory after batches complete

We are using sidekiq pro 1.7.3 and sidekiq 3.1.4, Ruby 2.0, Rails 4.0.5 on heroku with the redis green addon with 1.75G of memory.
We run a lot of sidekiq batch jobs, probably around 2 million jobs a day. What we've noticed is that Redis memory steadily increases over the course of a week. I would have expected that when the queues are empty and no workers are busy, Redis would have low memory usage, but it stays high. I'm forced to do a flushdb roughly every week because we approach our Redis memory limit.
I've had a series of correspondence with Redisgreen and they suggested I reach out to the sidekiq community. Here are some stats from redisgreen:
Here's a quick summary of RAM use across your database:
The vast majority of keys in your database are simple values taking up 2 bytes each.
200MB is being consumed by "queue:low", the contents of your low-priority sidekiq queue.
The next largest key is "dead", which occupies about 14MB.
And:
We just ran an analysis of your database - here is a summary of what we found in 23129 keys:
18448 strings with 1048468 bytes (79.76% of keys, avg size 56.83)
6 lists with 41642 items (00.03% of keys, avg size 6940.33)
4660 sets with 3325721 members (20.15% of keys, avg size 713.67)
8 hashs with 58 fields (00.03% of keys, avg size 7.25)
7 zsets with 1459 members (00.03% of keys, avg size 208.43)
It appears that you have quite a lot of memory occupied by sets. For example - each of these sets has more than 10,000 members and occupies nearly 300KB:
b-3819647d4385b54b-jids
b-3b68a011a2bc55bf-jids
b-5eaa0cd3a4e13d99-jids
b-78604305f73e44ba-jids
b-e823c15161b02bde-jids
These look like Sidekiq Pro "batches". It seems like some of your batches are getting filled up with very large numbers of jobs, which is causing the additional memory usage that we've been seeing.
Let me know if that sounds like it might be the issue.
Don't be afraid to open a Sidekiq issue or email prosupport # sidekiq.org directly.
Sidekiq Pro Batches have a default expiration of 3 days. If you set the Batch's expires_in setting longer, the data will sit in Redis longer. Unlike jobs, batches do not disappear from Redis once they are complete. They need to expire over time. This means you need enough memory in Redis to hold N days of Batches, usually not a problem for most people, but if you have a busy Sidekiq installation and are creating lots of batches, you might notice elevated memory usage.

Neo4j In memory configurations, multithreading, and slow writes

How do I improve performance when writing to Neo4j? I currently have Neo4j set up on a server and am running it in embedded mode. Based on configurations I've found online, I believe my settings store all the content of my graph database in memory:
neostore.nodestore.db.mapped_memory=0
neostore.relationship.db.mapped_memory=0
neostore.propertystore.db.mapped_memory=0
neostore.propertystore.db.strings.mapped_memory=0
neostore.propertystore.db.arrays.mapped_memory=0
neostore.propertystore.db.index.keys.mapped_memory=0
neostore.propertystore.db.index.mapped_memory=0
node_auto_indexing=true
node_keys_indexable=type,id
cache_type=strong
use_memory_mapped_buffers=false
node_cache_size=12G
relationship_cache_size=12G
node_cache_array_fraction=10
relationship_cache_array_fraction=10
Please let me know if this is incorrect. The problem I am encountering is that persisting information to the graph database is not very quick compared to our MySQL times for the same thing (e.g. adding 250 items takes about 3 seconds versus 1 second in MySQL). I read online that having multiple indexes can slow down write performance, so I am looking into that now to see if it is the culprit. But I just wanted to make sure that my configuration is in line for running the graph database in memory.
Second question on this topic: if my configuration is good and my database is indeed in memory, is there a way to optimize persisting data, in case this isn't the silver bullet? When we run our test with 10 threads instead of one, execution times climb per thread (e.g. thread 1 finishes in 1s, thread 2 in 2s, thread 3 in 3s, etc.). Is there some multithreading configuration I am missing to improve performance when multiple threads hit the database at once?
Neo4J version
1.9.1-enterprise
My Jvm configs are
-Xms25G -Xmx25G -XX:+UseNUMA -XX:+UseSerialGC
My Machine Specs:
File system type ext3
Your cache arguments are invalid.
node_cache_size=12G
relationship_cache_size=12G
node_cache_array_fraction=10
relationship_cache_array_fraction=10
These can only be used with the GCR cache. Setting the cache isn't going to put everything in memory for you at startup; you will have to write code to do that yourself. Something like this:
// Warm the cache by touching every node, its properties, and its relationships.
// graphDb is the embedded GraphDatabaseService instance.
GlobalGraphOperations ggo = GlobalGraphOperations.at(graphDb);
for (Node n : ggo.getAllNodes()) {
    for (String propertyKey : n.getPropertyKeys()) {
        n.getProperty(propertyKey);
    }
    for (Relationship relationship : n.getRelationships()) {
        // iterating is enough to pull each relationship into the cache
    }
}
Beware of the strong cache: if you have a lot of nodes/relationships, the cache will eventually become large and performing GC against it will cause long pauses in your system.
My recommendation would be to use the memory-mapped files, as these are handled by the OS and live outside the heap. They don't provide nearly the speed of the object cache, but they will provide a speed-up whenever you have to read from the Neo store.

Cache (large and static) data with class variables

First, let me explain the situation, I've got following:
A "Node" Class with following attributes:
node_id (unique)
node_name (unique)
And a "NodeConnection" Class with following attributes:
node_from
node_to
We'll have around 1 to 3 million nodes and something around 3 to 10 million NodeConnections.
After the nodes and connections are imported once, they won't change.
On each request to the Rails application, we'll have to look up around 10 to 100 node_ids by possible node_names, and we have to look up a few hundred to a few thousand node_connections.
We currently prototyped this without any caching (so, a LOT of database queries) and response times were horrible (around 2 minutes).
So we switched over to caching the nodes and connections via memcached.
We got a performance boost, but performance was still lacking. (Because we're calling Cache.read for every NodeConnection, that's a few thousand calls per request.)
Now, we tried caching via class variables and got a huge performance boost. (Response times within a few hundred ms.)
# Pseudocode below
class Node
  def nodes
    @@nodes ||= get_nodes
  end

  def node_connections
    @@node_connections ||= get_node_connections
  end
end
So, I'd like to ask about the pros and cons of this solution.
Cons I've identified so far:
Every Rails instance has to build up its own cache (its own class variables) -> higher total memory usage
Initializing the cache is time-consuming (1-3 minutes), so we can't do it within a request
Are there any other solutions for caching large (>100MB) and static (the data won't change during the application's lifetime) data efficiently, so that all Rails instances on the same machine can access this cache very fast?
It sounds like a very specific situation, but in order to avoid the need for a per-process in-memory cache (i.e. your class variables) to naturally warm up, I'd be investigating the feasibility of scripting the warm-up process and running it from inside an initializer... your app may take longer to start up, but your users would not have to wait.
EDIT | Note that if you were using something like Unicorn, which supports pre-loading application code before forking worker processes, you could minimize the impact of such initialization.
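A sketch of that suggestion, with Node.warm_cache! as a hypothetical class method that fills the @@nodes / @@node_connections class variables from the database:

# config/initializers/warm_node_cache.rb
# Pay the 1-3 minute warm-up once at boot instead of during a request.
# With a preloading server (e.g. Unicorn with preload_app true), the warmed
# data is then shared copy-on-write across the forked workers.
Node.warm_cache!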

rails high memory usage

I am planning on using Delayed Job to run some background analytics. In my initial test I saw a tremendous amount of memory usage, so I created a very simple task that runs every 2 minutes just to observe how much memory is being used.
The task is very simple and the analytics_eligible? method always returns false given where the data is now, so basically none of the heavy-hitting code is being called. I have around 200 Posts in my sample data in development. Post has_one analytics_facet.
Regardless of the internal logic/business here, the only thing this task is doing is calling the analytics_eligible? method 200 times every 2 minutes. In a matter of 4 hours my physical memory usage is at 110MB and virtual memory at 200MB, just for doing something this simple! I can't even begin to imagine how much memory this will eat if it's doing real analytics on 10,000 Posts with real production data! Granted, it may not run every 2 minutes, more like every 30; still, I don't think it will fly.
This is running Ruby 1.9.7 and Rails 2.3.5 on Ubuntu 10.x 64-bit. My laptop has 4GB memory and a dual-core CPU.
Is Rails really this bad or am I doing something wrong?
Delayed::Worker.logger.info('RAM USAGE Job Start: ' + `pmap #{Process.pid} | tail -1`[10,40].strip)
Post.not_expired.each do |p|
  if p.analytics_eligible?
    # this method is never called
    Post.find_for_analytics_update(p.id).update_analytics
  end
end
Delayed::Worker.logger.info('RAM USAGE Job End: ' + `pmap #{Process.pid} | tail -1`[10,40].strip)
Delayed::Job.enqueue PeriodicAnalyticsJob.new(), 0, 2.minutes.from_now
Post Model
def analytics_eligible?
  vf = self.analytics_facet
  if self.total_ratings > 0 && vf.nil?
    return true
  elsif !vf.nil? && vf.last_update_tv > 0
    ratio = self.total_ratings / vf.last_update_tv
    if (ratio - 1) >= Constants::FACET_UPDATE_ELIGIBILITY_DELTA
      return true
    end
  end
  return false
end
ActiveRecord is fairly memory-hungry - be very careful when doing selects, and be mindful that Ruby automatically returns the last statement in a block as the return value, potentially meaning that you're passing back an array of records that get saved as a result somewhere and thus aren't eligible for GC.
Additionally, when you call "Post.not_expired.each", you're loading all your not_expired posts into RAM. A better solution is find_in_batches, which specifically only loads X records into RAM at a time.
Fixing it could be something as simple as:
def do_analytics
  Post.not_expired.find_in_batches(:batch_size => 100) do |batch|
    batch.each do |post|
      if post.analytics_eligible?
        # this method is never called
        Post.find_for_analytics_update(post.id).update_analytics
      end
    end
  end
  GC.start
end

do_analytics
A few things are happening here. First, the whole thing is scoped in a function to prevent variable collisions from holding onto references from the block iterators. Next, find_in_batches retrieves batch_size objects from the DB at a time, and as long as you aren't building references to them, they become eligible for garbage collection after each iteration runs, which will keep total memory usage down. Finally, we call GC.start at the end of the method; this forces the GC to start a sweep (which you wouldn't want to do in a realtime app, but since this is a background job, it's okay if it takes an extra 300ms to run). It also has the very distinct benefit of returning nil, which means the result of the method is nil, which means we can't accidentally hang on to AR instances returned from the finder.
Using something like this should ensure that you don't end up with leaked AR objects, and should vastly improve both performance and memory usage. You'll want to make sure you aren't leaking elsewhere in your app (class variables, globals, and class references are the worst offenders), but I suspect that this'll solve your problem.
All that said, this is a cron problem (periodic recurring work) rather than a DJ problem, in my opinion. You can have a one-shot analytics parser that runs your analytics every X minutes with script/runner, invoked by cron, which very neatly cleans up any potential memory leaks or misuses per run (since the whole process terminates at the end).
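A sketch of that cron-driven variant, reusing the batched do_analytics method above (the file path and schedule are assumptions):

# lib/analytics_run.rb -- invoked by cron via script/runner, e.g.:
#   */30 * * * * cd /path/to/app && script/runner -e production lib/analytics_run.rb
# The process exits when the run finishes, so any leaked memory goes with it.
do_analytics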
Loading data in batches and using the garbage collector aggressively as Chris Heald has suggested is going to give you some really big gains, but another area people often overlook is what frameworks they're loading in.
Loading a default Rails stack will give you ActionController, ActionMailer, ActiveRecord and ActiveResource all together. If you're building a web application you may not be using all of these, but you're probably using most.
When you're building a background job, you can avoid loading things you don't need by creating a custom environment for that:
# config/environments/production_bg.rb
config.frameworks -= [ :action_controller, :active_resource, :action_mailer ]
# (Also include config directives from production.rb that apply)
Each of these frameworks will just be sitting around waiting for an email that will never be sent, or a controller that will never be called. There's simply no point in loading them. Adjust your database.yml file, set your background job to run in the production_bg environment, and you'll have a much cleaner slate to start with.
Another thing you can do is use ActiveRecord directly without loading Rails at all. This might be all that you need for this particular operation. I've also found using a light-weight ORM like Sequel makes your background job very light-weight if you're doing mostly SQL calls to reorganize records or delete old data. If you need access to your models and their methods you will need to use ActiveRecord, though. Sometimes it's worth re-implementing simple logic in pure SQL for reasons of performance and efficiency, though.
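A minimal sketch of the Sequel idea for a pure-SQL maintenance job that never loads Rails (the connection URL and table/column names are assumptions):

require "sequel"

DB = Sequel.connect(ENV.fetch("DATABASE_URL"))
# Purge expired rows directly in SQL; no ActiveRecord objects are instantiated.
DB[:posts].where { expires_at < Time.now }.delete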
When measuring memory usage, the only number to be concerned with is "real" memory. The virtual amount contains shared libraries and the cost of these is spread amongst every process using them even though it is counted in full for each one.
In the end, if running something important takes 100MB of memory but you can get it down to 10MB with three weeks of work, I don't see why you'd bother. 90MB of memory costs at most about $60/year on a managed provider which is usually far less expensive than your time.
Ruby on Rails embraces the philosophy of being more concerned with your productivity and your time than about memory usage. If you want to trim it back, put it on a diet, you can do it but it will take a bit of effort.
If you are experiencing memory issues, one solution is to use another background processing technology, like Resque. It is the background processing library used by GitHub.
Thanks to Resque's parent / child architecture, jobs that use too much memory release that memory upon completion. No unwanted growth.
How?
On certain platforms, when a Resque worker reserves a job it immediately forks a child process. The child processes the job then exits. When the child has exited successfully, the worker reserves another job and repeats the process.
You can find more technical details in README.
It is a fact that Ruby consumes (and leaks) memory. I don't know if you can do much about it, but at least I recommend that you take a look at Ruby Enterprise Edition.
REE is an open source port which promises "33% less memory" among all the other good things. I have used REE with Passenger in production for almost two years now and I'm very pleased.
