Memory Leaks in my Ruby on Rails Application

My application calls a bunch of APIs that return lots of data, which is manipulated inside my controller to produce various insights (passed on to my view).
The problem is that my application has a memory leak, and I currently need to restart it after a certain number of requests.
I have also been caching all of my API calls to improve performance. Most of the data returned by the APIs is stored in the form of hashes, and this data is then manipulated (more or less duplicated using group_by).
I'm using Ruby 1.9 and Rails 3.2. How can I remove this memory leak from my application?

You should first confirm that you indeed have a memory leak and not just memory bloat.
You can read about Ruby's GC here.
GC.stat[:heap_live_slot] represents the objects that were not cleared by the last GC run (the exact key name varies across Ruby versions). If this number steadily increases request after request, you can be fairly sure you have a memory leak.
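If you want to watch that number over time, a minimal sketch is a Rack middleware that logs it after every request. This is only an illustration (register it yourself in config/application.rb), and the GC.stat key name differs by Ruby version, so the lookup below tries the common variants:

```ruby
# lib/gc_stats_logger.rb -- a minimal sketch; the GC.stat key is
# :heap_live_num on 1.9, :heap_live_slot on 2.1, :heap_live_slots on 2.2+.
class GcStatsLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    live = GC.stat[:heap_live_slots] || GC.stat[:heap_live_slot] || GC.stat[:heap_live_num]
    Rails.logger.info "GC live objects after #{env['PATH_INFO']}: #{live}"
    [status, headers, body]
  end
end

# Register it in config/application.rb:
#   config.middleware.use GcStatsLogger
```

If the logged value climbs steadily under a repeated, identical request, that points to a leak rather than ordinary bloat.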

You can also check this list of Ruby gems with known memory leaks:
https://github.com/ASoftCo/leaky-gems

You can use the bundler-leak gem to find memory leaks in your gem dependencies:
https://github.com/rubymem/bundler-leak
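A sketch of wiring it in; the exact commands may differ between versions, so check the project README for the current invocation:

```ruby
# Gemfile -- a sketch; see the bundler-leak README for current usage.
group :development do
  gem 'bundler-leak', require: false
end

# Then, from the shell:
#   bundle install
#   bundle leak update   # refresh the database of known-leaky gem versions
#   bundle leak check    # report dependencies with known memory leaks
```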

Related

How do I figure out why a large chunk of memory is not being garbage collected in Rails?

I'm pretty new to Ruby, Rails, and everything else in that ecosystem. I've joined a team that has a Ruby 3.1.2 / Rails 6.1.7 app backed by a Postgres database.
We have a scenario where, sometimes, memory usage on one of our running instances jumps up significantly and is never relinquished (we've waited days). Until today, we didn't know what was causing it or how to reproduce it.
It turns out that it's caused by an internal tool which was running an unbounded ActiveRecord query -- no limit and no paging. When pointing this tool at a more active customer, it takes many seconds, returns thousands of records, and memory usage increases by tens of MB. No amount of waiting will lead to the memory usage going back down again.
Having discovered that, we recently added paging to that particular tool, and in the ~week since, we have not seen usage increasing in giant chunks anymore. However, there are other scenarios which have similar behavior but with smaller payloads; these cause memory usage to increase gradually over time. We deploy this application often enough that it hasn't been a big deal, but I am looking to gain a better understanding of what's happening and to determine if there's a problem here, because that's not what we should see from a stable application that's free of memory leaks.
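For reference, the kind of change that "adding paging" amounts to can also be expressed with ActiveRecord's built-in batching. A rough sketch (the model and helper names are placeholders, not our actual code):

```ruby
# Before: an unbounded query materializes every matching record at once.
events = Customer.find(customer_id).events.to_a
events.each { |event| summarize(event) }

# After: find_each walks the table in batches (1,000 rows by default),
# so only one batch's worth of ActiveRecord objects is live at a time.
Customer.find(customer_id).events.find_each(batch_size: 1_000) do |event|
  summarize(event)
end
```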
My first suspicion was a memoized instance variable on the controller, but a quick check indicates that Rails controllers are discarded as soon as the request finishes processing, so I don't think that's it.
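That matches how Rails behaves: a fresh controller instance is created per request, so plain @ivar memoization dies with the request, whereas anything memoized at the class level lives for the whole process. A small illustration (the class and method names are made up):

```ruby
class ReportsController < ApplicationController
  # Per-request: this instance is discarded after the response is sent,
  # so the memoized value becomes garbage-collectible.
  def customer_summary
    @customer_summary ||= build_summary
  end

  # Process-wide: a class-level ivar survives across requests and keeps
  # growing if it is keyed by per-request data -- a classic slow leak.
  def self.summary_cache
    @summary_cache ||= {}
  end
end
```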
My next suspicion was that ActiveRecord was caching my resultset, but I've done a bunch of research on how this works and my understanding is that any cached queries/relations should be released when the request completes. Even if I have that wrong, a subsequent identical request takes just as long and causes another jump in memory usage, so either that's not it, or caching is broken on our system.
My Google searches turn up lots of results about various caching capabilities in Rails 2, 3, and 5 -- but not much about 6.x, so maybe something significant has changed and I just haven't found it.
I did find ruby-prof, memory-profiler, and get_process_mem -- these all seem like they are only suitable for high-level analysis and wouldn't help me here.
Can I explore the contents of the object graph currently in memory on an existing, live instance of my app? I'm imagining that this would happen in the Rails console, but that's not a constraint on the question. If not, is there some other way that I could find out what is currently in memory, and whether it's just a bunch of fragmented pages or if there's actually something that isn't getting garbage collected?
EDIT
#engineersmnky pointed out in the comments that maybe everything is fine and that perhaps Ruby is just still holding on to the OS page due to some other still-valid object therein. However, if this is the case, it strikes me as unlikely that memory usage would not go back down to the previous baseline after several days of production usage.
Loading tens of MB worth of resultset into memory should result in the allocation of >1,000 16 KB memory pages in just a handful of seconds. It seems reasonable to assume that the vast majority of those would contain exclusively this resultset, and could therefore be released as soon as the resultset is garbage collected.
Furthermore, I can reproduce the increased memory usage by running the same unbounded ActiveRecord query in the Rails console, and when I close that console, the memory goes down almost immediately -- exactly what I was expecting to see when the web request completes. I don't fully understand how the Rails console works when connecting to a running application, though, so this may not be relevant.
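For anyone attempting the same, a rough sketch of the kind of heap inspection I'm after, using MRI's bundled objspace extension. It has to run inside the affected process; a separately started Rails console is its own process with its own heap, which is presumably why closing it releases memory immediately:

```ruby
require 'objspace'

# Per-class census of everything currently on the heap.
counts = Hash.new(0)
ObjectSpace.each_object(Object) { |obj| counts[obj.class] += 1 }
counts.sort_by { |_, n| -n }.first(20).each { |klass, n| puts "#{klass}: #{n}" }

# Full heap dump for offline analysis (e.g. with the heapy gem).
File.open('/tmp/heap.json', 'w') { |f| ObjectSpace.dump_all(output: f) }
```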

DelayedJob doesn't release memory

I'm using Puma server and DelayedJob.
It seems that the memory taken by each job isn't released, and I slowly build up bloat that forces me to restart my dyno (Heroku).
Any reason why the dyno won't return to the same memory usage figure as before the job was performed?
Any way to force it to release the memory? I tried calling GC but it doesn't seem to help.
You could have any one of the following problems, or indeed all of them:
Number 1. This is not an actual problem, but a misconception about how Ruby releases memory to the operating system. Short answer: it doesn't. Long answer: Ruby manages an internal list of free objects. Whenever your program needs to allocate new objects, it takes them from this free list. If there are none left, Ruby allocates new memory from the operating system. When objects are garbage collected, they go back onto the free list, so Ruby keeps hold of the memory it has already allocated. To illustrate: if your program normally uses 100 MB but at some point allocates 1 GB, it will hold onto that memory until you restart it.
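A small sketch you can run in irb to see this behaviour (the GC.stat key names below are the modern ones and vary by Ruby version):

```ruby
def heap_snapshot(label)
  s = GC.stat
  puts format('%-10s pages=%-6d live=%-9d free=%-9d',
              label, s[:heap_allocated_pages], s[:heap_live_slots], s[:heap_free_slots])
end

heap_snapshot('baseline')

big = Array.new(2_000_000) { |i| "temporary string #{i}" }  # large, short-lived working set
heap_snapshot('allocated')

big = nil
GC.start
# The slots are free again, but they sit on Ruby's free list rather than being
# handed back to the OS, so the process RSS typically stays near its high-water mark.
heap_snapshot('after GC')
```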
There are some good resources to learn more about this here and here.
What you should do is increase your dyno size and monitor your memory usage over time. It should stabilize at some level; that will show you your normal memory usage.
Number 2. You may have an actual memory leak, either in your code or in some gem. Check out this repository; it contains information about well-known memory leaks and other memory issues in popular gems. delayed_job is actually listed there.
Number 3. You may have unoptimized code that uses more memory than necessary, and you should investigate its memory usage and try to reduce it. If you are processing large files, for example, consider doing so in smaller batches, as sketched below.
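As a rough illustration of the "smaller batches" point for a job handler (the file name and the import_row helper are placeholders):

```ruby
# Before: reads the whole file into memory as one String plus an Array of rows.
File.read('huge_export.csv').lines.each { |row| import_row(row) }

# After: streams one line at a time, so only a single row is live at once.
File.foreach('huge_export.csv') { |row| import_row(row) }
```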

Profile Memory Between Requests in Rails to Find Leaks

I've got a Rails 4.2.1 app that has a memory leak. I'm hosting on Heroku, and in production my memory continues to grow until the server starts paging. I'm trying to track down the leak, and I'd like to know whether there is a way to inspect the memory allocations still live after a request/response cycle. If I can get that, I can curl my pages a few times to warm any globals, then run siege and see what memory is leaking. Any way to do this?
The rack-mini-profiler gem allows you to get a count of objects in memory by class (and of objects allocated by the current request). It also dumps some of the most frequent objects, such as strings. I've found it very helpful for diagnosing memory leaks.
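A sketch of wiring it up; the available pp= options vary by version, so check ?pp=help for what your install actually supports (memory_profiler is assumed to be required for the allocation report):

```ruby
# Gemfile -- a sketch; consult the rack-mini-profiler README for your version.
gem 'rack-mini-profiler'
gem 'memory_profiler'

# Then request the page you are curling/siegeing with a pp parameter, e.g.:
#   GET /products?pp=help             # list the options your version supports
#   GET /products?pp=profile-memory   # per-request allocation report
#   GET /products?pp=analyze-memory   # live object counts by class
```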

Rails garbage collect is not removing old objects

Some odd behavior I encountered while optimizing a rails view:
After tweaking the number of garbage collection calls in a request, I didn't see any real improvement in performance. Investigating the problem, I found out that garbage collection didn't actually remove that many dead objects!
I've got a very heavy view with LOADS of objects.
Using scrap, I found that after a fresh server start and a single page load the number of objects was about 670,000; after reloading the page 3 times it had risen to 19,000,000!
RAILS_GC_MALLOC_LIMIT is set to 16,000,000 and I can see that GC has been called 1,400 times.
Why does the memory keep increasing on a refresh of the page? And is there a way to make sure the old objects are removed by GC?
PS: Running on REE 1.8.7-2011.03 with Rails 3.2.1
I highly recommend using New Relic for optimization; I got far more of a performance boost there than from a little GC tweaking.
You don't need to GC objects you never create :)

How to get memory usage in a Rails app?

For example, I have an Update action in my Product controller.
I want to measure how much memory is consumed when the Update action is invoked.
Thanks.
New Relic's RPM will do what you need -
http://www.bestechvideos.com/2009/03/21/railslab-scaling-rails-episode-4-new-relic-rpm
Also take a look at some of the answers here: ruby/ruby on rails memory leak detection
Ruby Memory Validator monitors memory allocation for Ruby applications.
If you want to modify your Ruby runtime to add a memory tracking API to it, take a look at this.
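As a rough sketch using the get_process_mem gem (mentioned elsewhere on this page): it reads the whole process's RSS, so concurrent requests and GC timing add noise, but it gives a first-order answer. The controller and action names here just mirror the question; use around_filter instead of around_action on older Rails:

```ruby
require 'get_process_mem'  # gem 'get_process_mem'

class ProductsController < ApplicationController
  around_action :log_memory_delta, only: :update

  def update
    # ... existing update logic ...
  end

  private

  # Logs process RSS before and after the action. This is whole-process
  # memory, so treat the delta as a rough signal, not an exact per-action cost.
  def log_memory_delta
    before_mb = GetProcessMem.new.mb
    yield
    after_mb = GetProcessMem.new.mb
    Rails.logger.info "Products#update RSS: #{before_mb.round(1)} MB -> #{after_mb.round(1)} MB"
  end
end
```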
