Rails Active Record consuming too much memory

I have been using Rails 4.2.8 and Ruby 2.6.6.
My application has a huge database (PostgreSQL).
In the report section, when I try to generate reports (very complex queries, many models involved), I see that the total allocated memory is huge.
I am using a background worker to generate the report. Here is the relevant part of the memory_profiler gem's report:
Total allocated: 335531877 bytes (4301278 objects)
Total retained: 2100410 bytes (21442 objects)
allocated memory by gem
-----------------------------------
263364691 activerecord-4.2.8
As you can see, the total allocated memory is about 335 MB, of which Active Record accounts for 263 MB. That is from my local machine; on the production server the consumed memory is almost 700 MB.
Is this a normal scenario? Sometimes I get Error R14 (Memory quota exceeded) on the production server.
Appreciate your time and help!
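
For anyone landing here with the same symptom: allocations at this scale usually come from materializing millions of Active Record objects in one pass. A common mitigation, sketched below under assumed names (Order, customer_id, and amount are illustrative, not from the question), is to stream rows in batches or to skip model instantiation entirely:

    # A minimal sketch, not the asker's code. find_each walks the table
    # in batches of 1,000 instead of loading every row at once, so
    # earlier batches can be garbage-collected while the report builds.
    range = 1.month.ago..Time.current

    Order.where(created_at: range).find_each(batch_size: 1_000) do |order|
      # accumulate report figures one record at a time
    end

    # For plain aggregations, avoid Active Record objects entirely:
    totals = Order.where(created_at: range).pluck(:customer_id, :amount)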

Related

Verify that object is shared in memory between Ruby Unicorn processes?

I do FOO = MyBigImmutableObject.new("my.db") in a Ruby on Rails initializer, building a ~60 MB object.
My assumption is that this happens before the Unicorn app server forks, and that this means it only uses 60 MB once, not once per worker process.
How can I verify this, though? I've tried looking at the memory usage for Unicorn using ps, but that shows a 60 MB increase for every worker process when I introduce this constant. So I'm thinking maybe it doesn't distinguish shared memory in a way that helps me with this.
Checking total used/free memory on the system seems tricky since I would have to isolate everything else.
What are some good ways for me to verify that it's shared in memory?
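
One Linux-only approach (a sketch, not from the original question): compare each worker's Rss with its Pss from /proc/<pid>/smaps. Pss divides shared pages among all processes mapping them, so if the 60 MB object really is shared copy-on-write, each worker's Pss will sit well below its Rss:

    # Sums Rss and Pss (in kB) from /proc/<pid>/smaps for each PID passed
    # on the command line. Rss counts every resident page per process;
    # Pss splits shared pages among their sharers, so Rss - Pss roughly
    # measures how much of a worker's memory is shared with its siblings.
    def memory_stats(pid)
      stats = Hash.new(0)
      File.foreach("/proc/#{pid}/smaps") do |line|
        stats[:rss] += $1.to_i if line =~ /\ARss:\s+(\d+) kB/
        stats[:pss] += $1.to_i if line =~ /\APss:\s+(\d+) kB/
      end
      stats
    end

    ARGV.each do |pid|
      s = memory_stats(pid)
      puts "PID #{pid}: Rss #{s[:rss]} kB, Pss #{s[:pss]} kB, shared ~#{s[:rss] - s[:pss]} kB"
    end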

Ruby requests more memory when there are plenty of free heap slots

We have a server running:
Sidekiq 4.2.9
Rails 4.2.8
MRI 2.1.9
This server periodically imports data from external APIs, performs some calculations on it, and saves the results to the database.
About three weeks ago the server started hanging. As I can see from New Relic (and when SSH'd into it), it consumes more and more memory over time, eventually occupying all available RAM, at which point the server hangs.
I've read some articles about how Ruby's GC works, but I still can't understand why at ~5:30 AM the heap size jumps from ~2.3M to ~3M slots when there are still 1M free heap slots available (GC settings are default).
[Graph: similar behavior at 3:35 PM.]
So, the questions are:
How do I make Ruby fill free heap slots instead of requesting new slots from the OS?
How do I make it release free heap slots back to the system?
Regarding "how do I make Ruby fill free heap slots instead of requesting new slots from the OS?":
Your graph does not have "full" fidelity. It is a lot to assume that GC.stat was sampled by New Relic (or whatever produced the graph) at exactly the right moment.
It is incredibly likely that you ran out of slots, the heap grew, and, since heaps don't shrink in Ruby, you are stuck with a somewhat bloated heap.
To alleviate some of the pain you can limit RUBY_GC_HEAP_GROWTH_MAX_SLOTS to a sane number; something like 100,000 will do. (I am trying to lobby for such a default in Ruby core.)
Also:
Create a persistent log of the jobs that run and when they ran (duration and so on), and gather GC.stat before and after each job runs (see the middleware sketch after this list).
Split up your jobs by queue: run one queue on one server and the other on a second server, and see which queue and which job is responsible for the problem.
Profile your various jobs using flamegraph or other profiling tools.
Reduce the number of concurrent jobs you run as an experiment, or place a mutex between certain job types. It is possible that one "job A" at a time is OK-ish, while 20 concurrent "job A"s bloat memory.
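
A minimal sketch of the first suggestion, written as Sidekiq server middleware (the class name and log format are illustrative, not from the answer):

    # Logs duration plus GC.stat before and after every job, so heap
    # growth can be correlated with specific jobs and queues.
    class GcStatLogger
      def call(worker, job, queue)
        before = GC.stat
        started_at = Time.now
        yield
      ensure
        Sidekiq.logger.info(
          "job=#{job['class']} queue=#{queue} " \
          "duration=#{(Time.now - started_at).round(2)}s " \
          "gc_before=#{before.inspect} gc_after=#{GC.stat.inspect}"
        )
      end
    end

    Sidekiq.configure_server do |config|
      config.server_middleware do |chain|
        chain.add GcStatLogger
      end
    end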

Memory leak in Ruby on Rails app as garbage collector activity spikes

Framework: Rails 5.0.0.1
Platform: Heroku
Server: Puma, 30 worker processes with 10 threads each
We're seeing an increase in memory once an hour, coinciding with Ruby garbage collector runs, as can be seen in the screenshot linked below. The number of requests per time unit was almost constant throughout the memory increase (~1,300 rpm).
Memory seems stable apart from the garbage collector runs, usually fluctuating by a few megabytes in either direction around a fairly stable average. Debugging the app locally with profiling tools such as memory_profiler, and dumping the heap using ObjectSpace allocation tracing, didn't conclusively identify any memory leaks.
Question:
How can I find out whether this has something to do with the garbage collector not working properly?
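
One way to push the ObjectSpace route further (a minimal sketch; the dump path is illustrative): take heap dumps from the running process at intervals and diff them to see which allocations survive GC. Tools such as the heapy gem can analyze the resulting dumps.

    # Requires Ruby's objspace extension. Start allocation tracing early
    # (e.g. in an initializer), then dump the heap after the app has
    # served traffic for a while; repeat later and diff the dumps.
    require "objspace"

    ObjectSpace.trace_object_allocations_start

    # ... exercise the app, then:
    File.open("/tmp/heap-#{Process.pid}-#{Time.now.to_i}.json", "w") do |io|
      ObjectSpace.dump_all(output: io)
    end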

Rails application servers

I've been reading about how different Rails application servers work, and a few things have me confused, probably because of my lack of knowledge in this field:
Puma's readme has the following line about the number of workers in clustered mode:
On a ruby implementation that offers native threads, you should tune this number to match the number of cores available
So if I have, let's say, 2 cores and use Rubinius as the Ruby implementation, should I still use more than one process, given that Rubinius uses native threads and has no global interpreter lock, and therefore uses all the CPU cores even with a single process?
My understanding is that I would only need to increase the thread pool of that single process if I upgraded to a machine with more cores and memory; if that's not correct, please explain it to me.
I've read some articles on using Server-Sent Events with Puma which, as far as I understand, block a Puma thread since the browser keeps the connection open. So if I have 16 threads and 16 people are using my site, the 17th would have to wait until one of those 16 disconnects before they can connect? That's not very efficient, is it? Or what am I missing?
If I have a 1-core machine with 3 GB of RAM, just for the sake of the question, and use Unicorn as my application server, and one worker takes 300 MB of memory with insignificant CPU usage, how many workers should I have? Some say the number of workers should equal the number of cores, but if I set the worker count to, let's say, 7 (since I have enough RAM for it), it will be able to handle 7 concurrent requests, won't it? So is it just a question of memory and CPU usage and the amount of RAM? Or what am I missing?
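
For reference, the knobs in question live in Puma's config file. A minimal sketch with illustrative numbers (the env var names are a common convention, not mandated by Puma):

    # config/puma.rb
    # On MRI, workers (processes) provide CPU parallelism; threads help
    # with I/O-bound concurrency. On a GVL-free runtime such as Rubinius
    # or JRuby, one process with a larger thread pool can use all cores.
    workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 16))
    threads max_threads, max_threads

    # Load the app before forking so workers share memory copy-on-write.
    preload_app!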

Rails: Server Monitoring - Ruby Running 17 processes?

I am monitoring my server on New Relic and the memory consumption of my app is rather high, about 1 GB. Currently I am the only visitor to the site. When I drill down, I see that most of the consumption is caused by Ruby. It says 17 instances are running. What does this mean, and how can I lower it?
Unicorn is configured to run a certain number of instances by default. You can explicitly set this number in config/unicorn.rb using worker_processes 4 (to run 4 instances). Each Unicorn instance loads the entire stack for your application and keeps it in memory. A mid-sized Rails application tends to weigh in at ~100 MB and up, and it should stay at that level provided there aren't any memory leaks. Memory consumption is generally driven by the number of dependencies and the complexity of the application.
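
A minimal config/unicorn.rb sketch of that advice (the worker count is illustrative; the after_fork hook assumes Active Record is in use):

    # config/unicorn.rb
    worker_processes 4   # each worker holds a full copy of the app, so fewer workers = less memory
    preload_app true     # load the app once in the master so workers share pages copy-on-write

    # With preload_app, per-worker resources must be re-established after forking.
    after_fork do |server, worker|
      ActiveRecord::Base.establish_connection
    end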
