Heroku Rails memory exceeded - ruby-on-rails

I've developed a web site using Ruby on Rails.
But I have serious problems.
Heroku memory usage increases continuously.
Heroku response time is sometimes far too long.
I've tried a lot of things to solve this.
When I visit the user list page without any ActiveRecord query executing, memory does not increase.
When I visit the user list page 20 times normally, memory does increase.
So I added the tunemygc gem for garbage collection and tested again, but it had no effect.
So I think the reasons for the memory issue are either:
Rails itself has such an issue, or
one of ActiveRecord's dependencies is broken or misbehaving.
Does anyone know a way to solve this problem?
I want to test the app by simulating requests the way a real user does (see the sketch below). Any idea?
I want to solve the memory issues once and for all. Any ideas?
Can this be solved by setting the Heroku server configuration properly?
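For the request-simulation question, here is a minimal sketch that just replays GET requests against the user list page so you can watch the memory graph in the Heroku metrics tab. The host and path are placeholders, and this is no substitute for a proper load-testing tool.

# simulate_requests.rb - hit the user list page repeatedly and watch dyno memory
# Assumes the page lives at /users on your app; both values below are placeholders.
require "net/http"
require "uri"

uri = URI("https://your-app.herokuapp.com/users")

100.times do |i|
  response = Net::HTTP.get_response(uri)
  puts "request #{i + 1}: #{response.code}"
  sleep 0.5 # rough pacing so the run resembles a browsing user
end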

Related

How to debug an ajax request raising "Error R15 (Memory quota vastly exceeded)"

I have a Rails app on Heroku that is crashing with Error R15 (Memory quota vastly exceeded).
I've tracked this issue to pages that contain several asynchronous requests. The errors appear to coincide with the Ajax requests that build remote DataTables.
The problem is, I can't figure out why these errors are being raised.
I thought perhaps the database queries and controller actions behind the Ajaxified DataTables might be running slowly. But when I examine them in development using miniprofiler, the requests appear very efficient.
Then I thought perhaps the server is receiving multiple simultaneous requests, and this is overloading the Heroku dyno. But I ramped the dynos up to a very high number and still see the error.
What would be a sensible way to start identifying and debugging what is causing this memory error? I've not had to solve an issue like this before.
Memory is allocated per dyno on Heroku, so adding more dynos will probably not solve the problem if it is code-level: each dyno will exceed its memory limit individually, costing you a lot of money without actually fixing anything.
You're better off scaling vertically by using Performance-L dynos, which gives each dyno up to 14 GB of memory. You can then use the metrics to see how much memory is actually being used. If the app still manages to use up all 14 GB, you may have a memory leak in one of your dependencies.
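For reference, both the resize and the metrics can be driven from the Heroku CLI (the dyno type and app name below are placeholders; log-runtime-metrics is a labs feature that prints per-dyno memory figures into the log stream):

heroku ps:type web=performance-l --app your-app
heroku labs:enable log-runtime-metrics --app your-app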

Heroku memory issues using puma

I have checked my logs, and ever since I started using Puma as my web server on Heroku (I switched from Unicorn, which didn't have this issue) I appear to have a memory leak.
The server itself is idle and the logs show no requests, yet memory utilization on the web dynos keeps rising until it hits the limit and goes over quota. Any ideas or suggestions on how to look into this?
I cannot provide an answer, but I am researching the same issue. So far, the two following links have proved the most educational to me:
https://github.com/puma/puma/issues/342. A possible workaround (though supposedly not vetted for Heroku production) is to use the puma-worker-killer gem: https://github.com/schneems/puma_worker_killer. Hope this helps.
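If you try the puma-worker-killer route, a minimal sketch following the configuration names in the gem's README (the numbers are placeholders to tune for your dyno size) looks roughly like this:

# Gemfile
gem "puma_worker_killer"

# config/puma.rb (excerpt)
before_fork do
  require "puma_worker_killer"

  PumaWorkerKiller.config do |config|
    config.ram           = 512   # dyno memory limit in MB
    config.frequency     = 5     # seconds between memory checks
    config.percent_usage = 0.98  # cull a worker when total usage crosses this fraction of ram
  end
  PumaWorkerKiller.start
end

Keep in mind this restarts workers before they blow the quota; it papers over the leak rather than fixing it.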
In the end I had to move to a dyno type with more RAM (Performance Large) to accommodate the memory caching that Ruby/Rails was doing. I couldn't find a way to stop it from peaking at around 2.5 GB, but it did indeed level off after that.
I was running into this too. In the fall of 2019 Heroku added a config var to new apps, but it has to be added manually to apps created before then:
MALLOC_ARENA_MAX=2
They have a write up about it here:
https://devcenter.heroku.com/changelog-items/1683
You can also try using jemalloc: https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html
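On existing apps the variable can be set from the CLI (the app name is a placeholder):

heroku config:set MALLOC_ARENA_MAX=2 --app your-app

The jemalloc route usually means putting a jemalloc buildpack in front of the Ruby buildpack; the linked article walks through the setup.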

Rails Server Memory Leak/Bloating Issue

We are running 2 Rails applications on a server with 4 GB of RAM. Both use Rails 3.2.1, and whether run in development or production mode they eat RAM at incredible speed, consuming up to 1.07 GB each per day. Keeping the server running for just 4 days triggered all the memory alarms in monitoring, and we had just 98 MB of RAM free.
We tried ActiveRecord optimizations related to bloat, but still no effect. Please help us figure out how to trace which controller is at fault.
We are using a MySQL database and the WEBrick server.
Thanks!
This is incredibly hard to answer without looking into the project itself. Though I am quite sure you won't be using WEBrick in your target production build (right?), so check whether it behaves the same under Passenger or whatever your choice is.
Also, without knowing the details of the project, I would suggest looking at features like generating PDFs, CSV parsing, etc. I've seen a case where generating PDF files ate resources in a similar fashion, leaving about 5 MB of un-garbage-collected memory on each run.
Good luck.
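On tracing which controller is at fault: one rough approach, sketched here for Rails 3.2 on a Unix host (it shells out to ps, and the filter below is purely illustrative), is to log the process's resident memory around every action and watch which actions never give memory back.

# app/controllers/application_controller.rb
# Rails 3.2 uses around_filter; on Rails 4+ this would be around_action.
class ApplicationController < ActionController::Base
  around_filter :log_memory_delta

  private

  def log_memory_delta
    before = current_rss_kb
    yield
    after = current_rss_kb
    Rails.logger.info "[memory] #{controller_name}##{action_name}: +#{after - before} KB (total #{after} KB)"
  end

  def current_rss_kb
    # Resident set size of this process in kilobytes, read via the ps utility.
    `ps -o rss= -p #{Process.pid}`.to_i
  end
end

Run a handful of typical requests, then sort the log by the delta column; actions that keep a large positive delta on every call are the first suspects.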

Rails Memory Error R14 on Heroku (blank screen with reload??)

Using: Rails 3.0.3. Web host: Heroku.com. 2 dynos & 0 workers.
I am a bit of a beginner with Rails and have just released my first project. The users are experiencing intermittent problems that, according to them, amount to "I get a blank screen with a message that the page needs to reload". Unfortunately I cannot get it explained any better than that (it's a one-way communication channel from the users).
I also get this error in the logs:
2011-11-09T19:00:12+00:00 heroku[web.1]: Process running mem=598M(116.8%)
2011-11-09T19:00:12+00:00 heroku[web.1]: Error R14 (Memory quota exceeded)
which seems pretty straightforward.
I have about 4 000 visitors a day and about 10 000 page views.
Edit: I also have New Relic and Exception Notifier installed. I get a lot of "Execution expired" errors.
What I would like to know now is:
How can I find these intermittent errors (I have no timestamps)? What string should I search for in the logs?
Do memory problems cause the web browser to crash and reload (or something similar)? Or is that related to Java problems?
Most importantly: how can I test my application to see where it is most memory-intensive? I know I have not written it with perfect code, so I need to find the bad parts.
Once again, this is my first project, so the solutions might be easy, but please help me out.
Are you using ImageMagick (specifically RMagick)? People have reported issues with its memory management in the past: https://groups.google.com/group/dragonfly-users/browse_thread/thread/67f88d9a2e085b7a?pli=1&auth=DQAAAIUAAABUdJ8RK3XRKIAvXno2rkOsd8OzwcKqNX3T21NjURsvINiRoHH-S_786Si2mphcOdRDmfGrjir6hBMLwj4xv6LE89Dd62ng2xmCArP3lcZZbw7-wXCBNS5BiaSeDVy-z46gHUHiVC21vEMWOBKMYMn7kMnJZhWXr1EcfZqb1KQNaGhwal2KLCmYxThW99pWLtE
Install the New Relic Standard add-on; that will give you insight into your application and what's going on. The 'Dynos' tab will show you the memory utilisation of your application. That sounds like awfully high memory utilisation for the level of traffic you're reporting, but it depends on your application; if you're seeing memory errors in the log then performance will be suffering, see http://devcenter.heroku.com/articles/error-codes#r14__memory_quota_exceeded
Are you using any kind of error handling? You could install the Airbrake add-on so you get notifications of errors, or use the Exception Notifier gem, which will email you errors as they occur. Once you have these in place you'll know whether it's in the application; if you don't receive any notifications, then it's an outside factor, like the visitor's internet connection, etc.
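On the log-searching question specifically: the memory errors Heroku writes to the log are literally "Error R14 (Memory quota exceeded)" and "Error R15 (Memory quota vastly exceeded)", so, assuming the Heroku CLI is installed (the app name below is a placeholder), something like this will surface them as they happen:

heroku logs --tail --app your-app | grep "Error R1"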

Rails 3 memory issue

I'm developing a new site based on the Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just a beta, but I still thought it might work.
Now, though, I'm having HUGE problems with Rails consuming huge amounts of memory.
My application today consumes about 10 MB per request and doesn't seem to release it either. So I thought this might be bloat in my application, and I created a test app just to compare.
For the test app I just generated a model with a scaffold and then created about 20 records of that model.
I then went to the index page and hit refresh, and I could immediately see memory taking off! Less than in my app, but still about 1-3 MB per request.
I'm working on OS X Leopard, with Ruby 1.8.7, Rails 3.0.0.beta and an SQLite db for development.
Does anyone recognize my problem?
I would really appreciate some help here. :/
Thanks!
Well, you should consider how a production Rails app would be served. For example, class caching is typically enabled only in the production environment, and you should also compare performance with your app running under Passenger (Apache or Nginx).
I do believe there is an easy way to make Passenger play nicely in dev mode as well.
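If the caching in question is class caching, the relevant Rails 3 defaults look roughly like this (a sketch of the generated environment files, not the poster's actual configuration); the development value is part of why every refresh can grow the process:

# config/environments/development.rb (Rails 3 default)
config.cache_classes = false  # reload application classes on every request

# config/environments/production.rb (Rails 3 default)
config.cache_classes = true   # load application classes once at boot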
There were some memory leakage issues in the Rails 3 betas. Is there a reason you're not on 3.0.6?
Edit: D'oh, just saw the date this was asked.
