Heroku memory issues using Puma

I have checked my logs, and ever since I started using Puma as my web server on Heroku (I switched from Unicorn, which didn't have this issue), I have what appears to be a memory leak.
The server itself is idle and the logs show no requests, yet memory utilization on my web dynos keeps rising until it hits the limit and then goes over quota. Any ideas or suggestions on how to investigate this?

I cannot provide an answer, but I am researching the same issue. So far, the following two links have proved the most educational to me:
https://github.com/puma/puma/issues/342. A possible workaround (though supposedly not vetted for Heroku production) is to use the puma_worker_killer gem: https://github.com/schneems/puma_worker_killer. Hope this helps.
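If you try the gem, it is configured from Puma's own config file. Here is a minimal sketch; the ram, frequency, and percent_usage numbers are assumptions for a small dyno that you would tune yourself, not recommendations from the gem:

# config/puma.rb
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

before_fork do
  require "puma_worker_killer"

  PumaWorkerKiller.config do |config|
    config.ram           = 512   # MB available to the dyno (assumes a 1X dyno)
    config.frequency     = 20    # seconds between memory checks
    config.percent_usage = 0.98  # restart the largest worker at 98% of that RAM
  end
  PumaWorkerKiller.start
end

Note that this trades occasional latency spikes (workers being killed and re-forked) for staying under the memory quota; it treats the symptom, not the leak.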

In the end I had to go to a dyno type (Performance Large) with more RAM to accommodate the memory caching that Ruby/Rails was doing. I couldn't find a way to stop it from peaking around 2.5GB, but it did indeed level off after that.

I was running into this as well. In the fall of 2019 Heroku started setting a config var on new apps, but it has to be added manually to apps created before then:
MALLOC_ARENA_MAX=2
They have a write up about it here:
https://devcenter.heroku.com/changelog-items/1683
You can also try using jemalloc: https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html

Related

Heroku Rails Memory exceed

I've developed a web site using Ruby on Rails, but I have a serious problem:
Heroku memory usage increases continuously.
Heroku response time is sometimes far too long.
I've tried a lot to solve this.
When I visit the user list page without executing the ActiveRecord query, memory does not increase. When I visit the user list page 20 times normally, memory does increase.
So I added the tunemygc gem for garbage-collection tuning and tested it, but it had no impact.
So I think the possible reasons for the memory issue are:
Rails itself has such an issue.
ActiveRecord's dependencies are not behaving well, or there is a bad dependency.
Does anyone know a way to solve this problem?
I want to test the app by simulating requests the way a real user does. Any ideas?
I want to solve the memory issue for good. Any ideas?
Can this be solved by setting the Heroku server configuration properly?
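One thing worth ruling out before blaming Rails or Heroku: if the user list action loads the whole table on every request, memory will climb exactly as described. A sketch of the usual fix, assuming a plain index action over a User model (the names here are illustrative, not taken from your app):

# app/controllers/users_controller.rb
class UsersController < ApplicationController
  # Problematic pattern: materializes every user row as a Ruby object
  # on each request, so the heap grows and rarely shrinks back.
  # def index
  #   @users = User.all
  # end

  # Only build one page of objects per request instead.
  def index
    page = params.fetch(:page, 0).to_i
    @users = User.order(:id).limit(50).offset(page * 50)
  end
end

For simulating user traffic, any HTTP load tool pointed at the deployed app will reproduce the curve; the useful part is watching memory per endpoint rather than overall.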

Rails Server Memory Leak/Bloating Issue

We are running 2 Rails applications on a server with 4GB of RAM. Both use Rails 3.2.1, and whether run in development or production mode, they eat away RAM at incredible speed, consuming up to 1.07GB each day. Keeping the server running for just 4 days triggered all the memory alarms in monitoring, and we had just 98MB of RAM free.
We tried ActiveRecord optimizations related to bloating, but still no effect. Please help us figure out how we can trace which controller is at fault.
We are using a MySQL database and the WEBrick server.
Thanks!
This is incredibly hard to answer without looking into the project itself. Though I am quite sure you won't be using WEBrick in your target production setup (right?), so check whether it behaves the same under Passenger or whatever server you choose.
Also, without knowing the details of the project, I would suggest looking at features like PDF generation, CSV parsing, etc. I've seen a case where generating PDF files ate resources in a similar fashion, leaving something like 5MB of un-garbage-collected memory per run.
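If you want to measure rather than guess which controller is at fault, a small Rack middleware that logs the RSS delta per request is usually enough. This is a hand-rolled sketch rather than a specific gem, and it reads /proc, so it assumes a Linux server:

# lib/memory_logging_middleware.rb
# Logs the process resident set size (RSS) around every request so you can
# grep the log for the actions that grow the process the most.
class MemoryLoggingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    before = rss_kb
    status, headers, body = @app.call(env)
    delta = rss_kb - before
    Rails.logger.info("[memory] #{env['REQUEST_METHOD']} #{env['PATH_INFO']} rss=#{rss_kb}KB delta=#{delta}KB")
    [status, headers, body]
  end

  private

  # RSS in kilobytes, read from /proc (Linux only).
  def rss_kb
    File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i
  end
end

# config/application.rb:
#   require Rails.root.join("lib", "memory_logging_middleware")
#   config.middleware.use MemoryLoggingMiddleware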
Good luck.

Should I use thin or unicorn on Heroku Cedar

I recently 'upgraded' my app to the Cedar platform on Heroku. By default, I am using Thin as the web server, but I have always been tempted to use Unicorn for concurrency and to make my dyno dollar go farther. Still, I worry there are some gotchas in using something other than Thin.
Does anyone have real-life experience with this decision?
Notes:
This was the article that got me excited about the idea: http://michaelvanrooijen.com/articles/2011/06/01-more-concurrency-on-a-single-heroku-dyno-with-the-new-celadon-cedar-stack/
I know every app is different, and that you should build a staging env and try it for yourself. But if it looks great in your staging env, are there any pitfalls that we should know about?
I want to know reasons why everyone shouldn't do this
Update -- 3 months later.
I have been using Unicorn in production for 3 months, and I have been very pleased. I use 4 Unicorn workers per dyno.
One thing you do need to keep an eye out for is memory consumption and leakage. In effect, instead of having 512MB of memory per dyno, you have that divided by the number of Unicorn workers. But keeping that in mind, it has been a great cost saver.
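For anyone wanting to replicate this, the config that has held up is roughly the standard Heroku-style Unicorn setup sketched below. The worker count and timeout are values to tune per app, and the ActiveRecord hooks only matter if you use ActiveRecord:

# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 4)
timeout 15
preload_app true

before_fork do |server, worker|
  # Let the old master drain in-flight requests when Heroku sends TERM.
  Signal.trap "TERM" do
    puts "Unicorn master intercepting TERM and sending myself QUIT instead"
    Process.kill "QUIT", Process.pid
  end

  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap "TERM" do
    puts "Unicorn worker intercepting TERM and doing nothing; waiting for the master's QUIT"
  end

  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.establish_connection
end

# Procfile:
#   web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb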
No reason not to do it - I use Unicorn on Heroku with much success.
Heroku has just written a post about using Unicorn: https://blog.heroku.com/archives/2013/2/27/unicorn_rails
I'll try it now, it seems like it's the way to go, both for performance and cost saving.
If you use Thin, and your code doesn't clear requests very quickly, then you're in trouble - since Heroku uses random routing, requests will stack up on a blocked dyno even if there are free dynos. Using Unicorn seems better, if you can handle the memory hit, because it's less likely that all of your forks will get slow requests at the same time. It doesn't solve Heroku's random-routing problem, but it should help a lot.
Links and explanations in this answer:
https://stackoverflow.com/a/19965981/1233555

Rails app (mod_rails) hangs every few hours

I'm running a Rails app through Phusion Passenger (mod_rails) which will run smoothly for a while, then suddenly slow to a crawl (one or two requests per hour) and become unresponsive. CPU usage is low throughout the whole ordeal, although I'm not sure about memory.
Does anyone know where I should start to diagnose/fix the problem?
Update: restarting the app every now and then does fix the problem, although I'm looking for a more long-term solution. Memory usage gradually increases (initially ~30MB per instance, becomes 40MB after an hour, gets to 60 or 70MB by the time it crashes).
New Relic can show you combined memory usage. Engine Yard recommends tools like Rack::Bug, MemoryLogic or Oink. Here's a nice article on something similar that you might find useful.
If restarting the app cures the problem, looking at its resource usage would be a good place to start.
Sounds like you have a memory leak of some sort. If you'd like to bandaid the issue you can try setting the PassengerMaxRequests to something a bit lower until you figure out what's going on.
http://www.modrails.com/documentation/Users%20guide%20Apache.html#PassengerMaxRequests
This will restart your instances, individually, after they've served a set number of requests. You may have to fiddle with it to find the sweet spot where they are restarting automatically before they lock up.
Other tips are:
-Go through your plugins/gems and make sure they are up to date
-Check for heavy actions and requests where there is a lot of memory consumption (NewRelic is great for this)
-You may want to consider switching to REE as it has better garbage collection
Finally, you may want to set a cron job that looks at your currently running passenger instances and kills them if they are over a certain memory threshold. Passenger will handle restarting them.
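A sketch of that kind of cron job as a standalone Ruby script (nothing Passenger ships with; the 250MB threshold and the process-title pattern are assumptions to adjust for your setup):

#!/usr/bin/env ruby
# kill_bloated_passengers.rb - run from cron every few minutes.
# Finds Rails application processes whose RSS is over a threshold and
# TERMs them; Passenger spawns fresh instances as traffic requires.

THRESHOLD_KB = 250 * 1024 # 250MB, adjust for your app

`ps -eo pid,rss,command`.each_line do |line|
  pid, rss, command = line.split(" ", 3)
  next unless command && command =~ /Rails: /  # Passenger's app process title; adjust if needed
  next unless rss.to_i > THRESHOLD_KB

  puts "Killing bloated process #{pid} (#{rss.to_i / 1024}MB): #{command.strip}"
  Process.kill("TERM", pid.to_i)
end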

Mongrel hangs after several hours

I'm running into a problem in a Rails application.
After some hours, the application seems to start hanging, and I wasn't able to find where the problem was. There was nothing relevant in the log files, but when I tried to load the URL from a browser nothing happened (as if Mongrel accepted the request but wasn't able to respond).
What do you think I can test to understand where the problem is?
I might get voted down for dodging the question, but I recently moved from nginx + mongrel to mod_rails and have been really impressed. Moving to a much simpler setup will undoubtedly save me headaches in the future.
It was a really easy transition, I'd highly recommend it.
Are you sure the problem is caused by Mongrel? Have you tried running your application under WEBrick?
There are a few things you can check, but since you say there's nothing in the logs to indicate error, it sounds like you might be running into a bug when using the log rotation feature of the Logger class. It causes mongrel to lock up. Instead of relying on Logger to rotate your logs, consider using logrotate or some other external log rotation service.
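If the built-in rotation is the culprit, the change is small: keep Rails on a plain, non-rotating logger and let logrotate handle the file. A sketch, assuming the rotating logger was set up in your environment config (the exact file and constants depend on your Rails version; this is the Rails 1.x/2.x style of the Mongrel era):

# config/environment.rb, inside the Rails::Initializer.run do |config| block

# Built-in rotation like this is the Logger feature that can lock up Mongrel:
# config.logger = Logger.new("#{RAILS_ROOT}/log/#{RAILS_ENV}.log", "daily")

# Safer: a plain logger, with the file rotated externally by logrotate.
config.logger = Logger.new("#{RAILS_ROOT}/log/#{RAILS_ENV}.log")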
Does this happen at a set number of hours/days every time? How much RAM do you have?
I had this same problem. The couple of options I had narrowed it down to were MySQL-adapter related. I was running on Red Hat Enterprise Linux 4 (or 5), and the app would hang after a given amount of idle time.
One suggested solution was to compile the native MySQL bindings; I had been using the pure-Ruby ones.
The other was to set the timeout on the MySQL adapter higher than what the connection would idle out on. (I don't have the specific configuration recorded, but as I recall it was in environment.rb and it was some class variable in the MySQL adapter.)
I don't recall whether either of those solutions fixed it; we moved to Ubuntu shortly after that and haven't had a problem since.
Check the Mongrel FAQ:
http://mongrel.rubyforge.org/wiki/FAQ
From my experience, Mongrel hangs when:
-the log file gets too big (hundreds of megabytes in size); you have to set up log rotation
-the MySQL driver times out; you have to change the timeout settings of your MySQL driver by adding this to your environment.rb:
ActiveRecord::Base.verification_timeout = 14400
(this is further explained in the deployment section of the FAQ)
Unfortunately, Rails (and thereby Mongrel) using up too much memory over time and crashing is a known problem (50K+ Google hits for "Ruby, rails, crashing, memory"). The current Ruby interpreter sometimes simply fails to give memory back to the system: it may reuse the memory it already holds, but it won't give it up.
There are numerous schemes for monitoring, killing, and restarting Mongrel instances in a production environment, for example (choosing at random) rails monitor. Until the problem is fixed more decisively, one of these may be your best bet.
We have experienced this same issue. First off, install the mongrel_proctitle gem
http://github.com/rtomayko/mongrel_proctitle/tree/master
This gem/plugin will allow you to view the Mongrel processes via "ps", so you can see when a Mongrel is hung. An issue we have seen with Mongrel is that it will happily accept connections and enqueue them, then wedge itself. This plugin will help you see when a Mongrel has been wedged, but then you must use another monitoring app to actually restart a wedged Mongrel, something like Monit or God.
You might also want to consider putting a more balanced reverse proxy in front of your Mongrels, something like HAProxy, instead of nginx, Apache, or Lighttpd. With a setting of "maxconn 1" in HAProxy you can ensure that the queue is maintained by HAProxy rather than by Mongrel. The other reverse proxies (nginx, Apache, Lighttpd) only do round-robin, which means they can inadvertently load up your Mongrel queues.
My personal choice is God as it is much more flexible.
tl;dr Install this gem plugin and keep an eye on your Mongrels. Try Apache+Phusion Passenger.
