Performance of Rails app goes down as the log file grows - ruby-on-rails

I have a Rails app where the speed of the application drops significantly as the size of the log file increases. I currently have to back up and delete my log file frequently to prevent this. What is the best practice to avoid this?
Regards,
Pankaj

In a production environment, the best practice is to set up logrotate rules for those logs (preferably rotating daily).
We do this and have never had performance issues due to logs.
Here's a brief article on how to use it.
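As a concrete starting point, a minimal logrotate rule might look like the sketch below (the log path and two-week retention are assumptions; adjust them to your deployment). The copytruncate directive matters for Rails, which keeps its log file handle open and would otherwise keep writing to the already-rotated file:

# /etc/logrotate.d/myapp (hypothetical path)
/var/www/myapp/log/*.log {
  daily
  missingok
  notifempty
  rotate 14
  compress
  delaycompress
  copytruncate
}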

Related

Heroku memory issues using puma

I have checked my logs, and ever since I started using Puma as my web server on Heroku (I switched from Unicorn, which didn't have this issue) I have what appears to be a memory leak problem.
The server itself is idle and the logs show no requests, yet memory utilization on my web dynos keeps rising to the limit and then goes over quota. Any ideas or suggestions on how to look into this?
I cannot provide an answer, but I am researching the same issue. So far, the following two links have proved most educational to me:
https://github.com/puma/puma/issues/342. A possible workaround (though supposedly not vetted for Heroku production) is to use the puma_worker_killer gem: https://github.com/schneems/puma_worker_killer. Hope this helps.
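For what it's worth, wiring the gem up is short. This is a sketch following the gem's README (the 1024 MB total and 10-second check interval are illustrative values for a 1 GB dyno, not recommendations):

# config/puma.rb, with puma_worker_killer in the Gemfile;
# note the gem only works with Puma in cluster mode (workers > 0)
before_fork do
  require 'puma_worker_killer'

  PumaWorkerKiller.config do |config|
    config.ram           = 1024  # total RAM available to the dyno, in MB
    config.frequency     = 10    # seconds between memory checks
    config.percent_usage = 0.98  # cull the largest worker past 98% of ram
  end
  PumaWorkerKiller.start
end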
In the end I had to move to a dyno type with more RAM (Performance Large) to accommodate the memory caching that Ruby/Rails was doing. I couldn't find a way to stop it from peaking around 2.5 GB, but it did indeed level off after that.
I was running into this too. In the fall of 2019, Heroku added a config var to new apps, but it has to be added manually to apps created before then:
MALLOC_ARENA_MAX=2
They have a write up about it here:
https://devcenter.heroku.com/changelog-items/1683
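On an existing app you can set it yourself from the Heroku CLI (this sets the environment variable and restarts the app):

heroku config:set MALLOC_ARENA_MAX=2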
You can also try using jemalloc: https://www.speedshop.co/2017/12/04/malloc-doubles-ruby-memory.html

How to examine ruby processes for performance problems?

I have an issue with a Rails worker that is consuming extreme amounts of processor time. Oddly, I have not been able to trace it down so far. I've tried New Relic, but I can't seem to track the problem down within the worker itself.
How can I profile and explore performance issues in enough detail to find the precise location of a problem like this?
Have you tried ruby-prof? It's easy to use and seems to return accurate numbers. (Of course, it's hard to argue with its numbers when profiling an app that's busy.)
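To give a flavor of it, a minimal ruby-prof run looks something like the sketch below (the API differs slightly between gem versions, and expensive_work is a hypothetical stand-in for the code you suspect):

require 'ruby-prof'

result = RubyProf.profile do
  expensive_work  # hypothetical placeholder for the slow code path
end

# flat report: time spent per method, sorted by self time
RubyProf::FlatPrinter.new(result).print($stdout)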

Rails Server Memory Leak/Bloating Issue

We are running two Rails applications on a server with 4 GB of RAM. Both use Rails 3.2.1, and whether run in development or production mode they eat RAM at incredible speed, consuming up to 1.07 GB each per day. Keeping the server running for just four days triggered all the memory alarms in our monitoring, leaving only 98 MB of RAM free.
We tried ActiveRecord optimizations related to bloat, but with no effect. Please help us figure out how to trace which controller is at fault.
We are using a MySQL database and the WEBrick server.
Thanks!
This is incredibly hard to answer without looking into the project itself. Though I am quite sure you won't be using WEBrick in your target production build (right?), so check whether it behaves the same under Passenger or whatever server you choose.
Also, without knowing the details of the project, I would suggest looking at features like PDF generation and CSV parsing. I've seen a case where generating PDF files ate resources in a similar fashion, leaving around 5 MB of un-garbage-collected memory on each run.
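If you want to see which requests are growing the processes, one gem-free option is a small Rack middleware that logs resident memory around each request. This is a Linux-only sketch (it reads /proc/self/status), and the class name and placement are made up for illustration:

# config/application.rb (or an initializer): config.middleware.use MemoryLogger
class MemoryLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    before = rss_kb
    response = @app.call(env)
    Rails.logger.info "RSS #{before} -> #{rss_kb} kB for #{env['PATH_INFO']}"
    response
  end

  private

  # resident set size in kB, read from Linux procfs
  def rss_kb
    File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i
  end
end

Requests whose before/after delta keeps growing point you at the controller to inspect.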
Good luck.

Does having Rails logging enabled in a production environment cause a performance hit?

I hope this question isn't too vague, but does logging in a production environment cause a performance hit? In addition to the traditional production.log logging, we record a couple of additional things inside begin/rescue blocks to help us debug issues.
In our production.rb file, our settings are:
config.log_level = :info
config.active_support.deprecation = :log
And we also have some:
TRACKER_LOG.warn xml_response_hash
These files can become quite large (1 or 2 GB each), and our website receives a couple million page views a month. Could minimizing our use of logs in production help with performance?
Logging does impact on performance, but it can still be useful in production if it allows the people running the service to diagnose problems without taking the service down.
That said, a couple of million hits a month is less than 100k per day (on average) and that shouldn't be too much of a worry. Similarly, a few GB of log files should not be a worry provided the service is deployed sanely — and provided you're using a log rotation strategy of course — since disk space is pretty cheap. Thus at current levels, I'd suggest you should be OK. Keep an eye on it though; if traffic suddenly spikes (e.g., to 1M hits in a normal day) you could have problems. Document this! You don't want the production people to be surprised by these sorts of things.
Consider making the extra logging conditional on a flag that you can disable or enable at runtime so that you only collect anything large if you're looking for it; with usual volumes of logging data there's a good chance that you'll only look for problems occasionally anyway.
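One sketch of such a flag, assuming the TRACKER_LOG from the question: gate the extra write on a file you can create or remove on the server at runtime, no redeploy needed (the path and method name are made up for illustration):

EXTRA_LOGGING_FLAG = Rails.root.join('tmp', 'extra_logging')

def track_response(xml_response_hash)
  # only pay the serialization and IO cost while someone is investigating
  return unless File.exist?(EXTRA_LOGGING_FLAG)

  TRACKER_LOG.warn xml_response_hash
end

Then touching tmp/extra_logging turns the extra output on, and deleting the file turns it off again.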

Rails app (mod_rails) hangs every few hours

I'm running a Rails app through Phusion Passenger (mod_rails). It runs smoothly for a while, then suddenly slows to a crawl (one or two requests per hour) and becomes unresponsive. CPU usage is low throughout the whole ordeal, although I'm not sure about memory.
Does anyone know where I should start to diagnose/fix the problem?
Update: restarting the app every now and then does fix the problem, although I'm looking for a longer-term solution. Memory usage gradually increases (initially ~30 MB per instance, about 40 MB after an hour, and 60-70 MB by the time it crashes).
New Relic can show you combined memory usage. Engine Yard recommends tools like Rack::Bug, MemoryLogic or Oink. Here's a nice article on something similar that you might find useful.
If restarting the app cures the problem, looking at its resource usage would be a good place to start.
Sounds like you have a memory leak of some sort. If you'd like to band-aid the issue, you can try setting PassengerMaxRequests to something a bit lower until you figure out what's going on.
http://www.modrails.com/documentation/Users%20guide%20Apache.html#PassengerMaxRequests
This will restart your instances, individually, after they've served a set number of requests. You may have to fiddle with it to find the sweet spot where they are restarting automatically before they lock up.
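For example, in the Apache config that serves the app (500 is an illustrative starting value, not a recommendation):

# inside the virtual host that serves the app
PassengerMaxRequests 500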
Other tips:
- Go through your plugins/gems and make sure they are up to date
- Check for heavy actions and requests where there is a lot of memory consumption (New Relic is great for this)
- You may want to consider switching to REE (Ruby Enterprise Edition), as it has better garbage collection
Finally, you may want to set a cron job that looks at your currently running passenger instances and kills them if they are over a certain memory threshold. Passenger will handle restarting them.
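A rough sketch of what that cron script could look like, in Ruby (Linux-only, since it reads procfs; the 300 MB limit and matching on a process name of ruby are assumptions, and parsing passenger-memory-stats output would be a more targeted alternative):

#!/usr/bin/env ruby
# Hypothetical cron script: TERM any ruby process whose resident memory
# exceeds a threshold; Passenger will spawn fresh instances as needed.
LIMIT_KB = 300 * 1024  # 300 MB, illustrative

Dir.glob('/proc/[0-9]*/status').each do |path|
  status = File.read(path) rescue next
  next unless status[/\AName:\s*ruby/]  # crude match for Passenger's Ruby workers

  rss_kb = status[/VmRSS:\s+(\d+)/, 1].to_i
  next unless rss_kb > LIMIT_KB

  pid = path[%r{/proc/(\d+)/}, 1].to_i
  puts "killing PID #{pid} at #{rss_kb} kB"
  Process.kill('TERM', pid) rescue nil
end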
