I am running macOS 12.1. I have some Rails 6 apps and they behave normally in terms of CPU and memory usage.
But one Rails 5.2.6 app has a strange issue. When I start a rails server or run rspec, CPU usage is normal, i.e. low when not much is happening and higher during large test runs or heavy server use.
However, when I use rails console, CPU usage climbs almost immediately to over 100%, and pretty soon the Mac fan starts to run. It stays this way even after I exit the console; I have to run killall ruby to stop all Ruby processes. I am monitoring this with top -o cpu. Is this a known issue with Rails 5.2.6? Is there a way to stop it happening?
I was experiencing the same problem for quite a while. I eventually noticed that it's a problem with the listen gem. Updating it is enough to fix it.
Update it to the latest version possible. If I remember correctly, it has to be at least v3.3 for the problem to go away.
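For reference, a minimal sketch of that change, assuming the stock Rails 5.2 Gemfile (which pins listen below 3.2 in the development group):

    # Gemfile -- raise the listen constraint so Bundler can resolve >= 3.3
    group :development do
      gem 'listen', '~> 3.3'
    end

followed by bundle update listen.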
I suggest checking what gets initialized when you run rails c. You should definitely profile your startup time. Start by checking your config/initializers folder, and also look at Profiling Rails Boot Time and how to Speed up Rails boot time.
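As a rough sketch of that first step (assuming a Rails version new enough to emit the load_config_initializer.railties notification), you can time each file in config/initializers like this:

    # config/application.rb, after the framework requires -- print how long
    # each file in config/initializers takes to load when you run rails c.
    ActiveSupport::Notifications.subscribe('load_config_initializer.railties') do |*args|
      event = ActiveSupport::Notifications::Event.new(*args)
      puts format('%-70s %8.1f ms', event.payload[:initializer], event.duration)
    end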
I don't believe this is specific to 5.2.6, but make sure you're on at least Ruby ~2.7.0; the Ruby version also affects performance.
Related
We have completed upgrading our app from Rails 4.2 to 5.2. When we run load testing on the 5.2 version it can only handle half the load of the 4.2 version. In looking at NewRelic stats during the load tests it seems to be slower everywhere - pretty much every request, ActiveRecord calls, redis calls, ruby, etc. We have confirmed it is not related to other upgrades that happened in addition - ruby upgrade, upgrading pg gem, or upgrading puma. While researching, the only performance issues I have found related to the upgrade have been fixed.
Has anyone run into something similar or have pointers on where to look?
What we have tried so far:
1. Check non-rails related upgrades that happened at the same time:
- Upgrade 4.2 branch to same version of ruby to see if that has any impact (no impact)
- Downgrade puma and pg gems in Rails 5 branch (no impact)
2. Examine performance traces for slower transactions and DB queries. Remove the slowest interactions from the load test to see if the overall slowness continues (it does).
3. Test if slowness appears in Rails 5.1 (it does).
What we are planning to try:
1. Test if slowness appears in Rails 5.0.
2. See if the slowness can be detected in single-user use rather than under a load test.
3. Use https://github.com/tleish/ruby-prof-rails to see if we can get more statistics to examine.
4. Downgrade all gems except the ones we absolutely need for the Rails 5 upgrade and see if problem still exists.
This ended up being a combination of Rack::Timeout, Heroku and Puma. Under heavy load, we would sometimes hit our Rack::Timeout value of 28 seconds. For some reason after the upgrade, the 2 seconds between the Rack::Timeout value and Heroku's 30-second router timeout (H12) was not enough. As a result, requests were getting killed by Heroku before Rack::Timeout fired, which caused a cascade effect in Puma that made a ton of other requests on the same server also time out. We fixed it by lowering our Rack::Timeout value to 25 seconds and everything worked.
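For reference, a minimal sketch of the resulting configuration with the rack-timeout gem (exact setup varies by gem version; newer versions also read a RACK_TIMEOUT_SERVICE_TIMEOUT environment variable, which is preferable if the gem's Railtie already inserts the middleware for you):

    # config/initializers/rack_timeout.rb -- keep our service timeout
    # comfortably below Heroku's 30-second router timeout (H12),
    # so Rack::Timeout fires first.
    Rails.application.config.middleware.insert_before(
      Rack::Runtime,
      Rack::Timeout,
      service_timeout: 25 # seconds
    )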
Has anyone experienced this? We upgraded a project from Rails 5.2 to Rails 6.0.0 and after this, memory consumption skyrocketed... In the release candidate environment it works fine, but in production the container dies because of memory usage... The instances we run on 5.2 do just fine with 1 GB of RAM, but the instances we test on Rails 6.0.0 die immediately, even if we give them 4 GB of RAM.
We already tested https://github.com/schneems/derailed_benchmarks, but requiring the gems only accounted for about 10 MiB in total.
We have some heavy queries but we don't know why this happens in Rails 6 and not in Rails 5.
This hugely depends on what gems you have.
Also, I would start by using a monitoring tool like Scout or New Relic to understand where/how the leak is happening.
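Since derailed_benchmarks is already in the project, it may also be worth running its runtime tasks rather than just bundle:mem, which only measures memory at require time. A sketch, with task names as documented in the derailed_benchmarks README:

    # Gemfile (the development group is enough for local runs)
    gem 'derailed_benchmarks', group: :development

    # Then from the shell:
    #   # repeatedly hits PATH_TO_HIT and charts RSS growth over time
    #   bundle exec derailed exec perf:mem_over_time
    #   # shows which objects are allocated/retained per request
    #   bundle exec derailed exec perf:objects

If RSS keeps climbing without bound under perf:mem_over_time, that points at a leak rather than plain bloat.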
I have a Rails application that requires a bunch of environment stuff to get set up, and right now the easiest way for me to do it is to run a batch file to configure the environment and then launch the server from the command prompt. (Perhaps one day I will bite the bullet and transcribe all of the various environment variables into the project config, but I'd rather not...)
But when I do this, I occasionally manage to crash conhost.exe! It does not seem like I should be able to do this. Stranger still, it seems to happen most often when I access certain records in the application. I can't imagine it could crash just because there was too much console output???
I am also getting msvcrt-ruby.dll crashes, although I may have resolved those by doing some gem finagling. The conhost issue may or may not be related, I'm not sure. But if I launch the server from within RadRails, I don't seem to get these issues (the app doesn't completely work because of the missing environment stuff, but it seems much more stable).
Technicals: Windows 7, Rails 2.3.5, Ruby 1.93, Mongrel 1.2.0pre, uh, not sure what else...
Thoughts?
Why is Ruby, and Ruby on Rails (1.8.6 One Click Installer, local database) so ruddy slow on Windows?
ruby script/server - 30 seconds
rake test - 45 seconds
etc.
Yet, when I pop over to a much slower linux box, it's virtually instantaneous. I've checked everything - no significant CPU processes running, no network issues... and so on.
Heck, I'd be happy with just a verbose output that at least told me where it was breaking down. Any suggestions?
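One way to get that kind of verbose output is to wrap Kernel#require with a timer very early in boot. A crude, stdlib-only sketch (the 0.25 s threshold is arbitrary), which should work even on 1.8.6:

    # config/boot.rb (very top) -- log any require that takes longer than
    # 0.25 s, to see where startup time actually goes.
    require 'benchmark'

    module Kernel
      alias_method :require_without_timing, :require

      def require(name)
        result = nil
        seconds = Benchmark.realtime { result = require_without_timing(name) }
        $stderr.puts("slow require (%.2fs): %s" % [seconds, name]) if seconds > 0.25
        result
      end
    end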
In general, Ruby's MRI interpreter is just not optimized for speed on Windows. You might also be running it in development mode on Windows vs. production mode on the other machines; Rails runs much slower in development mode since it reloads all your classes on every request.
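A quick way to test the development-mode theory (Rails 2.x-era configuration shown, matching the 1.8.6 setup in the question) is to turn class caching on temporarily, or simply boot with script/server -e production:

    # config/environments/development.rb (Rails 2.x) -- temporarily cache
    # classes so files are not re-stat'ed and reloaded on every request;
    # remember to set this back, or you'll have to restart the server after
    # every code change.
    config.cache_classes = true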
1.8.6 is a very old ruby version. Released almost 3 years ago. You should strongly consider upgrading to 1.9 (or at least 1.8.7). Or switching to JRuby. All of these options will likely lead to a significant performance improvement.
1.8.7 should be fully compatible with 1.8.6. 1.9 has a completely new interpreter that runs 2.5 times faster (though it has a tendency to occasionally crash on Windows). JRuby may be the ideal solution for you, since you can run it in either 1.8 or 1.9 compatibility mode and it is very stable, but it does not support gems with C extensions and requires a different database adapter.
One last option would be to try running Rails inside a VMware VM with CentOS or another Linux distribution.
The reason is that file stats on Windows are dreadfully slow, and, since Ruby is developed on Linux (and optimized for Linux), there hasn't been much work to make it faster on Windows.
Using the rubyinstaller.org builds (1.8.6 or 1.9.x) can make it faster; I'd recommend 1.8.6, since 1.9 has some slowdowns of its own.
If you're looking to get really aggressive, you can try my faster_gem_script gem, which tries to cache the heck out of require-based lookups and thus speed things up. Do it with a scratch version of Ruby, though :)
Unfortunately, JRuby also isn't known for its exceedingly fast lookups. Hopefully this situation will change someday. Until then, my faster_gem_script and faster_require are the only ways I know of to try to get some speedup.
For a speedup you could try my loader speeder-upper (helps Rails run faster on Windows): https://github.com/rdp/faster_require
Also check out spork, which works on Windows, and JRuby also works well.
-rp
UPDATE: Thanks (in part) to some really great work on Fenix by Luis Lavena, Ruby 1.9.3-p327 is much, much faster on Windows. rake used to take 110+ seconds to execute on 1.9.3-p125, and now takes ~20 seconds on p327. Rails is finally usable on Windows!!
Use RubyInstaller to install.
I like taking this approach:
slow rails stack
In my case it's:
finisher_hook: 22.463 sec
That is the culprit.
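One way to get that kind of per-initializer timing output is a small patch around the initializer runner. A rough sketch, assuming Ruby 2.0+ (for Module#prepend) and the current shape of Rails::Initializable::Initializer:

    # config/application.rb, just after the framework requires -- print every
    # initializer that takes longer than half a second, producing output like
    # "finisher_hook    22.463 sec".
    require 'benchmark'

    module InitializerTiming
      def run(*args)
        result = nil
        elapsed = Benchmark.realtime { result = super }
        puts format('%-50s %.3f sec', name, elapsed) if elapsed > 0.5
        result
      end
    end

    Rails::Initializable::Initializer.prepend(InitializerTiming)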
We are trying to update a Rails Server to release 5.1.
The server starts fine, but on the first request it goes completely dead and has to be killed with signal 9.
Doesn't matter if it's Puma or WEBrick.
Doesn't matter if it's 5.1.0 or 5.1.7.
Doesn't matter if it's development or production mode.
Eventually I saw the process size was 90GB and growing!
I've tried rbtrace, but struggled to get anything meaningful out of it.
I'm on macOS, so strace isn't available, and I've struggled to get dtrace or dtruss to work or produce anything meaningful.
So I'm looking for a way to get this Rails server to tell me what its problem is....
Let me know what additional information is salient.
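One stdlib-only trick for this kind of runaway process is to install a signal handler early in boot that dumps every thread's backtrace, then poke the process while it is spinning. A sketch (TTIN is chosen because Puma already claims USR1/USR2; the signal choice is otherwise arbitrary):

    # config/boot.rb (or very early in config/application.rb) -- dump all
    # thread backtraces to stderr on SIGTTIN, so a hung or spinning server can
    # be inspected from another terminal with: kill -TTIN <pid>
    Signal.trap('TTIN') do
      Thread.list.each do |thread|
        $stderr.puts "--- thread #{thread.object_id} (#{thread.status.inspect}) ---"
        $stderr.puts((thread.backtrace || ['<no backtrace>']).join("\n"))
      end
    end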
After quite a long process, I found a solution that didn't so much find the source of the issue as provide a process to work around it.
First off, I used
rails app:update
And accepted all of the overwrites. Then, using git, I walked through all of the removed code from my config files and restored just the required sections (like config/routes.rb and the ActionMailer config, for example).
Application then started right up, no issue.
This also led me to
http://railsdiff.org/5.0.7.2/5.1.0
Which is pretty critical for Rails upgrading. This is well worth consuming:
https://github.com/rails/rails/issues/31377#issuecomment-350422347
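One more note on the rails app:update step above: once the app boots cleanly, it's worth explicitly opting in to the new framework defaults (a sketch, with a hypothetical application module name; the generated new_framework_defaults initializer lets you flip the individual settings one at a time first):

    # config/application.rb
    module MyApp # hypothetical application module name
      class Application < Rails::Application
        # Opt in to the Rails 5.1 default behaviours once everything boots cleanly.
        config.load_defaults 5.1
      end
    end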