I've upgraded several apps from Rails 3.0 to Rails 3.1. They are running on Thin with the multithreaded option enabled, and they are all configured with threadsafe!. This worked great in Rails 3.0, but in 3.1, after a few requests, things start slowing down. After a few more requests, the page hangs for N seconds (where N is my db timeout) and I get this error:
ActiveRecord::ConnectionTimeoutError (could not obtain a database connection within 5 seconds. The max pool size is currently 5; consider increasing it.):
Those default values of 5 and 5 have been fine in the past and should still be fine. Increasing them does not fix the problem either, though it takes a little longer to manifest. I should stress that during these tests I have been the only one accessing the apps. When I drop Thin down to single-threaded mode, everything works as expected.
This occurs with both MySQL and SQLite, on both Ruby 1.8.7 and Ruby 1.9.2. Thin is unchanged. The only variable I can find is the upgrade to Rails 3.1. Is there anything I can do, or is this a regression in Rails?
Looks like this is a bug. There's a patch and pull request for it. Hopefully Rails will merge it in. Until then, multi-threaded Rails apps won't work on Thin.
https://github.com/rails/rails/pull/1670
I have a problem with my Rails application.
I am on Rails 3.2.22 and Ruby 2.2.5, connecting to MongoDB 2.6.
The problem is that I see a huge difference in performance on simple and even more complex queries.
For example:
I run rails c development and then execute my function (quite complex); it responds after 30 seconds.
I run rails c production and perform the same function as before; it responds after 6 minutes 30 seconds, 13 times slower.
So I tried copy-pasting the 'development' configuration into 'production', but the result remains the same; same for the Gemfile.
I looked through all the code of the project and found no difference between the production and development environments.
Do you know what differs at the heart of Rails between these two environments? Has anyone ever encountered this problem?
Importantly, I am of course connecting to the same database in both cases.
Thanks in advance.
You have not specified your mongo (Ruby driver) and mongoid versions; if they are old, you may need to upgrade and/or adjust the code to your environment.
To determine whether the slowdown happens in the database or in your application, use command monitoring as described here: https://docs.mongodb.com/ruby-driver/current/tutorials/ruby-driver-monitoring/#command-monitoring
Look at the log entries corresponding to your queries and note how long they take in each environment. By implementing a custom event subscriber you can also save the commands being sent and verify that they are identical between the two environments.
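As a rough sketch, a subscriber along these lines would log every command and its duration (CommandLogger is an illustrative name of mine; the subscribe call is the one from the monitoring tutorial linked above):

    # Log every MongoDB command with its duration.
    class CommandLogger
      def started(event)
        # Save the exact command sent, so the two environments can be diffed.
        Rails.logger.info("MONGO started #{event.command_name}: #{event.command.inspect}")
      end

      def succeeded(event)
        Rails.logger.info("MONGO succeeded #{event.command_name} in #{event.duration}s")
      end

      def failed(event)
        Rails.logger.info("MONGO failed #{event.command_name} in #{event.duration}s")
      end
    end

    Mongo::Monitoring::Global.subscribe(Mongo::Monitoring::COMMAND, CommandLogger.new)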
Solved it!
When I saw the number of requests in production, I immediately thought of the query cache.
I found the 'identity_map_enabled' setting for Mongoid, changed it to true, and presto, magic!
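For anyone else landing here: the identity map can also be enabled from Ruby; a minimal sketch, assuming a Mongoid 3.x era app (the initializer file name is my choice):

    # config/initializers/mongoid.rb
    # Enable Mongoid's identity map so repeated loads of the same document
    # are served from memory instead of re-querying the database.
    Mongoid.identity_map_enabled = true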
Recently I upgraded our EC2 instances to c3.large in order to run more Unicorn workers to handle increased traffic to our site (6 workers per machine, 2 EC2 instances).
When I did this, I started seeing duplicate records being created! It looks like two Unicorn workers are somehow attempting to process the same request.
In my nginx logs I see ONE [POST] request to a particular Rails controller, yet two records are being created in the database. Obviously I'm hitting some kind of race condition, but I have no idea how to debug it. It doesn't happen all the time, but when it does it's frustrating. I also noticed that the two database records are created within 5 seconds of each other. (And yes, I disable the submit button when the user clicks it.)
Unicorn 4.8.2
Ruby 1.9.3
Rails 3.2.14
Any help would be great! Thanks!
Although it may not fix your problem, upgrade to Ruby 2.0.0. There are some blog posts mentioning that it is a good idea to upgrade to Ruby 2.0.0 if you are using Unicorn.
For example: https://www.digitalocean.com/community/articles/how-to-optimize-unicorn-workers-in-a-ruby-on-rails-app
Specifically:
If you are using Ruby 1.9, you should seriously consider switching to Ruby 2.0. To understand why, we need to understand a little bit about forking.

Forking and Copy-on-Write (CoW)

When a child process is forked, it is an exact copy of the parent process. However, the actual physical memory need not be copied right away: since they are exact copies, both child and parent processes can share the same physical memory. Only when a write is made is the child process copied into its own physical memory.

So how does this relate to Ruby 1.9/2.0 and Unicorn? Recall that Unicorn uses forking. In theory, the operating system would be able to take advantage of CoW. Unfortunately, Ruby 1.9 does not make this possible. More accurately, the garbage collection implementation of Ruby 1.9 does not make this possible. An extremely simplified version is this: when the garbage collector of Ruby 1.9 kicks in, a write is made, rendering CoW useless. Without going into too much detail, it suffices to say that the garbage collector of Ruby 2.0 fixes this, and we can now exploit CoW.
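One caveat worth adding: Unicorn only gets a chance to share memory via CoW if the application is loaded in the master process before forking. A minimal sketch of a config/unicorn.rb along those lines (the worker count and hooks are illustrative, not from the article):

    # config/unicorn.rb -- illustrative sketch
    worker_processes 6    # matches the 6 workers per machine above
    preload_app true      # load the app in the master so forked workers can share memory via CoW

    before_fork do |server, worker|
      # Disconnect in the master so database connections are not shared across forks.
      ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
    end

    after_fork do |server, worker|
      # Each worker establishes its own connection after forking.
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end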
Running a pretty sizable rails app, we've recently gotten around to upgrading it to rails 3.
Our stack is ruby-1.9.3p484, rails 3.2.16 and passenger 4.0.23 running on top of apache.
After throwing some traffic at a couple of our machines, we started noticing a few really strange errors coming down.
Things like random methods not being defined on objects that would obviously have them, instance variables being nil inside AR associations, and objects just being randomly replaced with 'false'. Just all around strange behavior.
Inspecting apache's logs gave us another bit of info, namely that as these errors were coming in, more often than not, their respective processes would keel over as well, on random bits of the app.
Sometimes it would be a Ruby node getting passed in as null, other times some random string overflowing; just random stuff getting mangled.
None of this happened during testing, so the only 'reliable' way of reproducing this thus far has been to just throw traffic at the respective machines and see when / if they start exhibiting this behavior.
Having gone through all of this, here's a list of things we've ruled out up to now:
Passenger's out-of-band garbage collection
rails 3 itself ( apparently we'd been getting these before as well, but they were far enough apart to not set off any alarms )
serialization / shoving things in and out of memcached
libxml - there were some reports of version 2.5.0 causing memory corruption; upgrading to 2.7.0 didn't really make a difference
turning off prelinking (this can cause memory corruption, per https://www.ruby-forum.com/topic/205897)
Returning the GC settings to stock seems to have alleviated the problem, but we don't really have anything conclusive in that regard. It would seem though that more collections result in a lower occurrence rate for the issue.
Any thoughts on what might be causing this, or anything we could use to help pinpoint the issue?
I've had 1.9.3-p484 segfault at_exit on test runs as well, but I haven't looked into it yet. In my case it seems to be triggered by certain dependencies of the test suite.
We've also had problems with another project that ended up abandoning its port to Rails 3 and sticking with 2.3 ):
Have you tried running the app outside of Apache / Passenger?
I work with several Rails apps, some on Rails 3.2/Ruby 2.0, and some on Rails 2.3/Ruby 1.8.7.
What they have in common is that, as they've grown and added more dependencies/gems, they take longer and longer to start. Development, Test, Production, console, it doesn't matter; some take 60+ seconds.
What is the preferred way to, first, profile for what is causing load times to be so slow, and second, improve the load times?
There are a few things that can cause this.
Too many GC passes and general VM shortcomings - see this answer for a comprehensive explanation. Ruby <2.0 has some really slow bits that can dramatically increase load times; compiling Ruby with the Falcon or railsexpress patches can help massively. All versions of MRI Ruby use default GC settings that are inappropriate for Rails apps.
Many legacy gems that have to be iterated over in order to load files. If you're using bundler, try bundle clean. If you're using RVM, you could try creating a fresh gemset.
As far as profiling goes, you can use ruby-prof to profile what happens when you boot your app. You can wrap config/environment.rb in a ruby-prof block, then use that to generate profile reports of a boot cycle with something like rails r ''. This can help you track down where you're spending the bulk of your time in boot. You can profile individual sections, too, like the bundler setup in boot.rb, or the #initialize! call in environment.rb.
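A rough sketch of that wrapping (the output file name is my choice, and MyApp stands in for your application module):

    # config/environment.rb -- sketch; assumes the ruby-prof gem is available
    require 'ruby-prof'

    result = RubyProf.profile do
      # The app's normal boot work goes here; for a Rails 3 app this is
      # the line environment.rb already contains.
      MyApp::Application.initialize!
    end

    # Write a flat profile of the boot cycle, hiding entries under 1%.
    printer = RubyProf::FlatPrinter.new(result)
    File.open('boot_profile.txt', 'w') { |f| printer.print(f, :min_percent => 1) }

Boot the app once (e.g. with rails r '' as mentioned above) and boot_profile.txt will show where the time goes.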
Something you might not be considering is DNS timeouts. If your app is performing DNS lookups on boot, which it is unable to resolve, these can block the process for $timeout seconds (which might be as high as 30 in some cases!). You might audit the app for those, as well.
Ryan has a good tutorial about speeding up tests, console, rake tasks: http://railscasts.com/episodes/412-fast-rails-commands?view=asciicast
I have checked every method there and found "spring" the best. Just run tasks like:
$ spring rspec
The time for your first run with spring will be the same as before, but the second and later runs will be much faster.
Also, from my experience, there will be times when you need to stop the spring server (spring stop) and restart it after a weird error, but that is rare.
For ruby 2 apps, try zeus - https://github.com/burke/zeus
1.8 apps seem to boot much faster than 1.9, spork might help? http://railscasts.com/episodes/285-spork
I'm developing a new site based on Ruby on Rails 3 beta. I knew this might be a bad idea considering it's just a beta, but I still thought it might work.
Now, though, I'm having HUGE problems with Rails consuming large amounts of memory.
For my application today it consumes about 10 MB per request and doesn't seem to release it either. So I thought this might be because of bloat in my application, and thus created a test app just to compare.
For my test app I just generated a model with a scaffold and then created about 20 records on this model.
I then went to the index page and hit refresh, and I could immediately see memory taking off! Less than my app, but still about 1-3 MB per request.
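For what it's worth, one way to quantify that growth per request, rather than eyeballing a process monitor, is a tiny Rack middleware; a sketch, with MemoryLogger being my own name:

    # Logs the process's resident set size after each request.
    class MemoryLogger
      def initialize(app)
        @app = app
      end

      def call(env)
        response = @app.call(env)
        rss_kb = `ps -o rss= -p #{Process.pid}`.to_i   # works on OS X and Linux
        Rails.logger.info("RSS after #{env['PATH_INFO']}: #{rss_kb} KB")
        response
      end
    end

Register it with config.middleware.use MemoryLogger in config/application.rb.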
I'm working on OS X Leopard, with Ruby 1.8.7, Rails 3.0.0.beta and a SQLite db for development.
Does anyone recognize my problem?
I would really appreciate some help here. :/
Thanks!
Well, you should consider how a production Rails app would be served. For example, class caching (the cache_classes setting) is typically enabled for the production environment, and you should also compare performance with your app running under Passenger (Apache or Nginx).
I do believe there is an easy means to force Passenger to play nicely in dev mode as well.
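A quick way to check whether class reloading accounts for the growth is to temporarily enable class caching in development; a sketch, with MyApp standing in for your application module:

    # config/environments/development.rb -- temporary, for comparison only
    MyApp::Application.configure do
      # Cache classes as production does, so code is not reloaded on each request.
      config.cache_classes = true
    end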
There were some memory leakage issues in the Rails 3 betas. Is there a reason you're not on 3.0.6?
Edit: D'oh, just saw the date this was asked.