My Rails 5 development server with Puma keeps freezing and hanging when my separate frontend app sends multiple requests at once to the Rails API. There is no error; it just hangs on the POST requests. When I try to stop the server with CTRL+C, nothing happens, so I have to kill the process on the port manually.
I've tried setting config.eager_load = true in development.rb. I've tried adding config.allow_concurrency in application.rb. I've Googled relentlessly to no avail. I am sending around 5 requests concurrently from the frontend, so I believe that volume of requests is causing it, but I don't know for sure.
Has anyone else experienced this, or does anyone have an idea of what needs to be done here? I can usually get all the requests back to the frontend successfully 3-4 times, and then the server just freezes.
It especially occurs after I change any one line of code in any file in the project while the server is running.
It's been nearly 2 years but I finally happened to stumble upon what had been causing my issue.
Basically, it boiled down to a method in my code not being thread-safe. Since my current_user variable was only accessible from my controllers, I had a before_action on my base controller that assigned the current user to User.current, so that the current user was available globally via User.current rather than just in my controllers.
So PLEASE make sure you're not dynamically updating classes like this in your controllers. It is not thread-safe. I ended up following this thread-safe solution instead for my particular case: https://stackoverflow.com/a/2513456/7629239
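For illustration, the difference looks roughly like this (the storage shown is an example; the linked answer describes a thread-local approach along these lines):

    # NOT thread-safe: a class-level attribute (e.g. cattr_accessor :current)
    # is shared by every request thread in the Puma process, so concurrent
    # requests overwrite each other's user.
    #
    # Thread-safe alternative: keep the value in thread-local storage, so
    # each request thread sees only its own user.
    class User < ApplicationRecord
      def self.current
        Thread.current[:current_user]
      end

      def self.current=(user)
        Thread.current[:current_user] = user
      end
    end

    class ApplicationController < ActionController::Base
      before_action { User.current = current_user }
    end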
What is your Puma configuration? How many threads and workers (Puma workers, not Rails workers)?
Ensure that Puma has enough threads and that your DB pool is large enough. Changing a line of code should not exhaust your server's resources. Are you using a watcher like Watchman?
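For reference, a minimal sketch of what those settings look like; the numbers are illustrative, not recommendations:

    # config/puma.rb
    workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))    # Puma worker processes
    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
    threads max_threads, max_threads                    # min, max threads per worker

    # And in config/database.yml, make the pool at least as large as the
    # thread count so every request thread can check out a connection:
    #   pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>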
I have a very weird situation: I have a system where a client app (Client) makes an HTTP GET call to my Rails server, and that controller does some handling and then needs to make a separate call to the Client via a different pathway (i.e. it actually goes via Rabbit to a proxy and the proxy calls the Client). I can't change the pathway for that different call and I can't change the Client at all (it's a 3rd party system).
However, the issue is this: the call via the different pathway fails UNLESS the HTTP GET from the Client has completed.
So I'm trying to figure out: is there a way to have Rails finish the HTTP GET response and then make this additional call?
I've tried:
1) after_filter: this doesn't work because the after filter is apparently still within the request/response cycle, so the TCP/HTTP response back to the Client hasn't completed.
2) enqueuing a worker: this works, but it's not ideal, because if the workers are backed up, the call back to the Client may not happen right away, and it really does need to happen right after the Client calls the Rails app.
3) starting a separate thread: this may work, but it makes me nervous: adding threading explicitly in Rails could be fraught with peril.
I welcome any ideas/suggestions.
Again, in short, the goal is: process the HTTP GET call to the Rails app, return a 200 OK back to the Client, completely finishing the HTTP request/response cycle, and only then run some extra code.
I can provide further details if that would help. I've found both #1 and #2 recommended elsewhere, but neither is quite what I need.
Ideally, there would be some "after_response" callback in Rails that allows some code to run but after the full request/response cycle is done.
Possibly use an around filter? Around filters let you define methods that wrap around every action Rails calls. With an around filter on the controller above, you can control the execution of every action: run code before calling the action, run code after calling it, and even skip calling the action entirely under certain circumstances.
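A minimal sketch of that idea, using the Rails 3-era around_filter naming from the question (with_post_processing and do_extra_work are hypothetical names); note the caveat in the comments:

    class WidgetsController < ApplicationController
      around_filter :with_post_processing, only: :show

      def show
        render json: { ok: true }   # normal action body
      end

      private

      def with_post_processing
        yield            # run the action (and the rest of the filter chain)
        do_extra_work    # runs after the action returns...
        # ...but still before the response is written to the socket, so an
        # around filter alone does not achieve "after the response is sent".
      end
    end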
So what I ended up doing was using a gem I had helped with long ago: Spawnling.
It turns out that this works well, although it required a tweak to get it working with Rails 3.2. It lets me spawn a thread to make the extra, out-of-band callback to the Client while the normal controller process completes. And I don't have to worry about thread management or ActiveRecord connection management; Spawnling handles that.
It's still not ideal, but pretty close. And it's slightly better than enqueuing a Resque/Sidekiq worker, as there's no risk of a worker backlog causing an unexpected delay.
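Usage looks roughly like this (a sketch; process_request and notify_client are stand-ins for the real work):

    class CallbacksController < ApplicationController
      def handle
        result = process_request(params)   # normal in-band work

        # Spawn the out-of-band call in its own thread; the controller
        # returns immediately and the response completes as usual.
        Spawnling.new(:method => :thread) do
          notify_client(result)
        end

        head :ok
      end
    end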
I still wish there was an "after_response_sent" callback or something, but I guess this is too unusual a request.
I am writing an app that uses Server-Sent Events with ActionController::Live, running on the Puma app server. A method in the Messages controller stays alive while the user is connected, waiting for messages from Redis.
The problem is that I don't want this method to connect to Postgres. After I open the app in six tabs, it exceeds the five connections allowed by the pool size in config/database.yml, and the app crashes.
Is there any way to tell my app that when this method is called it doesn't need to connect to the database, since there are no ActiveRecord query calls in it?
One possible way to do this is to use middleware. A good resource for setting up your own middleware is http://railscasts.com/episodes/151-rack-middleware?view=asciicast
However, I'm not convinced that the problem you're experiencing is because of too many connections to Postgres. This is just a hunch, but I think your problem may lie elsewhere.
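If it does turn out to be pool exhaustion, one smaller idea, separate from the middleware approach: release the pooled connection inside the streaming action before it blocks on Redis. This is a sketch under the assumption that something earlier in the request checked out a connection the action itself never uses:

    class MessagesController < ApplicationController
      include ActionController::Live

      def stream
        response.headers["Content-Type"] = "text/event-stream"

        # Hand any checked-out ActiveRecord connection back to the pool,
        # so six open tabs don't pin the five pooled connections.
        ActiveRecord::Base.connection_pool.release_connection

        redis = Redis.new   # assumes the redis gem is available
        redis.subscribe("messages") do |on|
          on.message { |_, msg| response.stream.write("data: #{msg}\n\n") }
        end
      ensure
        response.stream.close
      end
    end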
My situation is like this:
1. A user uploads a 150MB zip file containing 600 files. It takes around 4 minutes to upload the file to the server.
2. The server processes the file contents, which takes around 70 seconds.
3. The server responds with Service Unavailable, with a log entry like "could not forward the response to the client... stop button was clicked".
4. The Rails application log says a 200 OK response was returned.
So I am guessing it must be a problem in nginx or Passenger that causes the error even though everything goes fine inside the Rails app. My suspicion is a timeout setting, because I could reproduce it just by putting a 180-second sleep inside the long-running method and doing nothing else.
I would appreciate it if you know which specific nginx/Passenger config settings might fix this.
If you're using S3 as your storage, you might consider something like carrierwave_direct to skip passing the file through the web server and instead upload directly to S3.
As noted above, you could incorporate a queueing process like delayed_job.
https://github.com/dwilkie/carrierwave_direct
I presume that nginx is the public-facing server and it proxies requests through to another server running your RoR application for you. If this assumption is correct, you may need to increase the value of your nginx proxy_read_timeout setting for the specific locations that are causing you trouble.
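If that is the setup, the relevant directives would look something like this (the location path, upstream name, and values are placeholders, not tested settings):

    # nginx.conf (sketch)
    location /uploads {
        proxy_pass http://rails_app;
        proxy_read_timeout 300s;    # default is 60s; allow the slow processing
        client_max_body_size 200m;  # default is 1m; permit the 150MB zip
    }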
For long-running requests, I think you should return a 'please wait' page immediately and move the processing to the background. When the processing completes, mark the task as 'completed' in the database. In the meantime, whenever the user refreshes the page, return 'please wait' immediately; once the task is completed, return the result. You can set an auto-refresh timeout in the page to reload it after an estimated period.
I'd store the upload somewhere right away and redirect to a "please wait" page that asks for the status of the background processing and could even display a progress bar, e.g. using AJAX.
For the actual background processing I'd recommend DelayedJob, which has worked great for us and makes jobs easy to implement and deploy.
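A rough sketch of that flow with delayed_job (Upload, ZipProcessor, and the route helpers are hypothetical names):

    class UploadsController < ApplicationController
      def create
        upload = Upload.create!(file: params[:file], status: "pending")
        # .delay comes from delayed_job: the call is serialized and run
        # by a background worker instead of inline in the request.
        ZipProcessor.new.delay.process(upload.id)
        redirect_to wait_upload_path(upload)   # the "please wait" page
      end

      def wait
        upload = Upload.find(params[:id])
        # The page (or its AJAX poller) hits this until status flips.
        render json: { status: upload.status }
      end
    end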
I am trying to show controller-specific pages in my Rails app when the database connection goes away. I do this by catching Mysql::Error in the rescue_action method and rendering the appropriate pages. When the MySQL service alone is stopped, I get the Mysql::Error exception very quickly and can render the pages without any delay.
But when the server itself is shut down, Rails takes 3 minutes to throw the Mysql::Error, and after 5-6 requests the whole website becomes unresponsive.
I tried to figure out which method in the Rails framework takes so long when the MySQL server is shut down. It was connection.real_connect (in the ActiveRecord mysql adapter file), which took 3 minutes to return with an exception.
So I decided to time out this method using the SystemTimer gem. This monkey patch worked perfectly when I started the website with a database connection and immediately shut down the database server.
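For context, such a patch might look roughly like this; this is a reconstruction wrapping the old mysql adapter's connect method, not the actual code:

    require 'system_timer'

    class ActiveRecord::ConnectionAdapters::MysqlAdapter
      # Abort connection attempts (which call real_connect) after 5 seconds
      # instead of waiting minutes for the OS-level TCP timeout.
      def connect_with_timeout
        SystemTimer.timeout_after(5) { connect_without_timeout }
      end
      alias_method_chain :connect, :timeout
    end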
But when I start the website with the database, use it for some time, and then shut down the database server, the patch doesn't work at all, and the whole website becomes unresponsive as before. I wonder what the difference is between the two scenarios.
I think I need to understand in more detail how Rails handles database connections and how it reacts when the connection goes away, so that I can identify exactly where to put monkey patches for my specific requirement. I haven't seen any relevant article explaining this.
Any help will be very useful to me.
Thanks,
I've not tried this, but you can add connect_timeout as one of the specified options (along with port, host, etc.) for the MySQL connection in the database.yml file. That value is passed to the real_connect call that establishes the connection to MySQL.
Furthermore, since you are experiencing a delay after the initial connection is made and the DB is shutdown, you may need to use the read_timeout config option.
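Together, those options might look like this in database.yml (values are illustrative):

    # config/database.yml
    production:
      adapter: mysql
      host: db.example.com
      database: myapp
      pool: 5
      connect_timeout: 5   # seconds to wait while establishing a connection
      read_timeout: 5      # seconds to wait on reads once connected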
I have an edge case, although a very customer-visible one, where Tomcat begins processing requests before all dependencies are fully loaded for a Ruby on Rails stack running under JRuby.
Once Tomcat is restarted, something similar to the following happens:
undefined method `utc_offset' for nil:NilClass
[RAILS_ROOT]/gems/gems/activesupport-2.3.8/lib/active_support/values/time_zone.rb:206:in `<=>'
This happens when the following code is invoked on one of my services:
@timezones = ActiveSupport::TimeZone.all
If you wait a few more seconds and refresh the requesting page, it loads with no problem.
Is there a way to ensure that Tomcat does not start processing these requests until the entire stack (ActiveSupport, ActiveRecord, etc.) is loaded? Has anyone experienced similar symptoms?
This sounds like a possible bug in JRuby-Rack, assuming that's what you're using to run your Rails app in Tomcat. JRuby-Rack is supposed to load the entirety of config/environment.rb before it will process requests, so I'm not sure how this would happen to you, but perhaps I've overlooked something. Could you share some more data (or maybe code or an app that reproduces the issue) about how you induced the error at http://kenai.com/jira/browse/JRUBY_RACK or http://bugs.jruby.org?
I'm not sure if there is something like that in Tomcat directly, but you can write a javax.servlet.Filter that intercepts all requests and denies them until your application has loaded. Once the application is fully loaded, you tell the filter to stop denying requests. (This isn't a pure Ruby solution, though.)