Is it an nginx timeout or a Rails timeout? - ruby-on-rails

I deployed a Rails app with nginx and Passenger. There is a long-running Rails action, something like this:
def handler
  sleep 100
  respond_to do |format|
    format.json { render :json => { :success => true } }
  end
end
However, the nginx error log prints this message:
Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser.
Obviously, this error isn't caused by clicking the stop button. Maybe it's an nginx timeout, a Rails timeout, or a Passenger timeout. How can I solve this problem? Can it be fixed through a config file? Any help is appreciated.

Nginx isn't getting a response from Passenger in time.
You can increase the timeout in your nginx config. (Note that max_execution_time is actually a PHP directive; in an nginx + Passenger setup the settings to look at are Passenger's passenger_max_request_time or, if you're proxying, nginx's proxy_read_timeout.)
The standard way to handle this would be to kick the sleep 100 out to a background job (Delayed Job, Resque, etc.; I personally prefer Resque as my go-to, but DJ is easy to set up and there is an almost infinite number of pre-built options available); see the sketch below.
Then you would return a 202, which means "accepted" and is used to indicate that processing of the request is not complete, but the server did receive the request.
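Here's a minimal sketch of that pattern, assuming Resque; the job class name (LongTask) and queue name are invented for illustration:

# The slow work moves out of the request cycle into a Resque worker.
class LongTask
  @queue = :long_tasks

  def self.perform
    sleep 100
  end
end

def handler
  Resque.enqueue(LongTask)   # returns almost immediately
  respond_to do |format|
    # 202 Accepted: the request was received but processing isn't done yet
    format.json { render :json => { :success => true }, :status => :accepted }
  end
end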

Related

Rails Exception Notification 500 Errors

I'm running the Exception Notification gem on Rails 5. I have it set up the default way in config/environments/production.rb:
Rails.application.config.middleware.use ExceptionNotification::Rack,
  email: {
    # deliver_with: :deliver, # Rails >= 4.2.1 does not need this option since it defaults to :deliver_now
    email_prefix: '[MFTA Error Notification] ',
    sender_address: %{"notifier" <almost#got.me>},
    exception_recipients: %w{butnotquite#gmail.com}
  }
This works fine for standard errors when the site is up...
But shouldn't it send me a report on 500 server errors as well? Very randomly... about once a month or so... the Rails app will crash on me and I'll need to redeploy it to get it working again. But I won't even know the site's down without a notification.
So is there some separate config... or even another Gem... to let me know when this happens?
Since your app is hosted on AWS, you can set up a healthcheck endpoint in your app and use a Lambda function to ping it periodically. If there's no 200 response, it's very likely your app is down, since serving a healthcheck is a dead simple thing that shouldn't fail.
Normally people set a threshold, say X consecutive healthcheck failures within Y duration, to verify that the app is down. But this would require your Lambda function to be stateful. If you don't mind getting a false alarm due to, say, a deployment or server restart, you can forget about this.
Also, if you want the healthcheck to be more performant, you can implement a Rack middleware to intercept the healthcheck request and return a 200 response directly, so the request doesn't have to go through the whole stack before it reaches Rails. A sketch is below.
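A minimal sketch of such a middleware; the path and class name are arbitrary:

class Healthcheck
  def initialize(app)
    @app = app
  end

  # Answer /healthcheck directly; pass everything else down the stack.
  def call(env)
    if env['PATH_INFO'] == '/healthcheck'
      [200, { 'Content-Type' => 'text/plain' }, ['OK']]
    else
      @app.call(env)
    end
  end
end

# config/application.rb: insert at the top so it runs before the rest of the stack
config.middleware.insert_before 0, Healthcheck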

avoid deadlocks in recursive web calls on rails 3

Let's say I've sent a GET request to some action in some controller in Rails,
and in that action I'm sending requests to get web pages from another server,
for example:
open("http://example.com/myexample.xml")
When I call this with localhost as the host, the site sends a request to itself, so the server deadlocks and stops.
Any ideas how to get a page from localhost without the request being queued on the main thread?
The same problem happens when the main thread sleeps or gets busy processing a request and another request comes in to the server: it waits until the first request is finished.
Any solutions for that?
You can run another server instance:
rails s # http://localhost:3000
rails s -p 3001 # http://localhost:3001
Then you can send requests from localhost:3001 to localhost:3000, or vice versa.
I prefer to use unicorn as the second server:
rails s # http://localhost:3000
unicorn # http://localhost:8080

Nginx, passenger, rails - how to configure for long running requests?

My situation is like this:
1. User uploads 150MB zip file, with 600 files inside. It takes 4 minutes or so to upload the file to the server.
2. Server processes the file contents, takes 70 seconds or so.
3. The server responds with Service Unavailable, with a log like, "could not forward the response to the client... stop button was clicked"
4. The Rails application log says, 200 OK response was returned.
So, I am guessing it must be a problem within nginx or Passenger that causes it to return the error even though everything goes fine inside the Rails app. My suspicion is a timeout setting, because I could reproduce it just by putting a sleep of 180 seconds inside the long-running method and doing nothing else.
I would appreciate it if you know what specific nginx/passenger config may fix this.
If you're using S3 as your storage, you may consider using something like carrierwave_direct to skip passing the file through the web server and instead upload directly to S3.
As noted above, you could also incorporate a queueing process like delayed_job.
https://github.com/dwilkie/carrierwave_direct
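Per the carrierwave_direct README, wiring it up is roughly this; the uploader name here is just an example:

class ZipUploader < CarrierWave::Uploader::Base
  include CarrierWaveDirect::Uploader   # upload forms post straight to S3
end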
I presume that nginx is the public-facing server and it proxies requests through to another server running your RoR application for you. If this assumption is correct, you may need to increase the value of your nginx proxy_read_timeout setting for the specific locations that are causing you trouble.
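Something along these lines, purely as a sketch; the location, upstream address, and timeout value are assumptions to adapt to your setup:

location /uploads {
    proxy_read_timeout 300s;           # wait up to 5 minutes for the upstream response
    proxy_pass http://127.0.0.1:3000;  # your Rails upstream
}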
For a long-running request, I think you should return a 'please wait' page immediately and do the processing in the background. After the processing completes, mark the task as 'completed' in the database. In the meantime, whenever the user refreshes the page, return 'please wait' immediately; once it's completed, return the result. You can set an auto-refresh timeout in the page to refresh it after an estimated period.
I'd instantly store the upload somewhere and redirect to a "please wait" page which polls for the status of the background processing and could even display a progress bar, e.g. using Ajax.
For the actual background processing I'd recommend DelayedJob, which has worked great for us and makes job deployment and implementation easy.
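Here's a rough sketch of that flow, assuming delayed_job; the Upload model, its status column, and the process! method are invented for illustration:

class UploadsController < ApplicationController
  def create
    upload = Upload.create!(:file => params[:file], :status => 'pending')
    upload.delay.process!            # delayed_job runs this in the background
    redirect_to upload_path(upload)  # renders the "please wait" page
  end

  # polled via Ajax from the "please wait" page
  def status
    upload = Upload.find(params[:id])
    render :json => { :status => upload.status }
  end
end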

In Rails, Mysql.real_connect takes about 3 minutes per request when the MySQL server is shut down

I am trying to render a few static pages in my Rails app when the MySQL server is shut down. I tried to catch the Mysql::Error exception and render the corresponding static page for each controller.
When we just stop the mysql service on the machine where MySQL is installed, the Mysql::Error exception is thrown immediately and I am able to render the pages without any delay. But if I shut the server itself down, the whole website becomes unresponsive.
I traced down the actual function in the Rails framework which takes 3 minutes to complete. It was this statement
Mysql.real_connect
in the active_record gem. Is there any way I can set a timeout so that, when the MySQL server is powered off, it returns with the Mysql::Error exception really quickly and I can render the pages without any delay?
This is probably coming from the socket timeout within the mysql adapter. When the service is stopped, the server will respond quickly with a connection refused error. When the server itself is down, the socket will have to get a connection timeout before it returns. What you'll probably have to do is monkey patch the #real_connect method so that it first validates that the server is running by attempting a socket connection (with a timeout) before continuing on with the original implementation. This question may be of some help to you there:
How do I set the socket timeout in Ruby?
dbh = Mysql.init
dbh.options(Mysql::OPT_CONNECT_TIMEOUT, 6)  # give up on the connect after 6 seconds
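And here is an untested sketch of the monkey-patch idea, assuming the classic mysql gem and the default port 3306 (Timeout around blocking socket calls has its own caveats, so treat this as a starting point):

require 'timeout'
require 'socket'

class Mysql
  class << self
    alias_method :real_connect_without_check, :real_connect

    # Probe the TCP port with a short timeout first, so a powered-off
    # server fails fast instead of hanging for minutes.
    def real_connect(host = nil, *args)
      Timeout.timeout(3) { TCPSocket.new(host, 3306).close }
      real_connect_without_check(host, *args)
    rescue Timeout::Error, SystemCallError
      raise Mysql::Error, "MySQL server at #{host} is unreachable"
    end
  end
end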

Rails/Passenger/Unknown Content Type

We have the following situation:
We invoke a URL which runs an action in a controller. The action is fairly long running - it builds a big string of XML, generates a PDF, and is supposed to redirect when done.
After 60 seconds or so, the browser gets a 200, but with a content type of "application/x-unknown-content-type", no body, and no response headers (using Tamper to look at the headers).
The controller action actually continues to run to completion, producing the PDF
This is happening in our prod environment; in staging the controller action runs to completion, redirecting as expected.
Any suggestions where to look?
We're running Rails 2.2.2 on Apache/Phusion Passenger.
Thanks,
I am not 100% sure, but probably your Apache times out the request to the Rails application. Could you try setting Apache's Timeout directive higher? Something like:
Timeout 120
I'd consider bumping this task off to a job queue and returning immediately, rather than leaving the user to sit and wait. Otherwise you're heading for a world of problems when lots of people try to use this and you run out of available Rails app instances to handle new connections.
One easy way to do this might be to use an Ajax post to trigger creating the document, drop the work into Delayed Job, and then run a 10-second periodic check via Ajax informing the waiting user of the job's status. Once delayed_job has finished processing your task in the background and updated something in the database to indicate it is complete, you can redirect the user via Ajax to the newly created document. Something like the sketch below.
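A sketch of what that could look like with delayed_job; the Document model, its generated flag, and generate_pdf! are invented for illustration:

class GeneratePdfJob < Struct.new(:document_id)
  def perform
    doc = Document.find(document_id)
    doc.generate_pdf!                       # the slow XML -> PDF work
    doc.update_attribute(:generated, true)  # mark completion for the poller
  end
end

Delayed::Job.enqueue GeneratePdfJob.new(@document.id)

# The action polled every 10 seconds; the client redirects once :url is set.
def status
  doc = Document.find(params[:id])
  render :json => { :done => doc.generated?,
                    :url  => (document_path(doc) if doc.generated?) }
end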
