Request redo on timeout - ruby-on-rails

I am running into some kind of a bug where, after a 30 second timeout, the request seems to be received again by the server.
The flow: the user enters several (or many) image URLs into a text field, which is sent to the server:
sources = "mysite.com/1.jpg mysite.com/2.jpg mysite.com/1.jpg"
The server splits the string on whitespace and, for every URL, creates a new Photo:
sources.split(" ").map do |url|
  Photo.create(source: url)
end
Photo is a model with a paperclip attachment called img and a source column.
An after_action callback sets self.img_remote_url = self.source so the source URL is assigned to the attachment.
Note that I am not using background processing here.
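A minimal sketch of what such a model might look like; the callback name and download mechanics are assumptions, since the question only describes the fields (it also assumes source holds a full http(s) URL):

class Photo < ActiveRecord::Base
  has_attached_file :img # paperclip attachment
  validates_attachment_content_type :img, content_type: /\Aimage\/.*\z/

  # Mirrors the described self.img_remote_url = self.source step:
  # assign the remote URL to the attachment before saving.
  before_save :assign_img_from_source

  private

  def assign_img_from_source
    self.img = URI.parse(source) if source.present?
  end
end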
The problem started today; a few days ago the same flow worked fine.
It occurs when the number of images causes the request to time out (the processing takes more than 30 seconds). Previously, the server would simply keep processing the images after the timeout. Now, judging from the logs, when the timeout occurs the request seems to be received again, which processes all the files again.
At that point (using puma), a second thread starts the same work, but it too eventually hits the timeout. This continues until all the threads, processes, and dynos are occupied and can't receive any more requests. Eventually the processing completes, leaving multiple copies of each photo and a very upset programmer (me).
It is unclear to me why the request would be resent to the server. It is not a client-side issue: the duplication doesn't occur immediately, and by the logs it happens right when the timeout error (H12) occurs.
I am using puma 2.9.1, ruby 2.1.3, rails 4.1.6, heroku cedar, and latest paperclip.
Is there anything that would cause a request to be redone on timeout?
How can I debug this?
Could it be a bug in heroku routing mechanism yet to be discovered?
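As a first debugging step (my suggestion, not from the post): Heroku can attach a unique X-Request-ID header to each request its router forwards (the http-request-id labs feature at the time), so logging it shows whether the duplicate work comes from a genuinely new request or a replay of the same one:

class ApplicationController < ActionController::Base
  before_action :log_request_id

  private

  def log_request_id
    # A repeated id means the same request was replayed by the router;
    # a fresh id means something issued a second request.
    Rails.logger.info "request_id=#{request.headers['X-Request-ID']}"
  end
end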

Related

Chromium Edge - Javascript seems to be affected by automatic checks for Edge updates

We have a single page web application. One of the functions of the application is to supervise the connection path from the client back to the server. This is implemented with a periodic ajax http request in javascript to the server every 60 seconds. This request acts as a heartbeat.
After a session is started, the server looks for that heartbeat. If it fails to receive a heartbeat request after a reasonable amount of time, it takes specific action.
The client also looks for a response to that heartbeat request. If it fails to receive a response after a reasonable amount of time, it displays a message on the screen via javascript.
We are getting reports from the field where a Chromium-based version of Edge is failing. Communication between the client and server is apparently failing. The server sees those heartbeat requests cease and takes that specific action. However, the client is not taking the expected action on its side: it is not displaying the message indicating a failed heartbeat request. It almost appears as though the javascript stopped running altogether.
The thing is, though… The customer has reported that if they disable automatic updates to Microsoft Edge the application runs fine. If the checking of updates is allowed to occur, the application eventually fails as described above. Note that this is apparently happening when Edge is just checking for updates - it's already up to date.
Updates are being turned off using several guid-named registry keys at [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\EdgeUpdate].
Any thoughts?

Rails stop processing controller after time out

I have a Rails application that takes some user input, does hours of processing on it, and returns the file to the user. Because this takes so long, the client can sometimes time out with a "connection reset by peer" error.
To alleviate that issue I want to just upload the file to AWS instead of returning it to the user. My question is: if the connection times out before all of the processing and uploading is done, does Rails immediately stop processing the controller action?
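A minimal sketch of the upload-to-AWS idea, assuming the aws-sdk-s3 gem; the bucket name and region are placeholders, not from the question:

require 'aws-sdk-s3'

# Upload a generated file to S3 and return a temporary download link.
def upload_report(path)
  s3  = Aws::S3::Resource.new(region: 'us-east-1')    # placeholder region
  obj = s3.bucket('my-report-bucket').object(File.basename(path))
  obj.upload_file(path)
  obj.presigned_url(:get, expires_in: 3600)           # link valid for one hour
end

In general, moving hours-long processing into a background job sidesteps the question of what happens when the client disconnects.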

Connection time out in Rails application

I have a Rails application deployed on Apache + Passenger + Rails 2.3.8 (Ruby 1.8.7) + Linux + MySQL 5.
I am trying to create an Excel report from database records and download it.
When my report has <= 600 (approx.) records, it gets created and downloads successfully.
But when the report contains more records, it does not download.
The query and logic processing complete on the back end and application server, but the browser starts throwing a connection time-out after some time.
I have tried increasing the KeepAlive timeout and modifying browser settings. Nothing works for me.
As you didn't provide your code, I can only give a general answer.
In my opinion, letting a request's response time grow too long is never ideal, even if you can avoid the browser's time-out. You have two better choices:
If you don't need to return the very latest data, use a cron job to generate the Excel file ahead of time and serve that file when the request arrives.
If you have to return the latest data, divide the data in your database into many parts and return them separately (in this case, you may have to send the request many times).
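A sketch of the cron-job option; the task, Record model, and ExcelExporter are invented names for illustration (Ruby 1.8-compatible syntax to match the Rails 2.3.8 stack):

# lib/tasks/reports.rake
require 'fileutils'

namespace :reports do
  desc "Pre-generate the Excel report so requests serve a cached copy"
  task :generate => :environment do
    rows = Record.all # whatever query feeds the report
    path = File.join(RAILS_ROOT, 'public', 'reports', 'latest.xls')
    FileUtils.mkdir_p(File.dirname(path))
    File.open(path, 'wb') { |f| f.write(ExcelExporter.render(rows)) }
  end
end

A crontab entry (or the whenever gem) then runs rake reports:generate on a schedule, and the controller simply responds with send_file.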

Nginx, passenger, rails - how to configure for long running requests?

My situation is like this:
1. User uploads 150MB zip file, with 600 files inside. It takes 4 minutes or so to upload the file to the server.
2. Server processes the file contents, takes 70 seconds or so.
3. The server responds with Service Unavailable, with a log like, "could not forward the response to the client... stop button was clicked"
4. The Rails application log says, 200 OK response was returned.
So, I am guessing it must be a problem in either Nginx or Passenger that causes the error even though everything goes fine inside the Rails app. My suspicion is a timeout setting, because I could reproduce it just by putting a 180 second sleep inside the long-running method and doing nothing else.
I would appreciate it if you know which specific nginx/passenger config may fix this.
If you're using S3 as your storage you may consider using something like carrierwave_direct to skip passing the file through the web server and instead upload directly to S3.
As noted above, you could also incorporate a queueing process like delayed_job.
https://github.com/dwilkie/carrierwave_direct
I presume that nginx is the public-facing server and it proxies requests through to another server running your RoR application for you. If this assumption is correct, you may need to increase the value of your nginx proxy_read_timeout setting for the specific locations that are causing you trouble.
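A sketch of that change, assuming a plain proxy setup; the location path, upstream address, and timeout value are placeholders:

# nginx site config
location /uploads {
    proxy_pass         http://127.0.0.1:3000;  # placeholder upstream
    proxy_read_timeout 300s;                   # up to 5 minutes for a response
}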
For long-running requests, I think you should return a 'please wait' page immediately and move the processing into the background. When the processing is complete, mark the task as 'completed' in the database. In the meantime, whenever the user refreshes the page, return 'please wait' immediately; once it has completed, return the result. You can set an auto-refresh timeout on the page so it reloads after an estimated period.
I'd store the upload somewhere right away and redirect to a "please wait" page which polls for the status of the background processing and could even display a progress bar, e.g. using ajax.
For the actual background processing I'd recommend DelayedJob, which has worked great for us and makes jobs easy to implement and deploy.
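A minimal sketch of that pattern, assuming the delayed_job gem; the Upload model, its status column, and process! are invented names:

class UploadsController < ApplicationController
  def create
    upload = Upload.create!(:file => params[:file], :status => 'processing')
    upload.delay.process!            # delayed_job runs process! in the background
    redirect_to upload_path(upload)  # the "please wait" page
  end

  def show
    @upload = Upload.find(params[:id])
    # The view auto-refreshes (or polls via ajax) until status == 'completed'.
  end
end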

Rails/Passenger/Unknown Content Type

We have the following situation:
We invoke a url which runs an action in a controller. The action is fairly long running - it builds a big string of XML, generates a PDF and is supposed to redirect when done.
After 60 seconds or so, the browser gets a 200, but with a content type of "application/x-unknown-content-type", no body, and no response headers (using Tamper to look at the headers).
The controller action actually continues to run to completion, producing the PDF
This is happening in our prod environment; in staging the controller action runs to completion, redirecting as expected.
Any suggestions where to look?
We're running Rails 2.2.2 on Apache/Phusion Passenger.
Thanks,
I am not 100% sure, but your Apache probably times out the request to the Rails application. Could you try setting Apache's Timeout directive higher? Something like:
Timeout 120
I'd consider bumping this task off to a job queue and returning immediately rather than leaving the user to sit and wait. Otherwise you're heading for a world of problems when lots of people try to use this and you run out of available Rails app instances to handle new connections.
One easy way to do this might be an Ajax post that triggers creating the document, drops the work into Delayed Job, and then runs a 10 second periodic check via ajax informing the waiting user of the job's status. Once delayed_job has finished processing the task in the background and updated something in the database to indicate it is complete, you can redirect the user via ajax to the newly created document.
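A sketch of the status-check half of that approach; Document and its ready? flag are invented names (Ruby 1.8-style syntax to match Rails 2.2.2):

class DocumentsController < ApplicationController
  # Hit every 10 seconds by the waiting page's ajax check.
  def status
    doc = Document.find(params[:id])
    if doc.ready?
      render :json => { :status => 'complete', :url => document_path(doc) }
    else
      render :json => { :status => 'processing' }
    end
  end
end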
