I have a Rails application that takes some user input, does hours of processing on it, and returns the resulting file to the user. Because this takes so long, the client can sometimes time out with a "connection reset by peer" error.
To alleviate that issue, I want to upload the file to AWS instead of returning it to the user directly. My question is: if the connection times out before all of the processing and uploading is done, does Rails immediately stop processing the controller action?
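For what it's worth, the usual fix is to move the work off the request cycle entirely, so the client connection no longer matters. Here is a minimal sketch assuming Sidekiq and the aws-sdk-s3 gem are in the Gemfile; HeavyProcessingWorker, run_heavy_processing, and the bucket name are illustrative, not from the question:

class HeavyProcessingWorker
  include Sidekiq::Worker

  def perform(input_id)
    output_path = run_heavy_processing(input_id)

    # upload the result to S3 so no client connection has to stay
    # open while the work runs
    Aws::S3::Resource.new
      .bucket("my-results-bucket")
      .object("results/#{input_id}")
      .upload_file(output_path)
  end

  private

  def run_heavy_processing(input_id)
    # stand-in for the real hours-long processing: write a dummy
    # file and return its path
    path = "/tmp/result-#{input_id}.txt"
    File.write(path, "done")
    path
  end
end

The controller then just enqueues HeavyProcessingWorker.perform_async(params[:id]) and renders a response immediately, so a dropped client connection can no longer interrupt the processing or the upload.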
This question has been asked multiple times, but none of the existing answers resolved it for me, so I am asking for your help with this error.
My app is complete, with all the planned features, but it was logging the following error multiple times.
The app is built entirely around a custom web server; I am using a GoDaddy server to host the admin panel content and the web services.
Every page the app loads has to call different web services. Normally this works fine with a good response, but sometimes loading the page details takes very long, and I get the above kind of error in the Xcode log.
If I call the same service again, it responds quickly, but from time to time it causes this problem.
From the coding side, I am fairly sure it is just a delay in the server response: the process gets stuck waiting for the server to respond.
I am hitting some kind of bug where, after a 30-second timeout, the request seems to be received again by the server.
The flow:
The user enters several (or many) image URLs into a text field, which is sent to the server:
sources = "mysite.com/1.jpg mysite.com/2.jpg mysite.com/1.jpg"
The server splits it on whitespace and, for every URL, creates a new image:
sources.split(" ").map do |url|
  Photo.create(source: url)
end
Photo is a model with a Paperclip attachment called img and a source column.
An after_action exists with self.img_remote_url = self.source to assign the source URL to the attachment.
Note that I am not using background processing here.
The problem started today; a few days ago it worked fine.
It occurs when the number of images causes the request to time out (since the processing takes more than 30 seconds). Previously, the server would simply keep processing the images after the timeout. Now, judging from the logs, when the timeout occurs the request seems to be received again, which processes all the files again.
At that point (using Puma), a second thread starts the work, but again reaches a timeout. This continues until all the threads, processes, and dynos are busy and can't receive any more requests. Eventually the processing completes, leaving multiple duplicate photos and a very upset programmer (me).
It is unclear to me why the request would be resent to the server. It is not a client-side issue, since the duplication doesn't occur immediately; according to the logs, it happens right when the timeout error occurs (H12).
I am using Puma 2.9.1, Ruby 2.1.3, Rails 4.1.6, Heroku Cedar, and the latest Paperclip.
Is there anything that causes a request to be retried on timeout?
How can I debug this?
Could it be an as-yet-undiscovered bug in Heroku's routing mechanism?
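One common defense, wherever the retry originates, is to make the creation idempotent and move it into a background worker so the web request returns well under the 30-second limit. A minimal sketch assuming Sidekiq (PhotoImportWorker is an illustrative name, not code from the question):

class PhotoImportWorker
  include Sidekiq::Worker

  def perform(url)
    # find_or_create_by makes the job idempotent: if the request
    # (or the job itself) is retried, no duplicate Photo is created
    Photo.find_or_create_by(source: url)
  end
end

# in the controller: enqueue one job per URL and respond immediately
sources.split(" ").each { |url| PhotoImportWorker.perform_async(url) }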
I am building a website that has an administrator page. The admin will have to run a reporting task: the task will iterate over all the records, fetch information, and generate a PDF file. This will be heavy on both the app and the database.
What is the usual approach for this? Should I have a button that calls a method on a class, or should I have a rake task? I have heard that HTTP GET requests have a time limit, and if the report generation takes longer than that, the request is killed.
I would like to use send_data(...) so the user gets a nice download pop-up when the report is done. Would it be better to use a mailer and email it instead?
Thanks
We have similar functionality in our Rails apps at my job.
We have one URL/action that initiates the request to generate the PDF file and returns right away, saying the request was started successfully.
Then we have another action that we poll with AJAX; it returns whether or not the report is complete, and when it is, it gives the user the PDF.
The actual generation is done by a Sidekiq worker, which is not subject to the web server timeout.
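A minimal sketch of that start/poll pattern (not our actual code; Report, its status and pdf_path columns, and the Prawn-based generation are illustrative):

class ReportsController < ApplicationController
  # POST /reports -- kick off generation and return right away
  def create
    report = Report.create!(status: "pending")
    ReportPdfWorker.perform_async(report.id)
    render json: { id: report.id, status: report.status }
  end

  # GET /reports/:id -- polled with AJAX until the report is complete
  def show
    report = Report.find(params[:id])
    if report.status == "complete"
      send_data File.read(report.pdf_path),
                filename: "report.pdf", type: "application/pdf"
    else
      render json: { status: report.status }
    end
  end
end

class ReportPdfWorker
  include Sidekiq::Worker

  def perform(report_id)
    report = Report.find(report_id)
    path = Rails.root.join("tmp", "report-#{report.id}.pdf").to_s
    # stand-in for the real record-by-record PDF generation
    Prawn::Document.generate(path) { text "Report ##{report.id}" }
    report.update!(status: "complete", pdf_path: path)
  end
end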
My situation is like this:
1. A user uploads a 150 MB zip file with 600 files inside. It takes around 4 minutes to upload the file to the server.
2. The server processes the file contents, which takes around 70 seconds.
3. The server responds with Service Unavailable, with a log entry like "could not forward the response to the client... stop button was clicked".
4. The Rails application log says a 200 OK response was returned.
So I am guessing the problem must lie in either Nginx or Passenger, causing the error response even though everything goes fine inside the Rails app. My suspicion is a timeout setting, because I could reproduce it just by putting a 180-second sleep inside the long-running method and doing nothing else.
I would appreciate it if you know which specific Nginx/Passenger config setting may fix it.
If you're using S3 as your storage, you might consider something like carrierwave_direct to skip passing the file through the web server and instead upload directly to S3.
As noted above, you could also incorporate a queueing process like delayed_job.
https://github.com/dwilkie/carrierwave_direct
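A minimal sketch of the delayed_job side, with illustrative names (process_zip stands in for whatever unpacks and handles the archive):

class Upload < ActiveRecord::Base
  def process_zip
    # unpack the archive and handle the 600 files here
  end
  # run process_zip on a background worker instead of in the request
  handle_asynchronously :process_zip
end

# or, without the macro, queue a single call ad hoc:
upload.delay.process_zip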
I presume that Nginx is your public-facing server and that it proxies requests through to another server running your Rails application. If that assumption is correct, you may need to increase the value of the Nginx proxy_read_timeout setting for the specific locations that are causing you trouble.
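For example, something along these lines in the Nginx config; the location and the value are illustrative:

location /uploads {
    # give the backend up to 5 minutes to respond (the default is 60s)
    proxy_read_timeout 300s;
    proxy_pass http://rails_backend;
}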
For long-running requests, I think you should return a "please wait" page immediately and run the processing in the background. After the processing completes, mark the task as "completed" in the database. In the meantime, whenever the user refreshes the page, return "please wait" immediately; once the task is completed, return the result. You can set an auto-refresh timeout on the page so it refreshes itself after an estimated period.
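A sketch of such an auto-refreshing "please wait" view in ERB (@task and its status column are illustrative):

<% if @task.status == "completed" %>
  <%= link_to "Download your result", task_result_path(@task) %>
<% else %>
  <%# ask the browser to re-request the page every 10 seconds %>
  <meta http-equiv="refresh" content="10">
  <p>Please wait, your task is still processing...</p>
<% end %>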
I'd store the upload somewhere right away and redirect to a "please wait" page that asks for the status of the background processing; it could even display a progress bar, e.g. using AJAX.
For the actual background processing I'd recommend DelayedJob, which has worked great for us and makes jobs easy to implement and deploy.
My controller waits for a response from a Ruby file in the lib folder that it calls.
lib.rb runs for about 4 minutes and returns a string as its result.
The controller waits for the string response, but the application breaks after about a minute, showing an Internal Server Error.
What should I do?
Regards
Can you describe in more detail what this "lib" code is doing? If it is intended to take a long time, you will have to adjust your server settings so the request is not prematurely timed out.
For example, Apache will produce a server error if it does not receive a response from your application in a timely manner. That limit can be adjusted within certain parameters.
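For instance, in the Apache config (300 seconds is an illustrative value):

# give the application up to 5 minutes before returning an error
Timeout 300
# if Apache proxies to the Rails app via mod_proxy:
ProxyTimeout 300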