My situation is like this:
1. User uploads 150MB zip file, with 600 files inside. It takes 4 minutes or so to upload the file to the server.
2. Server processes the file contents, takes 70 seconds or so.
3. The server responds with Service Unavailable, with a log like, "could not forward the response to the client... stop button was clicked"
4. The Rails application log says, 200 OK response was returned.
So, I am guessing it must be a problem in either Nginx or Passenger that causes the error to be returned even though everything goes fine inside the Rails app. My suspicion is a timeout setting, because I could reproduce it just by putting a 180-second sleep inside the long-running method and doing nothing else.
I would appreciate it if anyone knows which specific nginx/passenger config setting might fix this.
If you're using S3 as your storage, you may consider using something like carrierwave_direct to skip passing the file through the web server and instead upload directly to S3.
As noted above, you could incorporate a queueing process like delayed_job.
https://github.com/dwilkie/carrierwave_direct
I presume that nginx is the public-facing server and it proxies requests through to another server running your RoR application for you. If this assumption is correct, you may need to increase the value of your nginx proxy_read_timeout setting for the specific locations that are causing you trouble.
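For example, something like this (a sketch only; the location block, upstream name, and numbers are placeholders for whatever your config already uses):

# nginx (sketch)
location / {
    proxy_pass http://rails_backend;   # your existing upstream
    proxy_read_timeout 300s;           # default is 60s; raise it past your worst-case processing time
    client_max_body_size 200m;         # also needed to accept the 150MB upload
}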
For long-running requests, I think you should return a "please wait" page immediately and move the processing to the background. Once the processing completes, mark the task as "completed" in the database. In the meantime, whenever the user refreshes the page, return "please wait" immediately; after completion, return the result. You can set an auto-refresh timeout on the page to reload it after an estimated period.
I'd instantly store the upload somewhere and redirect to a "please wait" page which polls for the status of the background processing and could even display a progress bar, e.g. using Ajax.
For the actual background processing I'd recommend DelayedJob, which has worked great for us and makes jobs easy to implement and deploy.
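A minimal sketch of that flow (Upload, its process! method, and the wait route are made-up names for illustration):

class UploadsController < ApplicationController
  def create
    upload = Upload.create!(file: params[:file], status: "pending")
    upload.delay.process!                  # delayed_job queues the method call for a worker
    redirect_to wait_upload_path(upload)   # renders the "please wait" page
  end
end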
My development Rails 5 server with Puma keeps freezing and hanging when sending multiple requests at one time from my separate frontend app to the Rails API. There is no error, it just hangs on the POST requests. When I try to kill the server with CTRL + C, nothing happens. I have to manually kill the port.
I've tried setting config.eager_load = true in development.rb. I've tried adding config.allow_concurrency in application.rb. I've Googled relentlessly to no avail. I am sending around 5 requests concurrently from the frontend, so I believe this volume of requests is causing it, but I don't know for sure.
Has anyone else experienced this or have an idea of what needs to be done here? I can usually get all the requests coming back to the frontend successfully around 3-4 times, then the server just freezes.
It especially occurs after I change any one line of code in any file in the project while the server is running.
It's been nearly 2 years but I finally happened to stumble upon what had been causing my issue.
Basically it boiled down to a method in my code not being thread-safe. Since my current_user variable was only accessible from my controllers, I had a before_action on my base controller that assigned the current user to User.current, so that I could access the current user globally via User.current, not just in my controllers.
So PLEASE make sure you're not dynamically updating classes like this in your controllers. It is not thread-safe. I ended up following this thread-safe solution instead for my particular case: https://stackoverflow.com/a/2513456/7629239
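For anyone hitting the same thing, the difference looks roughly like this (a sketch; the linked answer stores the user per-thread rather than on the class):

# NOT thread-safe: a single class-level attribute shared by every Puma thread
class User < ApplicationRecord
  class << self
    attr_accessor :current   # request A can overwrite request B's user mid-flight
  end
end

# Thread-safe alternative, per the linked answer: store it per-thread
class User < ApplicationRecord
  def self.current
    Thread.current[:current_user]
  end

  def self.current=(user)
    Thread.current[:current_user] = user
  end
end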
What is your Puma configuration? How many threads and workers (Puma workers, not Rails workers)?
Ensure that your Puma has enough threads and that your DB pool is large enough. Changing a line of code should not cause your server to exhaust its resources. Are you using a file watcher like Watchman?
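For reference, the knobs in question look like this (a sketch; the numbers are placeholders, not recommendations):

# config/puma.rb
workers 2       # Puma worker processes
threads 5, 5    # min, max threads per worker
# and make sure the pool in config/database.yml covers the thread count:
#   pool: 5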
I am having some kind of bug where, after a 30-second timeout, the request seems to be received again by the server.
The flow:
The user enters several (or many) image URLs into a text field, which is sent to the server:
sources = "mysite.com/1.jpg mysite.com/2.jpg mysite.com/1.jpg"
The server splits the string on whitespace and creates a new image for every URL:
sources.split(" ").map do |url|
  Photo.create(source: url)
end
Photo is a model with a Paperclip attachment called img, and a source column.
An after_action callback exists with self.img_remote_url = self.source to set the source URL on the attachment.
Note that I am not using background processing here.
The problem started today, but a few days ago it worked fine.
It occurs when the number of images causes the request to time out (since the processing takes more than 30 seconds). Previously, the server would just keep processing the images after the timeout. Now, when the timeout occurs, the logs show the request being received again, which processes all the files again.
At that point (using Puma), a second thread starts the same work, but it too eventually reaches a timeout. This continues until all the threads, processes, and dynos are full and can't receive any more requests. Eventually the processing completes, leaving multiple duplicate photos and a very upset programmer (me).
It is unclear to me why the request would be resent to the server. It is not a client-side issue, since the duplication doesn't occur immediately; per the logs, it happens exactly when the timeout error (H12) occurs.
I am using puma 2.9.1, ruby 2.1.3, rails 4.1.6, heroku cedar, and latest paperclip.
Is there anything that would cause a request to be redone on timeout?
How can I debug this?
Could it be an as-yet-undiscovered bug in Heroku's routing mechanism?
Note: There are going to be things in this post which are less-than-best-practices. Be warned :)
I'm working on an admin dashboard which connects to a micro-instance AWS server.
The DB has tens of millions of records.
Most queries come back within a few seconds, but some take up to a minute or two to return, based on a few things outside of my control.
Due to Heroku's 30-second limit (https://devcenter.heroku.com/articles/request-timeout), I need to find some way to buy time and keep the connection open until the query returns. Heroku does say that you can buy time by sending bytes to the client in the meantime, which resets a rolling 55-second window.
Anyway, I'm just curious whether you have a solution for stalling for time on Heroku. Thanks!
I have made a workaround for this. Our app runs Sinatra, and I used the EventMachine gem to keep writing \0 into the stream every 10 seconds so Heroku doesn't close the connection before the action completes; see the example: https://gist.github.com/troex/31790323fb4a8a29c8b8cd84e50ad1e8
My example uses Puma, but it should work for Unicorn and Thin as well (you don't need EventMachine.run for Thin). For Rails, I think you can use before/after_action to start/stop the event timer.
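For Rails, a rough equivalent of the same trick could look like this (a sketch only, assuming Rails 4+ with ActionController::Live; SlowTask is a made-up stand-in for the real work):

class ReportsController < ApplicationController
  include ActionController::Live

  def show
    response.headers["Content-Type"] = "text/plain"
    # run the slow work on its own thread (SlowTask is hypothetical)
    worker = Thread.new { Thread.current[:result] = SlowTask.run }
    until worker.join(10)           # wait in 10-second slices until the work finishes
      response.stream.write("\0")   # keep-alive byte resets Heroku's 55-second window
    end
    response.stream.write(worker[:result].to_s)
  ensure
    response.stream.close
  end
end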
You could break the work down into multiple queries.
You could send a query, have your AWS server respond immediately just to acknowledge that it received the query, and then, once it has pulled the data, have it send that data back via a POST request to your Heroku instance.
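The receiving end on Heroku could be as small as this (a sketch; QueryResult, the route, and the parameter names are assumptions for illustration):

class QueryResultsController < ApplicationController
  # the POST comes from your AWS box, not a browser form, so skip CSRF
  skip_before_action :verify_authenticity_token

  def create
    QueryResult.create!(query_id: params[:query_id], payload: request.raw_post)
    head :ok
  end
end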
Yes, do it via Ajax: send back a response that says "ask again in a bit"...
I issue a simple GET request to my server, and it comes back after ~1.2 seconds on average (using the Firebug NET tab; that's just the "waiting for response" part, not even the whole response time).
My ping to the server is 0.250.
Using Passenger with Rails 2.3.3; in the Rails log the request takes ~0.023.
My server is on GoDaddy, so I checked their homepage with Firebug as well: the "waiting for response" time for their page is ~0.320.
Worst case should be around 0.4... so where did I lose the other 0.8 seconds?
What else can I check?
Edit:
It seems to be unrelated to Rails:
an image request (which only Apache responds to and doesn't hit Rails at all) also takes ~1.2 seconds.
GoDaddy may have a reverse-proxy between you and your HTTP server.
They may be doing something like sending you the response headers right away, then possibly serving you the contents of the response from cache.
So, from the standpoint of your HTTP server, the response is transmitted. Then it goes to GoDaddy's reverse-proxy, then finally to your web browser.
Try setting PassengerPoolIdleTime to 0 in your server or vhost configuration.
Maybe your server is shutting down the application instances too fast and spawning a new instance with every request, which usually takes quite long.
Take a look at the documentation for more information on this setting:
http://modrails.com/documentation/Users%20guide%20Apache.html#PassengerPoolIdleTime
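That is, something like this in your Apache config:

# httpd.conf or the relevant VirtualHost
PassengerPoolIdleTime 0   # keep application instances alive instead of shutting them down when idle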
Where your files are hosted on GoDaddy is not the same place their homepage is hosted from.
Have you checked other pages you have hosted on the same server? Things like slow database connections can cause a page to take a while before it's sent back to the client.
It doesn't sound like it is your problem, but rather the ISP's.
Can you do a wget to an internal IP/port of your Rails app (or Apache) directly from the same server?
That will tell you whether the problem is in the app stack or further upstream.
If you can, use the Apache tool ab ("Apache Benchmark") to help.
The key is having SSH access to your server.
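For example (assuming the app listens locally on port 3000; adjust host/port to your setup):

wget -O /dev/null http://127.0.0.1:3000/   # hit the app server directly
ab -n 100 -c 10 http://127.0.0.1/          # 100 requests, 10 concurrent, through Apache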
We have the following situation:
We invoke a url which runs an action in a controller. The action is fairly long running - it builds a big string of XML, generates a PDF and is supposed to redirect when done.
After 60 seconds or so, the browser gets a 200, but with a content type of "application/x-unknown-content-type", no body, and no response headers (using Tamper to look at the headers).
The controller action actually continues to run to completion, producing the PDF.
This happens in our prod environment; in staging, the controller action runs to completion, redirecting as expected.
Any suggestions where to look?
We're running Rails 2.2.2 on Apache/Phusion Passenger.
Thanks,
I am not 100% sure, but your Apache is probably timing out the request to the Rails application. Could you try setting Apache's Timeout directive higher? Something like:
Timeout 120
I'd consider bumping this task off to a job queue and returning immediately rather than leaving the user to sit and wait. Otherwise you're heading for a world of problems when lots of people try to use this and you run out of available rails app instances to handle any new connections.
One easy way to do this might be to use an Ajax post to trigger creating the document, drop that work into Delayed Job, and then run a 10-second periodic check via Ajax informing the waiting user of the job's status. Once delayed_job has finished processing your task in the background and updated something in the database to indicate it is complete, you can redirect the user via Ajax to the newly created document.
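The status check that the Ajax poll hits can stay tiny (a sketch; Document and its processed? flag are made-up names):

class DocumentsController < ApplicationController
  def status
    doc = Document.find(params[:id])
    if doc.processed?
      # the background job has flipped the flag; hand back the final URL
      render :json => { :status => "done", :url => document_path(doc) }
    else
      render :json => { :status => "pending" }
    end
  end
end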