I have a Rails application deployed on Apache + Passenger + Rails 2.3.8 (Ruby 1.8.7) + Linux + MySQL 5.
I am trying to create an Excel report from database records and download it.
When the report has <= 600 (approx.) records, it is created and downloaded successfully.
But when the report contains more records, the download fails.
The query and the processing complete on the back end and application server, but after some time the browser starts throwing a connection time-out.
I have tried increasing the KeepAlive time and also tried modifying browser settings. Nothing works for me.
Since you didn't provide your code, I can only give a general answer.
In my opinion, letting a request's response time grow too long is never ideal, even if you can avoid the time-out in your browser. You have two better choices:
If you don't need to return the latest data, use a cron job to generate the Excel file in advance and serve it when the request arrives. Here is a good reference.
If you have to return the latest data, divide the data in your database into several parts and reply with them separately. (In this case, the client may have to send multiple requests.)
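The "divide and reply separately" idea amounts to processing the rows in fixed-size batches rather than materializing one huge result. In Rails 2.3 the batching itself would be `Model.find_in_batches(:batch_size => 500)`; here is a minimal sketch of the splitting logic with plain Ruby standing in for the ORM:

```ruby
# Split a record set into fixed-size batches so each response (or each
# ORM round-trip) stays small. find_in_batches does the same against the DB.
def in_batches(records, batch_size)
  records.each_slice(batch_size).to_a
end
```

Each batch can then be rendered and sent as its own partial response, keeping every individual request comfortably under the browser's time-out.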
I have a logging query (a simple INSERT) that happens on every single request.
For this request only (the one that happens on every page load), I want to set the limit to 500ms, so that if the database is locked/slow/down it won't affect the site by making pages hang while they wait to connect/write.
Is there a way I can specify a timeout somehow on a per-query basis that I can abort the LoggedRequest.create! if it's taking too long?
I don't want to set it in my config because I have many other queries that shouldn't have timeouts that low.
I'm using Postgres 11.7
I also don't know how I feel about setting a timeout for the entire session because I don't want that connection to be shared from the pool with other queries that can't have that timeout.
Rails 6 introduces event-based triggers for notifications, logging, etc. that come in very handy, provided you are using Rails 6 or can afford to migrate to it. Here's a useful post that demonstrates creating event-based triggers for notifications/logging: https://pramodbshinde.wordpress.com/2020/03/20/custom-events-tracking-with-activesupportnotifications-and-audited/
If, for some reason, you cannot use Rails 6, perhaps this article might help you find some answers: https://evilmartians.com/chronicles/the-silence-of-the-ruby-exceptions-a-rails-postgresql-database-transaction-thriller
If I were you, I would also consider using AJAX with a fire-and-forget API request to the server for logging/anything that is not critical to the normal functioning of the application.
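One way to cap just this one call, sketched below under the assumption that losing a log row is acceptable: wrap only the logging call in a hard client-side limit. (A server-side limit, e.g. Postgres's `SET LOCAL statement_timeout` inside the logging transaction, is the more robust option, since it cancels the query itself rather than abandoning it.)

```ruby
require 'timeout'

# Best-effort logging: abort the block after `seconds` and return nil
# instead of letting a locked/slow database stall the page.
# In the app this would wrap LoggedRequest.create!.
def log_with_limit(seconds)
  Timeout.timeout(seconds) { yield }
rescue Timeout::Error
  nil # logging must never take the site down; swallow the failure
end
```

Note that Ruby's `Timeout` interrupts the thread rather than the query, which is why pairing it with (or replacing it by) a Postgres-side `statement_timeout` on that one statement is preferable when you can.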
I am having some kind of a bug where, after a 30-second timeout, the request seems to be received again by the server.
the flow:
the user enters several (or many) image URLs into a text field, which is sent to the server
sources = "mysite.com/1.jpg mysite.com/2.jpg mysite.com/1.jpg"
the server splits the string on whitespace and creates a new image for every URL
sources.split(" ").map do |url|
Photo.create(source: url)
end
Photo is a model with a Paperclip attachment called img and a source column
an after_action exists with self.img_remote_url = self.source to use the source URL as the attachment's source
note that I am not using background processing here
The problem started today, but a few days ago it worked fine.
It occurs when the number of images causes the request to time out (since the processing takes more than 30 seconds). Previously, the server would just keep processing the images after the timeout. Now, when the timeout occurs, the logs show the request apparently being received again, which processes all the files again.
At that point (using Puma), a second thread starts the work, but it again reaches a timeout. This continues until all the threads, processes, and dynos are full and can't receive any more requests. Then the processing completes, having created multiple photos and one very upset programmer (me).
It is unclear to me why the request would be re-sent to the server. It is not a client-side issue, since the duplication doesn't occur immediately; the logs show it happening right when the timeout error (H12) occurs.
I am using Puma 2.9.1, Ruby 2.1.3, Rails 4.1.6, the Heroku cedar stack, and the latest Paperclip.
Is there anything that causes a request to be re-sent on timeout?
How can I debug this?
Could it be a yet-undiscovered bug in Heroku's routing mechanism?
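Whatever is replaying the request, one defensive fix is to make the creation idempotent so a retry cannot create duplicate rows. In Rails 4.1 that would be `Photo.find_or_create_by(source: url)` backed by a unique index on `:source`; the sketch below shows the dedup logic with a plain hash standing in for the table:

```ruby
# Idempotent creation: a URL that already has a row is skipped, so a
# replayed request (or a duplicate URL in the input) creates nothing new.
def create_photos(sources, table = {})
  sources.split(" ").each do |url|
    table[url] ||= { :source => url } # find_or_create_by equivalent
  end
  table
end
```

This doesn't explain the H12 replays, but it makes them harmless.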
Note: There are going to be things in this post which are less-than-best-practices. Be warned :)
I'm working on an admin dashboard which connects to a micro-instance AWS server.
The DB has tens of millions of records.
Most queries come back within a few seconds but some take up to a minute or two to return, based on a few things outside of my control.
Due to Heroku's 30-second limit (https://devcenter.heroku.com/articles/request-timeout), I need to find some way to buy time to keep the connection open until the query returns. Heroku does say that you can buy time by sending bytes to the client in the meantime, which buys you another 55 seconds.
Anyways, just curious if you guys have a solution to stall time for Heroku. Thanks!
I have made a workaround for this. Our app runs Sinatra, and I used the EventMachine gem to keep writing \0 into the stream every 10 seconds so Heroku doesn't close the connection until the action is complete; see the example: https://gist.github.com/troex/31790323fb4a8a29c8b8cd84e50ad1e8
My example uses Puma, but it should work for Unicorn and Thin as well (you don't need EventMachine.run for Thin). For Rails, I think you can use before/after_action callbacks to start/stop the event timer.
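The core of the trick, sketched here without EventMachine (threads stand in for its timer; `io` stands in for the Rack response stream): run the slow work in a background thread and write a NUL byte to the stream each time an interval passes with the work still unfinished, so the router keeps seeing traffic.

```ruby
require 'stringio'

# Run the block in a thread; while it is still running, emit a "\0"
# heartbeat every `interval` seconds, then return the block's result.
def with_heartbeat(io, interval)
  worker = Thread.new { yield }
  io.write("\0") until worker.join(interval) # join returns nil while running
  worker.value
end
```

In a real app you would also flush `io` after each write so the bytes actually leave the dyno rather than sitting in a buffer.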
You could break down the thing into multiple queries.
You could send a query, have your AWS server respond immediately just to say that it received the query, and then, once it has pulled the data, have it send that data via a POST request to your Heroku instance.
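That ack-then-callback flow can be sketched as follows; a `Queue` stands in for the background worker on the AWS side, and a lambda stands in for the POST back to a (hypothetical) results endpoint on the Heroku app:

```ruby
# The slow server acknowledges immediately and queues the real work.
def submit_query(queue, query, callback)
  queue << [query, callback]
  { :status => "accepted" } # immediate 202-style acknowledgement
end

# Later, the worker pulls the data and "POSTs" each result back.
def drain(queue)
  until queue.empty?
    query, callback = queue.pop
    callback.call("result for #{query}")
  end
end
```

The Heroku request returns in milliseconds; the result arrives whenever the data is ready, with no 30-second window in play.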
Yes, do it via AJAX: send back an immediate response that says "ask again in a bit"...
I have a SOAP server/client application written in Delphi XE that worked fine for some time, until a user ran it on Windows 7 x64 behind a corporate proxy/firewall. The application sends and receives a TSOAPAttachment object in the request.
The Problem:
Once the first request from this user is received and processed, the server cannot successfully process any request (from any user) that comes after it.
The server still responds to requests, but the SOAPAttachment of each request
seems corrupted after the first one from this user, which is why the requests cannot be processed successfully.
After adding many debug logs to the server, I noticed that the TSOAPAttachment.SourceStream in the request's parameters becomes inaccessible (or empty), and TSOAPAttachment.CacheFile is also empty. Therefore, whenever the SourceStream is used, it raises an Access Violation error.
Further investigation found that the BorlandSoapAttachment(n) file generated in the temp folder by the first request still exists and is locked (it should be deleted when a request completes normally), and the BorlandSoapAttachment(n+1) files of the following requests keep piling up.
The SOAP server works again after restarting IIS or recycling the application pool.
It is quite certain that this is caused by the proxy or the user's network, because when the same machine runs outside that network, it works fine.
To add more mystery to the problem, running the application on Windows XP behind the same proxy has no problem AT ALL!
Any help or recommendation is very much appreciated, as we have been stuck in this situation for some time.
Big thanks in advance.
If you are really sure you have debugged all the server logic that handles the attachments, trying to discover any piece of code that could fail specifically on Windows 7, I would suggest:
1) Use a network sniffer (Wireshark is good for this task): make two subsequent requests with the same data/parameter values and compare the HTTP contents. This analysis should be done both on the client (to see whether the data always leaves the client machine with the same content) and on the server, to inspect the incoming data;
2) I faced a similar situation in the past, and my attempts to really understand the problem did not succeed. I worked around it by sending files as Base64-encoded string parameters instead of SOAP attachments. The side effect of using Base64 is an increase of roughly a third in the size of the data to be sent, which can be significant if you are transferring large files.
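That overhead figure follows directly from how Base64 works: it emits 4 output bytes for every 3 input bytes, i.e. one third more. The behavior is the same in any language; a quick check in Ruby:

```ruby
require 'base64'

payload = "abc" * 1000                     # 3000 raw bytes
encoded = Base64.strict_encode64(payload)  # 4 output bytes per 3 input bytes
# 3000 * 4 / 3 = 4000 bytes: a ~33% increase before any transport framing
```

(Inputs whose length is not a multiple of 3 gain slightly more, because the output is padded with `=` to a multiple of 4.)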
Remember that SOAP attachments create temp files on the server, and Windows 7 has different file-access rules than Windows XP. I don't know if this could explain the first call being processed and the others not, but maybe there is something related to file access.
Maybe it is a UAC (User Account Control) problem under Windows 7. Try running the client on Windows 7 "As Administrator" and see if it works properly.
My situation is like this:
1. The user uploads a 150MB zip file with 600 files inside. It takes about 4 minutes to upload the file to the server.
2. The server processes the file contents, which takes about 70 seconds.
3. The server responds with Service Unavailable, with a log entry like "could not forward the response to the client... stop button was clicked".
4. The Rails application log says a 200 OK response was returned.
So I am guessing it must be a problem in Nginx or Passenger that causes the error to be returned even though everything goes fine inside the Rails app. My suspect is a timeout setting, because I could reproduce it just by putting a 180-second sleep inside the long-running method and doing nothing else.
I would appreciate it if you know which specific nginx/Passenger config settings might fix this.
If you're using S3 as your storage you may consider using something like carrierwave_direct to skip passing the file through the web server and instead upload directly to S3.
As noted above, you could incorporate a queueing process like delayed_job.
https://github.com/dwilkie/carrierwave_direct
I presume that nginx is the public-facing server and it proxies requests through to another server running your RoR application for you. If this assumption is correct, you may need to increase the value of your nginx proxy_read_timeout setting for the specific locations that are causing you trouble.
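Under that assumption, the relevant nginx directives look something like this (the location path and the values are illustrative, not recommendations):

```nginx
# inside the server {} block that proxies to the Rails backend
location /uploads {
    proxy_read_timeout  300s;  # allow the ~70 s processing plus headroom
    proxy_send_timeout  300s;
    client_max_body_size 200m; # the 150 MB zip must be allowed through
}
```

Note that if nginx runs Passenger directly (the integrated module) rather than proxying to a separate server, the `proxy_*` directives do not apply and the timeout has to be raised on the Passenger side instead.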
For long-running requests, I think you should return a 'please wait' page immediately and do the processing in the background. After processing completes, mark the task in the database as 'completed'. In the meantime, whenever the user refreshes the page, return 'please wait' immediately; after completion, return the result. You can set an auto-refresh timeout on the page to refresh it after an estimated period.
I'd instantly store the upload somewhere and redirect to a "please wait" page which polls for the status of the background processing and could even display a progress bar, e.g. using AJAX.
For the actual background processing I'd recommend DelayedJob which worked great for us and supports easy job deployment and implementation.
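The pattern both answers describe can be sketched as follows; a plain hash stands in for the tasks table, and in a real app the work would be a DelayedJob and the poll an AJAX endpoint:

```ruby
# Enqueue: record the task as pending before kicking off background work.
def start_task(tasks, id)
  tasks[id] = { :status => "processing" }
end

# What the "please wait" page's refresh/AJAX poll hits.
def poll(tasks, id)
  tasks[id][:status] == "completed" ? "here is your result" : "please wait"
end

# Called by the background worker when the processing finishes.
def finish_task(tasks, id)
  tasks[id][:status] = "completed"
end
```

Because the upload request only enqueues and returns, it finishes in well under any proxy or router timeout, regardless of how long the processing takes.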