I am interested in the Ruby httpclient timeouts, such as send_timeout and receive_timeout. What do these values actually mean? For instance, the default send_timeout is 60 s; does that mean the HTTP connection may last at most 60 s? Concretely, if the file to be uploaded is 61 MB and the network speed is 1 MB/s, can I upload the file, or will it raise a send-timeout exception?
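For reference, these knobs are plain attributes on the httpclient gem's client object. A minimal sketch follows; the URL, file name, and values are illustrative, and whether the send timer covers the whole body write or each individual socket write can vary by gem version, so check the source of your version if that distinction matters:

require 'httpclient'

client = HTTPClient.new
client.connect_timeout = 10   # seconds allowed to establish the TCP connection
client.send_timeout    = 120  # seconds allowed while sending the request
client.receive_timeout = 60   # seconds allowed while receiving the response

# Illustrative upload; raises HTTPClient::SendTimeoutError if the send
# timer expires while the request body is being written.
File.open('large_file.bin', 'rb') do |file|
  client.post('http://example.com/upload', body: { file: file })
end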
I'm doing some file uploads that are sent to an nginx reverse proxy. If I set the Python requests timeout to 10 seconds and upload a large file, nginx reports "client prematurely closed connection" and forwards an empty body to the upstream server. If I remove the requests timeout, the file uploads without any issues. As I understand it, the timeout should only apply if the client fails to receive or send any bytes, which I don't believe is the case here, since it happens in the middle of uploading the file. It seems to behave more like a hard time limit, cutting the connection after 10 seconds with no exception raised by requests. Is sending bytes treated differently than reading bytes for timeout purposes? I haven't set anything for stream or tried any kind of multipart upload. I would like to set a timeout but am confused as to why the connection is aborted early. Thanks for any help.
I am doing bulk inserts of a large number of documents into CouchDB, and when multiple bulk inserts run in parallel, I get 504 Gateway Timeout responses from CouchDB. I think the HTTP requests are timing out. How do I increase this timeout?
My app is built on Rails and the web server is Puma.
I need to load data from the database, and it takes more than 60 seconds to load all of it. Every time I send a GET request to the server, I have to wait more than 60 seconds.
The request timeout is 60 seconds, so I always get a 504 Gateway Timeout. I can't find where to change the request timeout in the Puma configuration.
How can I set the request timeout to longer than 60 seconds?
Thanks!
UPDATE: Apparently worker_timeout is NOT the answer, as it relates to the whole process hanging, not an individual request taking too long. So this seems to be something Puma doesn't support; the developers expect you to implement it in whatever is fronting Puma, such as nginx.
ORIGINAL: Rails itself doesn't time out, but you can use worker_timeout in config/puma.rb if you're running Puma. Example:
worker_timeout (24 * 60 * 60) if ENV['RAILS_ENV'] == 'development'
The 504 error here comes from the gateway in front of the Rails server; it could be Cloudflare, nginx, etc.
So the setting lives there: you'd have to increase the timeout in the gateway, as well as in Rails/Puma.
Preferably, though, you should optimize your code and queries to respond faster, so that there is no bottleneck in production when heavy traffic hits your application.
If you really do want to allow longer response times, you can use rack-timeout to manage the per-request limit:
https://github.com/kch/rack-timeout
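A minimal sketch of wiring it up in Rails; the exact configuration API varies across rack-timeout versions, and the 120-second value is illustrative:

# Gemfile
gem 'rack-timeout'

# config/initializers/rack_timeout.rb
# Insert the middleware with a longer per-request limit (the keyword-argument
# style shown here requires a reasonably recent rack-timeout).
Rails.application.config.middleware.insert_before(
  Rack::Runtime,
  Rack::Timeout,
  service_timeout: 120 # seconds a request may run before being aborted
)

Note that rack-timeout enforces the limit inside the app; any proxy in front (nginx, Cloudflare, etc.) still needs its own timeout raised to match.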
A Pyramid (v1.5) application served by gunicorn (v19.1.1) behind nginx, on a heroic BeagleBone Black "server".
One specific request requires significant I/O and processor time on the server (exporting data from the database, formatting it to xls, and serving it), which results in a gunicorn worker timeout and a 'Bad gateway' error returned by nginx.
Is there a practical way to handle this per request, instead of increasing the global request timeout for all requests?
It is just this one specific request, so I'm looking for the quickest and dirtiest solution rather than implementing a correct, asynchronous client-notification protocol.
From the docs:
timeout (-t INT, --timeout INT; default: 30)
Workers silent for more than this many seconds are killed and restarted. Generally set to thirty seconds. Only set this noticeably higher if you're sure of the repercussions for sync workers. For non-sync workers it just means that the worker process is still communicating, and is not tied to the length of time required to handle a single request.
graceful_timeout (--graceful-timeout INT; default: 30)
Timeout for graceful worker restart. Generally set to thirty seconds. This is the maximum time a worker may keep handling a request after receiving a restart signal; when the time is up, the worker is force-killed.
keepalive (--keep-alive INT; default: 2)
The number of seconds to wait for requests on a Keep-Alive connection. Generally set in the 1-5 second range.
I have a Rails (v3.2.13, Ruby 2.0.0) application running on nginx + Unicorn (Ubuntu 12.04). All is working well, except when an admin user uploads users (thousands of them) via a CSV file. The problem is that I have set the Unicorn timeout to 30 seconds and the import takes much longer than that, so after 30 seconds I get an nginx 502 Bad Gateway page (the Unicorn worker is killed).
The obvious solution is to increase the timeout, but I don't want to do that because (I suspect) it would cause other problems; a long-running request is not typical behavior.
Is there a way to handle this kind of problem?
Thanks a lot in advance.
PS: Maybe a solution is to modify the code. If so, I want to avoid making the user perform another request.
Some ideas (I don't know if they're possible):
Set up a worker dedicated to this request.
Send a "work in progress" signal to Unicorn so the worker isn't killed.
nginx-app.conf
upstream xxx {
    server unix:/tmp/xxx.socket fail_timeout=0;
}
server {
    listen 80;
    ...
    location / {
        proxy_pass http://xxx;
        proxy_redirect off;
        ...
        # Proxy-side limits (seconds): how long nginx waits to connect to,
        # send to, and read from the upstream Unicorn socket.
        proxy_connect_timeout 360;
        proxy_send_timeout 360;
        proxy_read_timeout 360;
    }
}
unicorn.rb
worker_processes 2
listen "/tmp/xxx.socket"
timeout 30   # kills the worker (and the CSV import) after 30 s, regardless of the 360 s nginx timeouts above
pid "/tmp/unicorn.xxx.pid"
This is a good reason to set up a queue. The flow becomes:
upload the CSV file (that should finish well within 30 seconds);
run a background job that imports the user data (that can go on for hours…);
while the job is in progress, serve some kind of WIP page with job status/percentage/etc.
Check https://github.com/resque/resque for an example; there are lots of other queue libraries.
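For illustration, a minimal Resque-style sketch; the job class, queue name, and CSV columns here are hypothetical:

require 'csv'

# app/jobs/user_import_job.rb — hypothetical job class
class UserImportJob
  @queue = :imports

  # Resque calls this with the arguments passed to Resque.enqueue.
  def self.perform(csv_path)
    CSV.foreach(csv_path, headers: true) do |row|
      User.create!(email: row['email'], name: row['name'])
    end
  end
end

# In the controller: save the file, enqueue, and respond immediately.
Resque.enqueue(UserImportJob, uploaded_file.path)

The upload request now only stores the file and enqueues the job, so it returns well within Unicorn's 30-second limit, while a Resque worker does the actual import.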
Is there a way to handle this kind of problem?
Do the job in the background. You should have a separate process that takes jobs from a queue one by one and processes them. Since it doesn't serve user requests, it can take as long as it needs. You don't need Unicorn for this, just a separate daemon.
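As a bare-bones illustration of that "separate daemon" idea; all model and class names here are hypothetical, and this is a deliberately naive polling loop, not production-grade code:

# naive_worker.rb — run with something like `rails runner naive_worker.rb`
loop do
  # Pick up the oldest pending job, if any (ImportJob is a hypothetical model).
  job = ImportJob.where(state: 'pending').order(:created_at).first
  if job
    job.update!(state: 'running')
    UserImporter.new(job.csv_path).run   # hypothetical importer class
    job.update!(state: 'done')
  else
    sleep 5  # nothing to do; poll again shortly
  end
end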