Override request timeout in pyramid/gunicorn - timeout

A Pyramid (v1.5) application served by gunicorn (v19.1.1) behind nginx on a heroic BeagleBone Black "server".
One specific request requires significant I/O and processor time on the server (exporting data from the database, formatting it to xls and serving it),
which results in a gunicorn worker timeout and a 'Bad gateway' error returned by nginx.
Is there a practical way to handle this per request instead of increasing the global request timeout for all requests?
It is just this one specific request, so I'm looking for the quickest and dirtiest solution instead of implementing a correct, asynchronous client notification protocol.

From the docs:
timeout
-t INT, --timeout INT
Default: 30
Workers silent for more than this many seconds are killed and restarted.
Generally set to thirty seconds. Only set this noticeably higher if you're sure of the repercussions for sync workers. For non-sync workers it just means that the worker process is still communicating and is not tied to the length of time required to handle a single request.

graceful_timeout
--graceful-timeout INT
Default: 30
Timeout for graceful worker restart.
Generally set to thirty seconds. This is the maximum time a worker can keep handling a request after receiving the restart signal; once the time is up, the worker is force killed.

keepalive
--keep-alive INT
Default: 2
The number of seconds to wait for requests on a Keep-Alive connection.
Generally set in the 1-5 seconds range.
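
For reference, all three settings can also be placed in a gunicorn configuration file, which is just a Python module passed with -c. A minimal sketch with illustrative values (the entry point name is hypothetical); note that timeout is a global per-worker setting, so raising it affects every request, not just the slow export:

# gunicorn_conf.py -- illustrative values only
# run with e.g.: gunicorn -c gunicorn_conf.py myapp:app
workers = 2             # number of worker processes
timeout = 120           # seconds a sync worker may stay silent before being killed
graceful_timeout = 30   # time a worker gets to finish requests after a restart signal
keepalive = 2           # seconds to wait for requests on a Keep-Alive connection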

Related

Delay on requests from Google API Gateway to Cloud Run

I'm currently seeing delays of 2-3 seconds on my first requests coming into our APIs.
We've set the min instances to 1 to prevent cold starts, but this delay is still occurring.
If I check the metrics I don't see any startup latencies in the specified timeframe, so I have no insight into what is causing these delays. Tracing gives the following:
The only thing I can change is switching to "CPU is always allocated", but this isn't helping in any way.
Can somebody give more information on this?
As mentioned in the answer:
As per the documentation:
Idle instances: As traffic fluctuates, Cloud Run attempts to reduce the chance of cold starts by keeping some idle instances around to handle spikes in traffic. For example, when a container instance has finished handling requests, it might remain idle for a period of time in case another request needs to be handled.
But Cloud Run will terminate unused containers after some time if no requests need to be handled. This means a cold start can still occur. Container instances are scaled as needed, and it will initialize the execution environment completely. While you can keep idle instances permanently available using the min-instance setting, this incurs cost even when the service is not actively serving requests.
So, let’s say you want to minimize both cost and response time latency
during a possible cold start. You don’t want to set a minimum number
of idle instances, but you also know any additional computation needed
upon container startup before it can start listening to requests means
longer load times and latency.
Cloud Run container startup: There are a few tricks you can do to optimize your service for container startup times. The goal here is to minimize the latency that delays a container instance from serving requests. But first, let's review the Cloud Run container startup routine.
When starting the service:
1. Starting the container
1a. Running the entrypoint command to start your server
1b. Checking for the open service port
You want to tune your service to minimize the time needed for step 1a.
Let’s walk through 3 ways to optimize your service for Cloud Run
response times.
1. Create a leaner service
2. Use a leaner base image
3. Use global variables
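
As a rough illustration of the third point, "use global variables" means caching expensive objects at module scope and initializing them lazily, so the work happens at most once per container instance instead of on every request (the client name and helper below are hypothetical):

# Hypothetical sketch of lazy global initialization in a Cloud Run service
db_client = None  # cached for the lifetime of the container instance

def get_db_client():
    global db_client
    if db_client is None:
        # Expensive setup runs only on the first request this instance handles,
        # not during container startup and not on every request.
        db_client = connect_to_database()  # hypothetical helper
    return db_client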
As mentioned in the documentation:
Background activity is anything that happens after your HTTP response
has been delivered. To determine whether there is background activity
in your service that is not readily apparent, check your logs for
anything that is logged after the entry for the HTTP request.
Avoid background activities if CPU is allocated only during request processing
If you need to set your service to allocate CPU only during request
processing, when the Cloud Run service finishes handling a
request, the container instance's access to CPU will be disabled or
severely limited. You should not start background threads or routines
that run outside the scope of the request handlers if you use this
type of CPU allocation. Review your code to make sure all asynchronous
operations finish before you deliver your response.
Running background threads with this kind of CPU allocation can create
unpredictable behavior because any subsequent request to the same
container instance resumes any suspended background activity.
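
A minimal, framework-agnostic sketch of that advice (handler and helper names are illustrative): finish the work before returning rather than handing it to a background thread that may be throttled once the response is delivered.

import threading

def write_audit_log(payload):
    ...  # hypothetical slow bookkeeping work

def handle_request_risky(payload):
    # Anti-pattern under request-only CPU allocation: the thread keeps running
    # after the response is returned, when the instance's CPU may be throttled
    # until another request arrives and resumes it.
    threading.Thread(target=write_audit_log, args=(payload,), daemon=True).start()
    return {"status": "ok"}

def handle_request_safe(payload):
    # Finish all work before delivering the response.
    write_audit_log(payload)
    return {"status": "ok"}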
As mentioned in the thread, the reason could be that all the operations you performed happen after the response is sent.
According to the docs, CPU is allocated only during request processing by default, so the only thing you have to change is to enable CPU allocation for background activities.
You can refer to the documentation for more information on the steps to optimize Cloud Run response times.
You can also have a look at the blog about using Google API Gateway with Cloud Run.

Rails long-running controller action and scaling to 500-1000 requests per second

I'm currently trying to optimize and scale an API built on Ruby on Rails behind an AWS ALB that sends traffic to NGINX and then into Puma to our Rails application. Our API has a maximum timeout of 30 seconds, at which point we time out the request. Currently we have a controller action that queues a Sidekiq worker and then polls a Redis key every 100ms for the first second, then every 500ms for the remaining 29 seconds. Many of our requests complete in under 1 second, but some take the full 30 seconds before they succeed or time out, telling the user to retry in a little while.
We're currently trying to load test this API and scale it to 500-1000 RPS, and we're running into problems where the slower requests block up all of our connections. When a slow request is running, shouldn't Puma be able to accept other requests during the sleep periods of the slow requests?
If this were not an API we could simply respond immediately after queueing the background worker, but in this case we need to wait for the result and hold the connection for up to 30 seconds for the API request.
My first thought is that you can have multiple Redis queues and push specific tasks to certain queues.
If you have a queue for the quick tasks and a queue for the slower tasks, both can run in parallel without the slow tasks holding everything else up.
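
In Sidekiq this is typically done by setting sidekiq_options queue: on the worker classes and running a dedicated Sidekiq process per queue. The sketch below shows the same queue-splitting idea in Python with Celery and Redis, purely as an illustration of the concept (module and task names are hypothetical):

# tasks.py -- hypothetical Celery illustration of fast/slow queue splitting
from celery import Celery

app = Celery("api", broker="redis://localhost:6379/0")

# Route each task to its own queue so slow work never blocks quick jobs.
app.conf.task_routes = {
    "tasks.quick_job": {"queue": "fast"},
    "tasks.slow_export": {"queue": "slow"},
}

@app.task
def quick_job(payload):
    ...  # sub-second work

@app.task
def slow_export(payload):
    ...  # long-running work

# Run a dedicated worker (or pool) per queue, e.g.:
#   celery -A tasks worker -Q fast
#   celery -A tasks worker -Q slow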

How to determine the number of requests in production

We are running a Rails app in production (single master node) with nginx as the web server and Puma as the Rack server, and we want to calculate the number of requests our server can handle. I know there are tools available like ApacheBench, which works like:
ab -k -c 350 -n 20000 example.com
It takes a few parameters, as in the above command: 350 simultaneous connections until 20 thousand requests have been made. This approach can give a requests-per-second count for a single URL. But I am interested in determining requests per second for a dynamic system which serves dynamic content.
Is there any built-in tool which can give me a requests-per-second count?
Workaround (manual calculation)
I have installed an analytics tool, RorVsWild, which is very similar to New Relic. It gives me the response time of every route in my Ruby on Rails application. It also gives the average response time across all routes, which is 250 ms. If the average response time of the system is T, can I calculate the number of requests the system can handle as 1000/T?
Also, I am running Puma behind NGINX, which is multithreaded and running 5 threads, so eventually:
requests per second = thread_count * (1000/T)
In my case thread_count = 5.
Thank you so much for reading; your suggestions will be very helpful.
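
As a quick sanity check on that formula, using the figures quoted above (a back-of-the-envelope estimate that ignores queueing, GC pauses and time spent waiting on I/O):

# Rough capacity estimate from average response time and thread count
avg_response_ms = 250                      # average response time from the analytics tool
threads = 5                                # Puma threads
per_thread_rps = 1000 / avg_response_ms    # 4 requests per second per thread
total_rps = threads * per_thread_rps       # about 20 requests per second
print(per_thread_rps, total_rps)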

Configure Unicorn on Heroku

I followed these links for configuration:
https://devcenter.heroku.com/articles/rails-unicorn
http://www.neilmiddleton.com/getting-more-from-your-heroku-dynos/
my config/unicorn.rb:
worker_processes 2
timeout 60
With this config, it still gives a timeout error after 30 seconds.
The Heroku router will timeout all requests at 30 seconds. You cannot reconfigure this.
See https://devcenter.heroku.com/articles/request-timeout
It is considered a good idea to set application-level timeouts to a lower value than the hard 30-second limit so that you don't leave dynos processing requests that the router has already timed out.
If you have requests that are regularly taking longer than 30 seconds you may need to push some of the work involved onto a background worker process.

Using Puma and Sidekiq in a backend Rails app

I have a backend Rails server with Sidekiq, which serves as an API server. The app works as follows:
My Rails server receives many requests from incoming API clients at the same time.
For each of these requests, the Rails server allocates jobs to a Sidekiq server. The Sidekiq server makes requests to external APIs (such as Facebook) to get data, analyzes it and returns a result to the Rails server.
For example, if I receive 10 incoming requests from my API clients, then for each request I need to make 10 requests to external API servers, get the data and process it.
My challenge is to make my app respond to incoming requests concurrently. That is, for each incoming request, my app should process in parallel: make calls to external APIs, get data and return a result.
Now, I know that Puma can add concurrency to a Rails app, while Sidekiq is multi-threaded.
My question is: do I really need Sidekiq if I already have Puma? What would be the benefit of using both Puma and Sidekiq?
In particular, with Puma, I can just invoke my external API calls, data processing, etc. from my Rails app, and they will automatically be concurrent.
Yes, you probably do want to use Puma and Sidekiq. There are really two issues at play here.
Concurrency (as it seems you already know) is the number of web requests that can be handled simultaneously. Using an app server like Puma or Unicorn will definitely help you get better concurrency than the default WEBrick server.
The other issue at play is the length of time that it takes your server to process a web request.
The reason that these two things are related is that the number of requests per second that your app can process is a function of both the average processing time for each request and the number of worker processes that are accepting requests. Say your average response time is 100ms. Then a single web worker can process 10 requests per second. If you have 5 workers, then you can handle 50 requests per second. If your average response time is 500ms, then you can handle 2 reqs/sec with a single worker, and 10 reqs/sec with 5 workers.
Interacting with external APIs can be slow at times, and in the worst cases it can be very unreliable with unresponsive servers on the remote end, or network outages or slowdowns. Sidekiq is a great way to insulate your application (and your end users) from the possibility that the remote API is responding slowly. Imagine that the remote API is running slowly for some reason and that the average response time from it has slowed down to 2 seconds per request. In that case you'd only be able to handle 2.5 reqs/sec with 5 workers. With any more traffic than that, your end users might start to see long wait times before any page on your app could respond, even pages that don't make remote API calls, because all of your web workers might be waiting for the slow remote API to respond. As traffic continues to increase, your users would start getting connection timeouts.
The idea with using Sidekiq is that you separate the time spent waiting on the external API from your web workers. You'd basically take the request for data from your user, pass it to Sidekiq, and then immediately return a response to the user that basically says "we're processing your request". Sidekiq can then pick up the job and make the external request. After it has the data it can save that data back into your application. Then you can use web sockets to push a notification to the user that the data is ready. Or even push the data directly to them and update the page accordingly. (You could also use polling to have the page continually asking "is it ready yet?", but that gets very inefficient very quickly.)
I hope this makes sense. Let me know if you have any questions.
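
The pattern described above, sketched with Celery in Python (hypothetical names; in the Rails case this would be a Sidekiq job plus a controller action that returns immediately, with websockets or polling for the result):

# Hypothetical sketch of the "accept, queue, then poll or push" pattern
from celery import Celery
from celery.result import AsyncResult

app = Celery("api",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def fetch_remote_data(query):
    ...  # slow call to the external API; the return value is stored in the backend

def create_job(query):
    # Enqueue the work and answer immediately instead of holding a web worker.
    result = fetch_remote_data.delay(query)
    return {"status": "processing", "job_id": result.id}

def job_status(job_id):
    # Polled by the client (or replaced by a websocket push when the job finishes).
    result = AsyncResult(job_id, app=app)
    if result.ready():
        return {"status": "done", "data": result.get()}
    return {"status": "processing"}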
Sidekiq, like Resque and Delayed Job, is designed to provide asynchronous job processing from a queue.
If you don't need jobs to be queued up and run asynchronously, there's no substantial benefit (or harm) to using Sidekiq.
If the tasks need to run synchronously (which it sounds like you might—it's not clear if clients are waiting for data or just requesting that jobs run), Sidekiq and its relatives are likely the wrong tool for the job. There is no guaranteed processing time when using Sidekiq or other solutions; jobs are pushed onto the end of the stack, however long that may be, and won't be processed until their turn comes up. If clients are waiting for data, they may time out long before your worker pool ever processes their jobs.

Resources