Sinatra - 504 Gateway Time-out The server didn't respond in time - timeout

I am running a Sinatra application on a server, and it often generates a timeout error:
504 Gateway Time-out The server didn't respond in time.
Other times it works fine. I can't figure out the cause of this. Can anyone help?

This happened because of high load. I created two more instances of the application behind HAProxy and increased the timeout value. Now it works fine.
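For anyone hitting the same thing: the relevant HAProxy settings are the `timeout` directives in haproxy.cfg. A minimal sketch of the defaults section (the values are illustrative, not recommendations):

```
# haproxy.cfg
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    # "timeout server" is how long HAProxy waits for the backend to respond;
    # it is the usual source of 504s, so raise it if the backend legitimately
    # needs longer.
    timeout server  300s
```

Adding instances spreads the load so individual requests finish faster; raising `timeout server` covers the ones that are still slow.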

Puma crashing after running at 100% for a few seconds

I've deployed my Ruby on Rails app on an AWS EC2 instance using Nginx and Puma. The app has a page that runs lots of queries in loops (I know that's bad, but we'll be improving it soon).
The problem is that this page returns a 502 Gateway Timeout error and crashes the Puma server. I investigated the CPU usage on the server: the ruby process runs at 100% CPU for a few seconds, and then Puma crashes.
I'm unsure why this is happening, since the same page with the same data loads on my local PC in 6-7 seconds.
Is this some AWS limit on processes?
Is this something on the Puma side?
Without further information, it's not possible to say exactly what is causing the problem.
As an educated guess, I'd say it could be an out-of-memory issue.
I found the issue after multiple hours of debugging. It was a very rare edge case that put the server into an infinite loop, causing memory to grow without bound.
I used top -i to spot the increasing memory usage.
Thank you all for the suggestions and responses.
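For reference, the same check that top -i gives interactively can be scripted, which helps when the leak only shows up occasionally. A sketch, assuming Linux (it reads /proc; here the current shell's PID stands in for Puma's):

```shell
# Print the resident set size (RSS) of a process, in kB.
# Replace $$ with the PID of the puma/ruby process you are watching,
# e.g. pid=$(pgrep -f puma | head -n1)
pid=$$
rss_kb=$(awk '/VmRSS/ {print $2}' /proc/$pid/status)
echo "PID $pid RSS: ${rss_kb} kB"
```

Run it in a loop (or under `watch`) while reproducing the request; steadily climbing RSS points at a leak or runaway loop rather than an AWS limit.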

Request consistently returns 502 after 1 minute while it successfully finishes in the container (application)

To preface this, I know an HTTP request that's longer than 1 minute is bad design and I'm going to look into Cloud Tasks but I do want to figure out why this issue is happening.
So, as specified in the title, I have a simple API request to a Cloud Run service (fully managed) that takes longer than 1 minute: it does some DB operations, generates PDFs, and uploads them to GCS. When I make this request from the client (browser), it consistently gives me back a 502 response after 1 minute of waiting (presumably coming from the HTTP Load Balancer).
However, when I look at the logs, the request completes successfully (in about 4 to 5 minutes).
I'm also getting one of these "errors" for each PDF that's generated and uploaded to GCS, but from what I've read, these shouldn't really be the issue.
To verify that it's not just some timeout issue in the application code or the browser, I put a 5-minute sleep on a random API call in a local build, and everything worked fine and dandy.
I have set the request timeout on Cloud Run to the maximum (15 min), the max concurrency to the default 80, and CPU and RAM to 2 and 2 GB respectively, and I set the timeout on the Fastify (Node.js) server to 15 min as well. Furthermore, I went through the logs and couldn't spot any error indicating that the instance was out of memory, or any other error around the time I'm receiving the 502. Finally, I also followed the advice to use strace for a more in-depth look at system calls, just in case something was going very wrong there, but from what I saw, everything looked fine.
In the end my suspicion is that there's some weird race condition in routing between the container and gateway/load balancer but I know next to nothing about Knative (on which Cloud Run is built) so again, it's just a hunch.
If anyone has any more ideas on why this is happening, please let me know!
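For reference, the Cloud Run request timeout mentioned above can be set from the gcloud CLI (SERVICE_NAME is a placeholder for the actual service):

```shell
# Raise the Cloud Run request timeout to the 15-minute maximum (900 s)
gcloud run services update SERVICE_NAME --timeout=900
```

Note this only governs Cloud Run's own request timeout; anything else in the path (a load balancer, a proxy, the browser) applies its own limit independently, which is consistent with the 502 arriving at exactly 1 minute despite the 15-minute setting.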

How to set rails request timeout longer?

My app is built on rails and the web server is puma.
I need to load data from the database, and it takes more than 60 seconds to load all of it. Every time I send a GET request to the server, I have to wait more than 60 seconds.
The request timeout is 60 seconds, so I always get a 504 Gateway Timeout. I can't find where to change the request timeout in the Puma configuration.
How can I set the request timeout to longer than 60 seconds?
Thanks!
UPDATE: Apparently worker_timeout is NOT the answer, as it relates to the whole process hanging, not just an individual request. So it seems to be something Puma doesn't support, and the developers are expecting you to implement it with whatever is fronting Puma, such as Nginx.
ORIGINAL: Rails itself doesn't time out, but if you're running Puma, you can use worker_timeout in config/puma.rb. Example:
worker_timeout (24*60*60) if ENV['RAILS_ENV'] == 'development'
The 504 error here comes from the gateway in front of the Rails server; for example, it could be Cloudflare, Nginx, etc.
So the setting lives there: you'd have to increase the timeout at the gateway, as well as in Rails/Puma.
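If Nginx is the gateway, the relevant directive is proxy_read_timeout, whose default of 60 seconds matches the behavior described. A sketch, assuming a proxied upstream (the upstream name is a placeholder):

```nginx
# nginx site config
location / {
    proxy_pass http://puma_upstream;
    # How long Nginx waits for the upstream to produce a response
    # before returning 504 Gateway Time-out (default: 60s)
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
}
```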
Preferably, you should optimize your code and queries to respond faster, so that there is no bottleneck in production when heavy traffic hits your application.
If you really do need to allow longer request times, you can manage the request timeout with rack-timeout:
https://github.com/kch/rack-timeout
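A minimal Rails setup for rack-timeout, following the gem's README (the initializer path is conventional; note that rack-timeout enforces an upper bound on request time, so set service_timeout above your slowest expected request):

```ruby
# config/initializers/rack_timeout.rb
# Requires the rack-timeout gem in the Gemfile.
Rails.application.config.middleware.insert_before(
  Rack::Runtime, Rack::Timeout, service_timeout: 120  # seconds
)
```

Keep in mind this only controls when the app itself gives up on a request; the 60-second gateway timeout still has to be raised at the gateway.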

Meteor reports 500 errors randomly

I am running Meteor 1.2.1, but this issue occurred on 1.1 as well. It seems to happen pretty randomly, though I tend to notice that if I take focus off the window, the errors start appearing more regularly. This is the error that I see:
sockjs-0.3.4.js:854 POST http://blah.something.com/sockjs/770/bh33bcip/xhr 500 (Internal Server Error)
AbstractXHRObject._start # sockjs-0.3.4.js:854
(anonymous function) # sockjs-0.3.4.js:881
I recently installed natestrauser:connection-banner, which pops a banner at the top when Meteor.connection.status().status is anything other than "connected". Since I installed it, the banner pops up every time I see the 500 error. The 500 error seems to kick the connection into "waiting" status. It reconnects eventually, but it's a rather annoying error.
I don't see anything on the server side whatsoever, nor on the client side. Does anyone have ideas on how to debug this, or why I'm getting this error?
A picture is included here:
http://imgur.com/EtTowR4
I figured out the problem! I use Pound as a reverse proxy, and the default installation has a very short timeout. I changed that timeout from 15 seconds to 60 seconds and the 500 errors disappeared. I don't know whether this is because Pound's keep-alive was set to 30 (which would likely not keep anything alive, given that the timeout was 15 seconds), or because the Meteor client doesn't check in more often than every 15 seconds. Perhaps someone can chime in on why this is.
Either way, beware of your reverse proxy settings with Meteor!
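For anyone else using Pound: the timeouts live in pound.cfg as global directives. A sketch matching the fix described above (the values are the ones mentioned in the post, not general recommendations):

```
# pound.cfg (global section)
TimeOut  60   # was 15 in the default installation described above
Alive    30   # interval, in seconds, for backend keep-alive checks
```

The lesson generalizes: long-polling/WebSocket-style clients like Meteor's sockjs transport are sensitive to short proxy timeouts, whatever the proxy.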

Neo4j Server: How to set connection timeouts

How do I set (in my case, raise) the connection timeouts of the Neo4j server? I have a server extension to which I POST data, sometimes so much that the extension runs for a couple of minutes. But after 200 seconds, the connection is dropped by the server. I think I have to raise the max idle time of the embedded Jetty, but I don't know how to do that, since it's all configured within the Neo4j Server code.
Edit: I've tried both Neo4j 1.8.2 and 1.9.RC2 with the same result.
Edit 2: Added the "embedded-jetty" tag because there have been no answers so far; perhaps the question can be answered by someone with knowledge of embedded Jetty, since Neo4j uses one.
Thank you!
I still don't know whether there is a solution for Neo4j server versions earlier than 2.0. However, after switching to 2.0.0 and above, the issue was gone in my case.
The server guards against orphaned transactions by using a timeout. If there are no requests for a given transaction within the timeout period, the server will roll it back. You can configure the timeout period by setting the following property to the number of seconds before timeout. The default timeout is 60 seconds.
org.neo4j.server.transaction.timeout=60
See http://docs.neo4j.org/chunked/stable/server-configuration.html
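Concretely, per the linked server-configuration docs, the property goes in the server properties file (the path below assumes a standard Neo4j 2.x layout; the value is an example):

```
# conf/neo4j-server.properties
# Roll back orphaned transactions after 10 minutes instead of the 60 s default
org.neo4j.server.transaction.timeout=600
```

This only governs the server-side transaction timeout; a client-side or proxy connection timeout (like the 200-second drop described in the question) would still need to be raised separately.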
