I'm using HAProxy for load balancing, and it works very well.
I set up a statistics page, but sometimes it returns "the connection was reset" when I refresh it.
listen status 0.0.0.0:8080
stats enable
stats refresh 5s
stats uri /admin
Is this a bug, or is there some configuration problem?
Thanks!
first a "crash" would mean the process dies which is not the case. What is happening here (but you should have warnings when you start the process) is that the stats page is defined in a TCP listener instead of an HTTP one. So you need to add :
mode http
for it to work.
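With that change applied, the listener from the question would look roughly like this (address and URI kept exactly as in the original config):
listen status 0.0.0.0:8080
mode http
stats enable
stats refresh 5s
stats uri /admin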
Also, you should be getting other warnings about timeouts and so on. Please fix the warnings before asking for help, as they generally report the cause of the problem you're facing.
In addition, this is the defaults section in my config file (if it helps):
defaults
log global
mode http
option httplog
option dontlognull
option nolinger
option redispatch
retries 3
maxconn 50000
contimeout 15s
clitimeout 15s
srvtimeout 15s
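As a side note on the timeout warnings mentioned above: contimeout, clitimeout and srvtimeout are the legacy keyword forms, and reasonably recent HAProxy versions warn about them. A sketch of the equivalent modern directives, assuming the same 15s values, would be:
timeout connect 15s
timeout client 15s
timeout server 15s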
I have a Prometheus service running in a Docker container, and we have a group of servers that keep alternating between reporting up and down with the error "context deadline exceeded".
Our scrape interval is 15 seconds and the timeout is 10 seconds.
The servers had been polled with no issues for months, and no new changes have been identified. At first I suspected a networking issue, but I have triple-checked the entire path and all containers, and everything is okay. I have even run tcpdump on the destination server and the Prometheus polling server and can see the connections establish and complete, yet the targets are still being reported as down.
Can anyone tell me where I can find logs relating to "context deadline exceeded"? Is there any additional information I can find on what is causing this?
From other threads it seems like this is a timeout issue, but the servers are a subsecond away and again there is no packet loss occurring anywhere.
Thanks for any help.
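For reference, the two values mentioned above map to the scrape_interval and scrape_timeout settings in prometheus.yml; "context deadline exceeded" is reported when a scrape does not finish within scrape_timeout. A minimal excerpt (the job name and target are placeholders) looks something like:
scrape_configs:
  - job_name: 'node'
    scrape_interval: 15s
    scrape_timeout: 10s
    static_configs:
      - targets: ['server-01:9100']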
We are investigating an issue on a deployed Cloud Run service, where requests made to the service occasionally fail with a StatusCodeError: 500, while no log of said requests appears on Cloud Run.
Served requests usually produce two log lines detailing the request, route and status code (POST 200 on https://service-name.a.run.app/route/...)
One with log name projects/XXX/logs/run.googleapis.com/stdout is produced by our application to log the serving of every request
One with log name projects/XXX/logs/run.googleapis.com/requests is automatically produced by cloud run on every request
When the incident occurs, neither of those is logged. The client (running in a GKE pod in the same project) has the only log of the failing requests, with the following message:
StatusCodeError: 500 - "\n<html><head>\n<meta http-equiv=\"content-type\" content=\"text/html;charset=utf-8\">\n<title>500 Server Error</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Server Error</h1>\n<h2>The server encountered an error and could not complete your request.<p>Please try again in 30 seconds.</h2>\n<h2></h2>\n</body></html>\n"
Rough timeline of the last incident:
14:41 - Service is serving requests as expected, producing both log lines each time
14:44 to 14:56 - Cloud Run logs are empty, and every request made to the service (~30) gets the 500 error message
14:56 - Cloud Run terminates the currently running container instance (as happens after some inactivity, for instance), which is correctly logged by the application ([INFO] Handling signal: term)
14:58 - Cloud Run instantiates a new container instance and starts serving incoming requests (which are logged normally)
The absence of logs during the incident makes it hard to investigate its cause, and at this stage we would be grateful for any kind of lead.
Our service has another known issue that may or may not be related. The service is designed to avoid multiple replicas, as a single one should be able to handle the load and serve concurrent requests (Cloud Run concurrency = 80), but it has a relatively long cold start time (~30s). This leads to 429 errors when a spike of requests comes in while no replica is available (because Cloud Run hard-caps concurrency to 1 during cold start). This issue was somewhat mitigated by allowing some replication (currently maxScale = 3), since each replica can put a request on hold during the cold start, but it will require some work on the client side to be handled correctly (simple retries after the cold start).
I have found this PIT that describes the aforementioned behavior. It seems to happen because a part of Cloud Run thinks that there are already provisioned instances handling the traffic but there aren't. This issue is currently being worked on internally but there's no ETA for a fix at the moment.
The current workaround is to set a maximum number of instances to at least 4.
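If the service is managed with the gcloud CLI, that workaround can be applied with something along these lines (service name and region are placeholders):
gcloud run services update my-service --max-instances=4 --region=europe-west1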
My question is related to the question already posted here.
It's indicated in the original post that the timeout happens about once a month. In our setup we are receiving it once every 10 seconds. Our production logs are filled with these handshake exception messages. Would setting the timeout value for the handshake apply to our scenario as well?
Yes. Setting handshake-timeout=0 on the relevant acceptor URL in your broker.xml applies here even with the higher volume of timeouts.
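As a rough illustration (the acceptor name, port and any other URL parameters should match whatever is already in your broker.xml, so treat this as a sketch rather than a drop-in config):
<acceptor name="artemis">tcp://0.0.0.0:61616?handshake-timeout=0</acceptor>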
We've been running Google Cloud Run for a little over a month now and noticed that we periodically have cloud run instances that simply fail with:
The request failed because the HTTP connection to the instance had an error.
This message is nearly always* preceded by the following message (those are the only messages in the log):
This request caused a new container instance to be started and may thus take longer and use more CPU than a typical request.
* I cannot find, nor recall, a case where that isn't true, but I have not done an exhaustive search.
A few things that may be of importance:
Our concurrency level is set to 1 because our requests can take up to the maximum amount of memory available, 2GB.
We have received errors that we've exceeded the maximum memory, but we've dialed back our usage to obviate that issue.
This message appears to occur shortly after 30 seconds (e.g., 32, 35) and our timeout is set to 75 seconds.
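For context, the settings described in this list correspond roughly to a deployment along these lines (service name, image and region are placeholders; the values are the ones mentioned above):
gcloud run deploy my-service --image=gcr.io/my-project/my-image --concurrency=1 --memory=2Gi --timeout=75 --region=us-central1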
In my case, this error was always thrown 120 seconds after receiving the request. I figured out that the issue was Node 12's default request timeout of 120 seconds. So if you are using a Node server, you can either change the default timeout or update to Node 13, as they removed the default timeout: https://github.com/nodejs/node/pull/27558.
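If you stay on Node 12, the timeout can be changed on the HTTP server itself; a minimal sketch (the request handler and port are placeholders):
const http = require('http');
const server = http.createServer((req, res) => res.end('ok'));
server.setTimeout(0); // 0 disables the default 120-second socket timeout
server.listen(8080);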
If your logs didn't catch anything useful, most probably the instance crashed because you are running heavy CPU tasks. A mention of this can be found on the Google Issue Tracker:
A common cause for 503 errors on Cloud Run would be when requests use a lot of CPU and as the container is out of resources it is unable to process some requests
For me, the issue got resolved by upgrading Node in the Dockerfile from "FROM node:13.10.1 AS build" to "FROM node:14.10.1 AS build".
I followed these links for configuration:
https://devcenter.heroku.com/articles/rails-unicorn
http://www.neilmiddleton.com/getting-more-from-your-heroku-dynos/
my config/unicorn.rb:
worker_processes 2
timeout 60
With this config, it still gives a timeout error after 30 seconds.
The Heroku router will time out all requests at 30 seconds. You cannot reconfigure this.
See https://devcenter.heroku.com/articles/request-timeout
It is considered a good idea to set the application-level timeouts to a lower value than the hard 30-second limit, so that you don't leave dynos processing requests that the router has already timed out.
If you have requests that are regularly taking longer than 30 seconds you may need to push some of the work involved onto a background worker process.
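Following that advice, a possible config/unicorn.rb for the setup in the question (same worker count, only the timeout value changed as a sketch) would be:
worker_processes 2
timeout 15 # keep the Unicorn timeout below Heroku's hard 30-second router limit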