I have deployed a Rails app with Nginx and the Puma web server, and sometimes I get the following error:
2018/12/13 12:07:04 [info] 25621#0: *156784 client timed out (110: Connection timed out) while waiting for request, client: 10.66.20.55, server: 0.0.0.0:80
Can you please tell me what this error means? Is the Puma server busy, or is Nginx busy?
As you can see, it is not an error, just [info], so don't worry about it.
It looks like the client didn't send any request during keepalive_timeout, so the connection was closed.
You should worry about [error] entries, because errors occur when the application is really not accessible.
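For reference, the directive that governs this is keepalive_timeout; a minimal sketch of where it lives in nginx.conf (the value shown is nginx's default, adjust to taste):

```nginx
http {
    # How long nginx keeps an idle keep-alive connection open while waiting
    # for the next request. When it expires, nginx closes the connection and
    # logs the "[info] ... client timed out ... while waiting for request" line.
    keepalive_timeout 75s;
}
```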
Occasionally I receive a connection timeout when calling the /userinfo endpoint of my KeyCloak-Server.
So far, I have no indication what's wrong and what causes the timeouts. There are no errors in the server.log I configured. Also, I cannot reproduce the issue, I just see the errors in the logs of the application trying to authenticate with keycloak.
Is there some sort of connection limit that my keycloak might use?
What additional logs can I activate to narrow down the problem?
I am currently on version 17.0.1
Try running Keycloak in debug mode: kc.sh start --log-level=debug. If the /userinfo call reached Keycloak, there will be a debug log entry for it; you can match the time the error occurred against the Keycloak log.
Do you have any other components between your application and Keycloak, such as a proxy, a DNS server, etc.? You would need to check their logs as well.
Also check out this document regarding the REST API in Keycloak -> https://github.com/keycloak/keycloak-community/blob/main/design/rest-api-guideline.md#rate-lmiting
For context, I've already tried to implement many answers regarding nginx (such as this one) to no success. This appears to be an issue related to nginx that isn't explored in other answers.
Using Rails 5.2 and Docker on a multicontainer EBS, WebSocket connections are failing. This failure only occurs when deployed and the same configuration works as expected locally.
I am using the Postgres adapter with ActionCable.
Problem Description:
My website is deployed to EBS and displays information just fine. My Postgres instance is connected and working as expected. However, these errors occur on WebSocket enabled pages:
In browser:
When I go to my SSL enabled site, a notice appears in the JS console:
The connection to wss://redacted.com/cable was interrupted while the page was loading.
Firefox can’t establish a connection to the server at wss://redacted.com/cable.
(This message repeats every ~5s)
Rails server logs:
Failed to upgrade to WebSocket (REQUEST_METHOD: GET, HTTP_CONNECTION: keep-alive, HTTP_UPGRADE: )
Finished "/cable/"[non-WebSocket] for <redacted IP> at 2018-02-28 02:49:03 +0000
(This message repeats every ~5s)
Prior research
Error codes:
Others in this situation note that their console throws a 400 - Bad handshake error (or similar). This is not the case here.
nginx (important):
As well, others seem to think that nginx is to blame. However, I cannot access nginx within my EBS instance. Any call for nginx results in service nginx not found and any modification of an nginx file via .ebextensions fails.
It's worth noting that I have no nginx Docker image or configuration whatsoever. I'm trying to modify EBS' built-in nginx configuration. To get to this point (where my website loads but WebSockets don't work), I have not configured nginx at all.
Troubleshooting steps taken
Changed EBS load balancer from HTTP to TCP
Modification of nginx (see above)
Specifying the ActionCable server URL in my production.rb
Specifying allowed WebSocket hosts in production.rb
Changing cable.yml adapters and hosts
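One thing worth checking, given the empty HTTP_UPGRADE in the Rails log: the proxy in front of the app may be dropping the WebSocket handshake headers. A sketch of the nginx location block that forwards them (the upstream name and the exact override file location are assumptions about a typical EBS Docker setup):

```nginx
# Hypothetical override for the EBS-managed nginx; where this file goes
# depends on the platform version.
location /cable {
    proxy_pass http://docker;                    # assumed upstream name
    proxy_http_version 1.1;                      # WebSockets require HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;      # forward the Upgrade header,
    proxy_set_header Connection "upgrade";       # so HTTP_UPGRADE is not empty
}
```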
We have a Ruby on Rails application that is running on a VPS. Last night Nginx went down and responded with "502 Bad Gateway". The Nginx error log contained lots of the following messages:
2013/10/02 00:01:47 [error] 1136#0: *1 connect() to
unix:/app_directory/shared/sockets/unicorn.sock failed (111:
Connection refused) while connecting to upstream, client:
5.10.83.46, server: www.website.com, request: "GET /resource/206 HTTP/1.1", upstream:
"http://unix:/app_directory/shared/sockets/unicorn.sock:/resource/206",
host: "www.website.com"
These errors started suddenly; the previous error messages were from 5 days earlier.
So the problem was in the Unicorn server. I opened the Unicorn error log and found only some info messages that weren't connected with the problem. The production log was useless too.
I tried to restart the server via service nginx restart, but it didn't help. There were also no pending Unicorn processes.
The problem was solved when I redeployed the application. And that is strange, because I had deployed the same version of the application 10 hours before the server went down.
I'm looking for any suggestions on how to prevent such 'magic' cases in the future. I appreciate any help you can provide!
Looks like your Unicorn server wasn't running when Nginx tried to access it.
This can be caused by a VPS restart, an exception in the Unicorn process, or the Unicorn process being killed due to low free memory. (IMHO a VPS restart is the most likely reason.)
Check whether Unicorn is running with
ps aux | grep unicorn
Also you can check server uptime with
uptime
Then you can:
add a script that starts Unicorn on VPS boot
add it as a service
run some monitoring process (like monit)
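As a minimal sketch of the watchdog idea (the pid-file path and init script are assumptions for a typical Capistrano-style layout, not your actual paths):

```shell
#!/bin/sh
# Restart Unicorn if its pid file points at a dead process.
PIDFILE=/app_directory/shared/pids/unicorn.pid
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
  echo "unicorn running"
else
  echo "unicorn down"
  # /etc/init.d/unicorn start   # hypothetical init script; monit would do this for you
fi
```

A tool like monit implements the same check-and-restart loop with retries and alerting, which is why it is usually preferable to a hand-rolled cron script.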
While doing a load test I found Passenger throwing the error below when lots of concurrent requests hit the server. On the client side it returns a 502 error code. However, after some requests, say 1000-2000, it works fine.
2013/07/23 11:22:46 [error] 14131#0: *50226 connect() to /tmp/passenger.1.0.14107/generation-
0/request failed (11: Resource temporarily unavailable) while connecting to upstream, client: 10.251.18.167, server: 10.*, request: "GET /home HTTP/1.0", upstream: "passenger:/tmp/passenger.1.0.14107/generation-0/request:", host: hostname
Server details:
Passenger 4.0.10
Ruby 1.9.3/2.0
EC2 m1.xlarge (64-bit, 4 cores, 15 GB)
Ubuntu 12.04 LTS
It's a web server which serves dynamic web pages for a Rails application.
Can somebody suggest what the issue might be?
A "temporarily unavailable" error in that context means the socket backlog is full. That can happen if your app cannot handle requests fast enough: the queue grows and grows until it's full, and then you start getting those errors. In the meantime your users' response times grow until they hit an error. This is probably an application-level problem, so it's best to start there: figure out why your app is slow and at which request it is slow, and fix that. Or maybe you need to scale out to more servers.
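If the app itself can't be made faster right away, one server-side knob is the Passenger process pool. The directive names below are from Passenger's nginx integration; the values are illustrative, not recommendations:

```nginx
# In the http block of nginx.conf, with Passenger's nginx module loaded.
passenger_max_pool_size 12;   # more app processes drain the socket backlog faster
passenger_min_instances 4;    # keep processes warm so bursts don't wait on spawns
```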
I am load testing an app I am working on. With Thin behind Nginx I get
connect() failed (111: Connection refused) while connecting to upstream
when I send many more requests than my configuration can handle. With Puma I only get 504 timeouts.
Why does Thin refuse connections under high load?
In your Thin config there is a parameter called max_conns: <num connections>; when more connections come in than specified, Thin refuses the new ones.
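For example, in a Thin YAML config the parameter looks like this (values are illustrative):

```yaml
# config/thin.yml -- sketch; `thin config -C config/thin.yml` generates a full file
servers: 4
max_conns: 1024              # connections beyond this are refused
max_persistent_conns: 100    # cap on keep-alive connections
```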