I am load testing an app I am working on. With Thin behind nginx I get
connect() failed (111: Connection refused) while connecting to upstream
when I send far more requests than my configuration can handle. With Puma I only get a 504 timeout.
Why does Thin refuse connections under high load?
In your Thin config there is a parameter called max_conns: <num connections>; when more connections come in than the specified limit, Thin refuses the new ones.
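As a sketch, a Thin YAML config might look like this (all values here are examples, not your actual settings):

```yaml
# Hypothetical excerpt of a Thin config (e.g. config/thin.yml)
address: 0.0.0.0
port: 3000
servers: 4
# Incoming connections beyond this limit are refused, which nginx then
# reports as "connect() failed (111: Connection refused)" upstream errors
max_conns: 1024
max_persistent_conns: 100
```

Raising max_conns trades refused connections for longer queues, so it only helps if the app can eventually work through the backlog.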
Related
I have deployed a Rails app with nginx and the Puma web server, and sometimes I get the following error.
2018/12/13 12:07:04 [info] 25621#0: *156784 client timed out (110: Connection timed out) while waiting for request, client: 10.66.20.55, server: 0.0.0.0:80
Can you please tell me what this error means? Is the Puma server busy, or is nginx busy?
As you can see, it is not an error, just [info], so don't worry about it.
It looks like the client didn't send any request within keepalive_timeout, so the connection was closed.
You should worry about [error] entries, because those occur when the application really is not accessible.
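The timeout in question is this nginx setting (shown here with its default value; the message above is just nginx enforcing it):

```nginx
http {
    # Idle keep-alive connections are closed after this interval; nginx then
    # logs "client timed out (110: Connection timed out) while waiting for
    # request" at [info] level. Default is 75s.
    keepalive_timeout 75s;
}
```

Lowering it frees idle connections sooner; it has no effect on how busy Puma is.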
I am using Capistrano 2.15.5 for my Rails application deployment. I am using localhost for the server and have also tried 127.0.0.1 in place of localhost. After running *cap production deploy:setup* the error that I am getting is: **Errno::ECONNREFUSED: Connection refused - connect(2)**.
After searching I found out that ECONNREFUSED means the client couldn't make a TCP connection to the server, either because it's down or because its DNS is not resolving.
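For reference, the errno itself is easy to reproduce in a couple of lines of Ruby (this assumes nothing is listening on port 1 of localhost, which is almost always the case):

```ruby
require "socket"

# Returns true if a TCP connection attempt is refused by the kernel --
# the same Errno::ECONNREFUSED that Capistrano surfaces.
def refused?(host, port)
  TCPSocket.new(host, port).close
  false
rescue Errno::ECONNREFUSED
  true
end

puts refused?("127.0.0.1", 1)  # => true when nothing listens on port 1
```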
How do I fix this issue?
Thanks.
You have to add your SSH key to the server's authorized_keys.
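A minimal sketch of that fix, where "deploy" and "example.com" are placeholders for your actual deploy user and server:

```shell
# Generate a key pair if you don't already have one (accept the defaults)
ssh-keygen -t ed25519

# Install the public key into the server's ~/.ssh/authorized_keys
ssh-copy-id deploy@example.com

# Verify that key-based login works before re-running
# cap production deploy:setup
ssh deploy@example.com 'echo ok'
```

Capistrano connects over SSH, so anything that breaks a plain ssh login will also break the deploy.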
We have a Ruby on Rails application running on a VPS. Tonight nginx went down and responded with "502 Bad Gateway". The nginx error log contained lots of the following messages:
2013/10/02 00:01:47 [error] 1136#0: *1 connect() to
unix:/app_directory/shared/sockets/unicorn.sock failed (111:
Connection refused) while connecting to upstream, client:
5.10.83.46, server: www.website.com, request: "GET /resource/206 HTTP/1.1", upstream:
"http://unix:/app_directory/shared/sockets/unicorn.sock:/resource/206",
host: "www.website.com"
These errors started suddenly; the previous error message was 5 days earlier.
So the problem was in the unicorn server. I opened the unicorn error log and found only some info messages there, none of which were connected to the problem. The production log was useless too.
I tried restarting the server via service nginx restart, but it didn't help. There were also no pending unicorn processes.
The problem was solved when I redeployed the application. That is strange, because I had deployed the same version of the application 10 hours before the server went down.
I'm looking for any suggestions on how to prevent such 'magic' cases in the future. I appreciate any help you can provide!
It looks like your unicorn server wasn't running when nginx tried to access it.
This can be caused by a VPS restart, an exception in the unicorn process, or the unicorn process being killed due to low free memory. (IMHO a VPS restart is the most likely reason.)
Check unicorn with
ps aux | grep unicorn
You can also check the server uptime with
uptime
Then you can:
add a script that starts unicorn on VPS boot
add it as a service
run a monitoring process (like monit)
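The monit option could look roughly like this (the pid file path and service commands below are placeholders; adjust them to your deploy layout):

```monit
# Hypothetical monit stanza: restart unicorn if its process disappears,
# or if it grows past a memory limit (a common cause of OOM kills).
check process unicorn with pidfile /app_directory/shared/pids/unicorn.pid
  start program = "/etc/init.d/unicorn start"
  stop program  = "/etc/init.d/unicorn stop"
  if totalmem > 300 MB for 2 cycles then restart
```

With something like this in place, a unicorn that dies overnight is restarted automatically instead of leaving nginx returning 502s until someone redeploys.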
While doing a load test I found Passenger throwing the error below when lots of concurrent requests first hit the server. On the client side it returns a 502 error code. However, after some requests, say 1000-2000, it works fine.
2013/07/23 11:22:46 [error] 14131#0: *50226 connect() to /tmp/passenger.1.0.14107/generation-
0/request failed (11: Resource temporarily unavailable) while connecting to upstream, client: 10.251.18.167, server: 10.*, request: "GET /home HTTP/1.0", upstream: "passenger:/tmp/passenger.1.0.14107/generation-0/request:", host: hostname
Server details:
Passenger 4.0.10
ruby 1.9.3/2.0
Server: EC2 m1.xlarge
64-bit, 4 cores, 15 GB
Ubuntu 12.04 LTS
It's a web server that serves dynamic web pages for a Rails application.
Can somebody suggest what the issue might be?
A "temporarily unavailable" error in that context means the socket backlog is full. That can happen if your app cannot handle requests fast enough: the queue grows and grows until it is full, and then you start getting those errors. In the meantime your users' response times grow and grow until they get an error. This is probably an application-level problem, so it's best to start there. Try figuring out why your app is slow, and at which request it is slow, and fix that. Or maybe you need to scale to more servers.
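If you do need a stopgap while profiling the app, the Passenger process pool can be enlarged in the nginx config so the backlog drains faster (the numbers below are examples, and this only masks a slow application rather than fixing it):

```nginx
http {
    # More application processes drain the socket backlog faster.
    passenger_max_pool_size 12;   # default is 6
    # Keep some processes warm so bursts don't pay spawn latency.
    passenger_min_instances 4;
}
```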
I am running Rainbows! with the ThreadPool model behind nginx (unix socket).
On large file uploads I am getting the following in the nginx error log (nothing in the application log):
readv() failed (104: Connection reset by peer) while reading upstream
The browser receives response:
413 Request Entity Too Large
Why does this happen?
"client_max_body_size 80M;" is set at both the http and server level (just in case) in nginx
nginx communicates with Rainbows! over a unix socket (upstream socket + location # proxy_pass)
I don't see anything in the other logs. I have checked:
rainbows log
foreman log
application log
dmesg and /var/log/messages
This happens when uploading a file larger than about 1 MB.
The ECONNRESET (Connection reset by peer) error means the connection was uncleanly closed by the backend application. This usually happens if the backend application dies, e.g. due to a segmentation fault, or is killed by the OOM killer. To find out the exact reason you have to examine your backend logs (if any) and/or the system logs.
Maybe you have client_max_body_size set in your nginx.conf limiting the body size to 1 MB, e.g.
client_max_body_size 1M;
In that case you'd need to raise or remove it to allow uploads of more than 1 MB.
Turns out Rainbows! has a configuration option called client_max_body_size that defaults to 1 MB.
The option is documented here.
If this option is set, Rainbows! silently responds 413 to requests that are too large. You might not know it's breaking unless you run something in front of it.
Rainbows! do
# let nginx handle max body size
client_max_body_size nil
end