I have been trying to use a RabbitMQ server, but for some reason the connection closes abruptly even though I passed the correct username and password.
The RabbitMQ server is running on port 5672, and telnetting to my server on port 5672 shows it is running fine.
I have installed the RabbitMQ server on CentOS, and my RabbitMQ server logs are as follows:
=INFO REPORT==== 19-Dec-2012::06:25:44 ===
accepted TCP connection on [::]:5672 from <host>:42048
=INFO REPORT==== 19-Dec-2012::06:25:44 ===
starting TCP connection <0.357.0> from <host>:42048
=WARNING REPORT==== 19-Dec-2012::06:25:44 ===
exception on TCP connection <0.357.0> from <host>:42048
connection_closed_abruptly
=INFO REPORT==== 19-Dec-2012::06:25:44 ===
closing TCP connection <0.357.0> from <host>:42048
What might be the possible reasons for this to happen?
Thanks
connection_closed_abruptly means the client closed the TCP connection without going through the proper AMQP connection termination process.
Is your rabbit server behind a load balancer? A common cause for connections being abruptly closed as soon as they're started is a TCP load balancer's heartbeat. If this is the case you should see these messages at very regular intervals, and the generally accepted practice seems to be to ignore them. To avoid log file buildup you could also consider raising the log level to "error".
On the other hand, if your client connects to the rabbitmq server directly, this probably means your client does not close the connection in an AMQP-approved way. You could try a different client to confirm whether this is the case.
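For comparison, a client that shuts down cleanly performs the AMQP close handshake before dropping the TCP connection. A minimal sketch with the Python pika client (host, credentials and queue name are placeholders, and your client may well be in another language):

import pika

# Placeholder connection details; substitute your own host and credentials.
params = pika.ConnectionParameters(
    host="my-rabbit-host",
    credentials=pika.PlainCredentials("guest", "guest"),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="test")
channel.basic_publish(exchange="", routing_key="test", body=b"hello")
connection.close()   # proper AMQP close handshake, so no connection_closed_abruptly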
Btw, telnetting to your server is likely to cause abrupt closings too. :)
Check your connection limit
Your connection is very short-lived; this is usually caused by the client using the connection improperly.
Related
I'm running 2 Docker containers (TcpServer, TcpClient). In each container there is an init.sh script which launches the applications. In the init.sh script I've trapped SIGTERM, but I'm not doing any actual handling for it (I'm not passing it on to my application):
trap 'true' SIGTERM
After startup, a TCP connection is established between TcpServer and TcpClient.
TcpClient is a multi-threaded application, with one thread (the receiver) doing:
while (true) {
    ssize_t n = recv(sock, buf, sizeof(buf), 0);  // blocking TCP receive call
    if (n == 0) break;                            // peer closed cleanly (FIN received)
    if (n < 0)  break;                            // handle the received error code
    // process the received data
}
So basically, the idea is that the receiver thread would always get to know about the server going down 'cleanly'.
The observation is that, most of the time, when I issue 'docker stop serverContainer', the client application receives a TCP FIN packet after about 10 seconds. This matches my expectations, because docker first tries to stop the container via SIGTERM, but since that is trapped, it then issues SIGKILL, which it does only after about 10 seconds.
My current understanding is that whenever SIGKILL (or an unhandled SIGTERM) is delivered to a process, the kernel terminates that process and closes all file descriptors opened by it. If this is true, then I should always see a FIN packet going from server to client as soon as the process is killed.
However, a few times, no FIN packet is observed in the traces captured on both the client and server end. As a result, the client doesn't get to know about the server going down for a long time (until it tries to send some data on that connection, or until TCP's keepalive mechanism kicks in).
I'm not sure how this happens, because if I explicitly issue SIGKILL to PID 1 of my server's container (from outside), I have always seen the FIN packet. So why does the server sometimes fail to send a TCP FIN, and only when using docker stop?
Basically I want to ask 2 things:
1. In Linux, when SIGKILL is issued to a TCP server process, is it guaranteed that the kernel/TCP stack will send a TCP FIN packet to the client before terminating?
2. When I use 'docker stop', how exactly are the processes spawned by the main process (PID 1 inside the container) terminated? From what I have read, SIGTERM/SIGKILL is delivered only to PID 1, so why are its child processes not adopted by init/systemd, as happens otherwise (when killing a parent process created outside a container)?
Operating System: Red Hat Enterprise Linux Server release 7.7 (Maipo)
Docker version: Docker version 19.03.14, build 5eb3275d40
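For what it's worth, here is the minimal experiment (a Python sketch, run on a plain Linux host outside Docker) that matches my observation that an explicit SIGKILL does produce a FIN:

import os, signal, socket, time

# Parent plays the client; the forked child plays the "server" process that gets killed.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

pid = os.fork()
if pid == 0:                              # child: accept one connection, then idle
    conn, _ = listener.accept()
    conn.sendall(b"ready")                # tell the parent the connection is accepted
    time.sleep(60)                        # never close() explicitly
    os._exit(0)

listener.close()                          # parent keeps only the client socket
client = socket.create_connection(("127.0.0.1", port))
client.recv(5)                            # wait for b"ready"
os.kill(pid, signal.SIGKILL)              # kernel closes the child's sockets -> FIN
print(client.recv(1024))                  # prints b'' almost immediately
os.wait()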
I wonder if others have noticed this issue with the WSL2 Debian implementation of TCP.
I am connecting from a Docker container running WSL2 Debian v. 20
The TCP client sends a keep-alive packet every second, which is rather overkill. Then, after roughly 5 minutes, the client terminates the connection for no apparent reason. Is anybody else seeing this behavior?
You can reproduce this by just opening a telnet session to another host. But the behavior happens on other types of sockets too.
And before you ask: this issue is not caused by the server; it does not occur when opening the same TCP connection from other hosts.
[wireshark dump of the last few seconds of the idle TCP connection]
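If you want to reproduce this without telnet, a rough Python equivalent of the idle connection (host and port are placeholders for any reachable peer on the LAN) is:

import socket, time

HOST, PORT = "192.168.1.10", 5672        # placeholders: any reachable host/port on the LAN
started = time.time()
sock = socket.create_connection((HOST, PORT))
try:
    while True:
        chunk = sock.recv(4096)          # idle connection: just wait for data or teardown
        if not chunk:                    # empty read means the peer sent a FIN
            print(f"clean close after {time.time() - started:.0f}s")
            break
except OSError as exc:                   # RST, or the connection being dropped in between
    print(f"error after {time.time() - started:.0f}s: {exc}")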
I had the same problem with Ubuntu on WSL2. An outbound SSH connection closed after a period of time if there was no activity on it. This was particularly annoying when running an application that produced no screen output.
I suspect that the internal router that connects WSL to the local network drops idle TCP connections.
The solution was to shorten the TCP keep-alive timers in /proc/sys/net/ipv4; the following worked for me:
echo 300 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 45 > /proc/sys/net/ipv4/tcp_keepalive_intvl
So I figured this out. Unfortunately, the WSL2 implementation of Debian seems to have this hardcoded in the stack. I tried to change the parameters of the socket open call and they didn't cause a change in the behavior.
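For reference, the kind of per-socket override I tried looked roughly like this (a Python sketch of the idea; the TCP_KEEP* constants are Linux-specific). On a normal kernel this shortens keep-alive probing for just that one connection, but under WSL2 it made no difference for me:

import socket

sock = socket.create_connection(("192.168.1.10", 22))          # placeholder peer
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # enable keep-alive on this socket
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)   # idle seconds before the first probe
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 45)   # seconds between probes
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)      # failed probes before the connection is dropped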
I'm getting the below warning in the console when navigating to my Shopify Rails app:
WebSocket connection to 'wss://argus.chi.shopify.io' failed: WebSocket is closed before the connection is established.
Can someone explain what this means and why it might be happening?
The host argus.chi.shopify.io did not respond when I pinged it. Check whether the host is running or not.
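A ping that goes unanswered does not always mean the host is down (ICMP is often blocked), so you can also try the WebSocket handshake itself. A quick sketch using the third-party Python websockets package:

import asyncio
import websockets   # third-party package: pip install websockets

async def probe(url):
    try:
        async with websockets.connect(url):
            print("handshake succeeded")
    except Exception as exc:
        print("handshake failed:", exc)

asyncio.run(probe("wss://argus.chi.shopify.io"))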
I am actually using a RabbitMQ server with a Mosquitto MQTT client for the connection, but after some time the server disconnects from the client, and the error is not traceable.
You should check the RabbitMQ log (and log configuration), and also test with different keep-alive values.
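The keep-alive interval is set on the client when it connects. Your setup uses a Mosquitto client, but as an illustration of where that value lives, here is a sketch with the Python paho-mqtt client (host and credentials are placeholders):

import paho.mqtt.client as mqtt

# paho-mqtt 1.x style constructor; version 2.x additionally requires a CallbackAPIVersion argument.
client = mqtt.Client(client_id="keepalive-test")
client.username_pw_set("guest", "guest")               # placeholder credentials
client.connect("my-rabbit-host", 1883, keepalive=30)   # keep-alive interval in seconds; try different values
client.loop_forever()                                  # runs the network loop, which sends the PINGREQs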
Hi, I'm getting an SSL error in my Rails app on my local server:
An error occurred during a connection to localhost:3000.
SSL received a record that exceeded the maximum permissible length.
(Error code: ssl_error_rx_record_too_long)
and I was told that opening port 443 would fix the problem. I've looked around everywhere, but there doesn't seem to be an answer. Could someone go through the process of how I would open port 443?
So, someone is advising you to open that port in your firewall. I don't think this will help, since you're connecting to port 3000, but it might be that you're running a proxy on port 3000 that needs to connect to port 443.
Instead of opening that port, I would suggest disabling your firewall and retrying. If it works, then you should look into making an exception for that port. Here's how to disable the firewall (which will open that port):
Open System Preferences->Security->Firewall and click "Stop". Don't forget to turn it back on when you're done testing.
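If you want to confirm whether anything on port 3000 actually speaks TLS before touching the firewall, a quick probe with the Python standard library will reproduce the handshake result:

import socket, ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False          # local test only; do not verify the certificate
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection(("localhost", 3000), timeout=5) as raw:
    try:
        with ctx.wrap_socket(raw, server_hostname="localhost") as tls:
            print("TLS handshake succeeded:", tls.version())
    except ssl.SSLError as exc:
        print("port 3000 does not appear to speak TLS:", exc)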