Kill off connections that are not consuming RabbitMQ - iOS

I am having a bit of an issue where, if the client crashes or loses connectivity, it does not properly close the connection. This results in several idle connections that keep adding up. The client I am using, rabbitmq-c, does not support heartbeats yet, so I cannot use that as a possible solution.
Is there any way to kill off connections that are not currently consuming from a queue?

I found out the client was not sending a timeout value to the server, so the connections were always kept open. I also updated the client to support heartbeats, so there are now two ways stale connections can be killed instead of being kept open.
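For anyone hitting the same thing: versions of rabbitmq-c with heartbeat support let you request an interval through the heartbeat argument of amqp_login, and the broker then closes connections that stop answering. A minimal sketch (host, credentials, and the 30-second interval are placeholders):

#include <amqp.h>
#include <amqp_tcp_socket.h>
#include <cstdio>

int main() {
    amqp_connection_state_t conn = amqp_new_connection();
    amqp_socket_t *sock = amqp_tcp_socket_new(conn);
    if (!sock || amqp_socket_open(sock, "localhost", 5672) != AMQP_STATUS_OK) {
        std::fprintf(stderr, "could not open TCP connection\n");
        return 1;
    }
    // The fifth argument requests a 30-second heartbeat interval; the broker
    // will close connections that stop answering instead of leaving them idle.
    amqp_rpc_reply_t reply = amqp_login(conn, "/", 0, 131072, 30,
                                        AMQP_SASL_METHOD_PLAIN, "guest", "guest");
    if (reply.reply_type != AMQP_RESPONSE_NORMAL) {
        std::fprintf(stderr, "login failed\n");
        return 1;
    }
    // ... declare queues, consume, etc. ...
    amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
    amqp_destroy_connection(conn);
    return 0;
}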

Related

Guacamole Disconnect Session on User Idle

I have googled a lot on this question but haven't found an answer. I have a Guacamole server that connects to a local VNC session, and I would like it to disconnect the user session if it detects no activity for an hour.
What I've tried
Used xprintidle to show the idle time of the user. This works, so it will go into a script that could later be used to terminate an idle session (see the sketch below).
I have looked at Guacamole's api-session-timeout and set it to one minute, hoping it would not reconnect to the same session once the VNC server stops abruptly. This did not work.
I have tried using Guacamole's user-mapping.xml to specify the parameter "autoretry" and set it to "1". Restarted, and this did not work; I'm thinking it's not used this way.
I went to the Guacamole Postgres database and manually inserted an entry for autoretry into the parameter table. Restarted, but this did not work.
I went to the VNC side (I use TurboVNC) and looked at the -idletimeout flag and the max-idle-timeout configuration option. These terminate the TurboVNC service when there are no active connections, which is not what I am after; I'm trying to disconnect the session only when the user is idle.
I figured the VNC side would not work anyway, because even if the VNC session is terminated, Guacamole keeps retrying forever to reconnect to that session.
From some posts on the Guacamole mailing list it seems that disabling auto reconnect is not possible without a recompile from source.
Is there a way to disconnect an active session after an idle timeout period? Or maybe a way to stop Guacamole from reconnecting?
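To flesh out the xprintidle attempt from the list above, here is a minimal watchdog sketch. The script path is a placeholder for whatever command actually ends the session in your setup, since Guacamole itself offers no obvious hook here, and DISPLAY must point at the VNC session's display:

#include <cstdio>
#include <cstdlib>
#include <chrono>
#include <thread>

// Read the current X idle time in milliseconds by invoking xprintidle.
static long read_idle_ms() {
    FILE *p = popen("xprintidle", "r");
    if (!p) return -1;
    char buf[64] = {0};
    long ms = -1;
    if (fgets(buf, sizeof(buf), p)) ms = std::strtol(buf, nullptr, 10);
    pclose(p);
    return ms;
}

int main() {
    const long threshold_ms = 60L * 60L * 1000L;   // one hour of inactivity
    for (;;) {
        if (read_idle_ms() >= threshold_ms) {
            // Placeholder: run whatever actually ends the session in your
            // setup (e.g. tearing down the user's VNC session).
            std::system("/usr/local/bin/disconnect-idle-session.sh");
            break;   // one-shot for simplicity
        }
        std::this_thread::sleep_for(std::chrono::minutes(1));
    }
    return 0;
}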

Hybrid Connection keeps on disconnecting

Our hybrid connection to the server has been disconnecting every month or two. What we usually do is disconnect and reconnect the connection to fix it. Do hybrid connections auto-reconnect when this happens? Since it keeps disconnecting every month or two, I would also like to know where the main problem is occurring.
Does activity logging in Azure provide exception logging?

Twilio WebRTC TURN relay randomly stops working after a few minutes

I am using the Twilio Network Traversal Service as part of a native application I am working on to perform peer-to-peer remote desktop connections. We implement a subset of the WebRTC protocol stack that is equivalent to the WebRTC data channels (not the WebRTC video and audio protocols). When using a TURN relay, the TURN allocation seems to be invalidated randomly somewhere between a few minutes and a maximum of 12 minutes from the session start. This issue looks very similar to this one, but the proposed workaround (sending silent audio) is not acceptable in my case, since I do not implement the WebRTC audio/video protocols.
I have been pulling my hair out over this problem for the last two weeks, and have isolated the issue to the Twilio service itself. To compare, I used a web-based WebRTC data channel demo using firefox and the Xirsys TURN server cloud. I have wireshark captures showing firefox getting disconnected with Twilio just like my native application, while the exact same firefox demo doesn't get disconnected when using the Xirsys servers.
I was using Xirsys originally, but I experienced some instability with their service that made me switch to Twilio, which is why I would rather have Twilio fix this issue than go back to Xirsys. At a minimum, I would like to have two WebRTC hosting providers to choose from that I know should work fine. This is why I am taking the time to explain the issue in detail so it can get fixed.
Here are two wireshark captures (with the peer-to-peer data messages filtered out) showing firefox using WebRTC data channels and the Twilio TURN relay servers:
The traffic stops being relayed after 4 minutes in the first capture, and after about 11 minutes in the second capture. In both captures, firefox detects that traffic stops being relayed (at the data channel level) and attempts a graceful disconnection by sending a Refresh request packet with a lifetime of zero. Both graceful disconnections result in a 437 Allocation Mismatch error, indicating that the server doesn't even know about the allocation firefox is trying to close gracefully.
With my native application, this would often take the form of a CreatePermission Request message that fails with a 438 "Wrong nonce" error, which is basically what should happen if a client tries to update the permission on an allocation that no longer exists. The error code 438 usually means "Stale nonce", which is not really an error, but an indication that the nonce has expired and the client should try again using the new nonce contained in the "error" message. It took me a while to figure out, but even if the error code is 438, the error string is not the same. I have observed a true stale nonce error with Xirsys and successfully updated my permission with the new nonce from the error response, so I know I can properly handle this case in my implementation.
Here is the source code for the WebRTC data channel demo I have used:
https://github.com/devolutions/webrtc-demo
For comparison, here is the same firefox data channel demo using the Xirsys TURN server cloud:
In this capture, I have let the demo run for about 16 minutes (it works for much longer than that, the longest I have tried is two hours). We can see that the traffic keeps getting relayed for the entire duration of the session, and CreatePermission requests keep getting sent by firefox with success. At the end, the graceful disconnection is triggered by firefox closing the WebRTC data channel (instead of being closed due to traffic no longer being relayed). As opposed to the Twilio captures, the Refresh request with a lifetime of zero is successful: the Xirsys TURN server still knows about the allocation and sends back a success response, as expected.
It should be noted that the ICMP unreachable errors are normal because I think in this case firefox is no longer listening on the given port when the response comes back. In other words, it sends the Refresh request with a lifetime of zero and doesn't wait for the answer to come back.
For the time being, I have no other choice but to go back to Xirsys, but I would really like it if the Twilio Network Traversal Service could be fixed. Let me know if you have more questions regarding the issue.
I have uploaded the wireshark captures here for reference.
EDIT: I have modified the webrtc demo page such that it doesn't close the connection when the ice connection state is set to 'disconnected'. Now I get the real disconnection when the ice connection state goes to 'failed'. However, it effectively didn't change anything, since in this case it takes just a few seconds more for the state to go from 'disconnected' to 'failed'.
Since I have new relevant screenshots and captures, I am updating the original question to clarify certain problems pointed out by Philipp Hancke:
First, here is a new capture with the ice connection state fix (the browser closes the connection only when the state goes to 'failed'):
It's interesting to see that this time, the session stayed up for a whole 18 minutes. This was taken on a Saturday morning, so I'm guessing the issue could be related to the current workload on the Twilio servers. However, it failed in exactly the same way it always has for me so far. As a bonus, we even have a valid stale nonce response that is correctly handled by firefox.
However, if we take a different view of the same capture, we can see that the traffic stops being relayed for a solid 30 seconds before firefox considers the connection as being dropped and sends the Refresh request with a lifetime of zero. As in previous captures, the server responds with an Allocation Mismatch error, indicating it doesn't know which allocation firefox is talking about.
The last eight packets being sent are of the same size, so my guess is that they are retransmissions. After 30 seconds of retransmissions, it is likely that SCTP considers the transport as being dropped.
With regards to the refresh request with a lifetime of zero, I did a test where I close the connection early on, from the browser. In this case, the server recognizes the allocation and returns a success response:
The allocation mismatch is the easiest symptom to observe, but in my testing with my native application, I have seen similar errors with Refresh requests for non-zero lifetimes, and with CreatePermission requests (438 "Wrong nonce" error). However, since the browser closes the connection after 30 seconds of data not being relayed, it is hard to observe these errors with the current webrtc demo. If we could change that timeout to 10 minutes, we would see those errors as well.
Excellent problem description!
Without the server log it is hard to determine what goes wrong. I tried with the appear.in TURN servers, which run an up-to-date version of coturn, and they show the same behaviour as the Twilio servers. Xirsys seems to be running a custom version of coturn (the software field reports Coturn-0.5 'Xirsys Turn Services', but coturn never had such a version).
In both captures, firefox detects that traffic stops being relayed (at the data channel level) and attempts a graceful disconnection by sending a Refresh request packet with a lifetime of zero.
Not quite. A refresh request with a lifetime of 0 is used to discard an allocation. At that point it does not matter what the server returns as the connection is beyond repair anyway.
This is caused by peerjs closing the peerconnection if the iceconnectionstate changes to disconnected, here in your bundled library version.
This is overly aggressive (and does not even fix things), and we've had a discussion about what the specification should do with regard to trying to fix things with an ice restart here, which also links to a great explanation of the disconnected state.
The disconnected state probably happens because a few packets get lost. But this is something that can happen when there is minor congestion. I'd recommend removing the pc.close() in the disconnected case.
If you are looking for other TURN providers, Tokbox provides the same service. For datachannels the latency of a properly run distributed TURN network does not matter as much as for VoIP so you might run your own servers in a single location instead.

Socket.io client fails to reconnect to server after an abrupt disconnection

I am using the socket.io Node.js server library and the Swift client library. The majority of the time the client successfully reconnects to the server after a disconnection; however, intermittently we see abrupt disconnections after which the client is never able to reconnect.
In the server logs, I see the client sending a connection attempt at the defined retry interval, but it never successfully establishes the connection, and then we get a ping timeout.
There is surprisingly little support for Socket.io, which makes this extremely difficult to solve.
I figured out a solution to our problem by forcing a new engine to be created in the client upon reconnection. When creating the SocketIOClient object, set the forceNew option to true, which makes the client create a new engine and thus always successfully establish the connection.
return SocketIOClient(socketURL: socketURL, config: [.forceNew(true)])

Sometimes socket dies when switching wifi to 3g iOS posix sockets

I've searched the web extensively but haven't found a good answer to this.
I'm writing a socket-based application in C++ using POSIX sockets on iOS/Android.
When switching from wifi->3g, SOMETIMES the socket goes dead without giving any errors when reading/writing.
I can use the reachability API on iOS (and similar on android) to detect when the network switches.
I am destroying and recreating the socket when this occurs. The problem is that if the socket is still alive, the server receives the close when I shut it down; it then assumes the client disconnected intentionally and notifies others about this, which is not what I want. If the socket is already dead, the server never sees the close and everything is OK.
How do other people handle this scenario? I really don't want to use a timeout to detect this.
Why does it only sometimes die too? And how can I tell the socket is actually dead?
Just to close this issue, this is the approach I'm taking.
When switching networks, I'm sending a ping-and-reconnect packet to the server, AND creating a new socket in parallel.
Whichever responds first, I close the other connection.
It required a few server-side changes to handle this correctly too.
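A rough sketch of that race in C++, assuming a TCP connection and a one-byte application-level ping/pong (the 'P' byte, the 5-second window, and the function names are illustrative, not the actual protocol):

#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

// Open a non-blocking TCP connection; completion is detected later via select().
static int connect_nonblocking(const sockaddr_in &addr) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    fcntl(fd, F_SETFL, O_NONBLOCK);
    connect(fd, reinterpret_cast<const sockaddr *>(&addr), sizeof(addr));
    return fd;
}

// Race the old (possibly dead) connection against a fresh one and return the
// winner, closing the loser.
int race_reconnect(int old_fd, const sockaddr_in &server_addr) {
    const char ping = 'P';
    send(old_fd, &ping, 1, 0);              // silently goes nowhere if the socket is dead

    int new_fd = connect_nonblocking(server_addr);
    if (new_fd < 0) return old_fd;          // could not even create a socket; keep the old one

    fd_set rfds, wfds;
    FD_ZERO(&rfds); FD_ZERO(&wfds);
    FD_SET(old_fd, &rfds);                  // old socket: wait for the pong
    FD_SET(new_fd, &wfds);                  // new socket: wait for connect() to complete
    timeval tv{5, 0};                       // give both 5 seconds to respond

    int nfds = (old_fd > new_fd ? old_fd : new_fd) + 1;
    int ready = select(nfds, &rfds, &wfds, nullptr, &tv);

    char pong;
    if (ready > 0 && FD_ISSET(old_fd, &rfds) && recv(old_fd, &pong, 1, 0) == 1) {
        close(new_fd);                      // old connection answered first: keep it
        return old_fd;
    }

    close(old_fd);                          // old connection is dead or lost the race
    int err = 0; socklen_t len = sizeof(err);
    getsockopt(new_fd, SOL_SOCKET, SO_ERROR, &err, &len);
    if (err != 0) { close(new_fd); return -1; }  // reconnect failed as well
    return new_fd;
}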
