Socket.io client fails to reconnect to server after an abrupt disconnection - iOS

I am using the socket.io Node.js server library and the Swift client library. The majority of the time the client successfully reconnects to the server after a disconnection, but intermittently we see abrupt disconnections after which the client is never able to reconnect.
In the server logs I can see the client sending a connection attempt at the configured retry interval, but it never successfully establishes the connection, and then we get a ping timeout.
There is surprisingly little support available for Socket.io, which makes this extremely difficult to solve.

I figured out a solution to our problem by forcing a new engine to be created in the client upon reconnection. When creating the SocketIOClient object, set the forceNew option to true, which lets the client create a new engine and thus always establish the connection successfully.
return SocketIOClient(socketURL: socketURL, config: [.forceNew(true)])
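For context, here is a slightly fuller sketch of that setup, assuming the same SocketIOClient(socketURL:config:) initializer shown above (newer releases of Socket.IO-Client-Swift create the engine through a SocketManager instead); the URL, retry interval, and event handlers are illustrative placeholders:

import SocketIO

// Build a client that forces a fresh engine on every connection attempt,
// so a stale engine cannot block reconnection. The URL is a placeholder.
let socketURL = URL(string: "https://example.com")!
let socket = SocketIOClient(socketURL: socketURL,
                            config: [.forceNew(true),    // create a new engine each time
                                     .reconnects(true),  // keep retrying after a drop
                                     .reconnectWait(3)]) // seconds between attempts

socket.on("connect") { _, _ in
    print("connected")
}

socket.on("disconnect") { data, _ in
    print("disconnected: \(data)")
}

socket.connect()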

Related

SIM800L GPRS disconnect issue

I need to decide which GSM module to use for my application. The application has to communicate with a server continuously, sending and receiving strings, and needs to keep running without interruption.
At the moment I am using the SIM800L module. It works fine and connects to the server, but after a couple of minutes it disconnects and never reconnects; I have to reset it manually.
How can I resolve this issue?

Keeps logging "Missing server connection for kCTConnectionInvalidatedNotification"

When my project runs the face recognition module, it keeps outputting:
Missing server connection for kCTConnectionInvalidatedNotification.
What could be causing this issue, and how can I resolve it?
If you are using a socket connection, you have to pay attention to the following:
Send a heartbeat packet to your server periodically.
When the client loses the connection, reconnect to your server (unless you don't want to).
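To illustrate both points, here is a minimal heartbeat-and-reconnect sketch using URLSessionWebSocketTask; this is purely an assumption, since the question does not say which socket library is in use, and the endpoint URL and ping interval are placeholders:

import Foundation

// Illustrative only: send a heartbeat every 20 seconds and reconnect when it fails.
final class HeartbeatSocket {
    private var task: URLSessionWebSocketTask?
    private let url = URL(string: "wss://example.com/socket")!  // placeholder endpoint

    func connect() {
        task = URLSession.shared.webSocketTask(with: url)
        task?.resume()
        schedulePing()
    }

    private func schedulePing() {
        DispatchQueue.main.asyncAfter(deadline: .now() + 20) { [weak self] in
            self?.task?.sendPing { error in
                if error != nil {
                    self?.connect()        // heartbeat failed: the connection is gone, reconnect
                } else {
                    self?.schedulePing()   // still alive: schedule the next heartbeat
                }
            }
        }
    }
}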

Wildfly HornetQ Remote HTTP connection memory leak

I am running a Wildfly 8.2 instance with HornetQ messaging remotely accessible via HTTPS on port 8185.
For testing the connection I am running a client on the same machine, connecting via https-remoting://localhost:8185
From the client's point of view everything works fine: connecting, sending/receiving messages, and closing the connection.
On the server side everything works fine at first, too. However, after the period set in "connection-ttl" of the RemoteConnectionFactory has passed, the server logs the following lines:
2015-09-03 17:05:49,152 WARN [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212037: Connection failure has been detected: HQ119014: Did not receive data from /192.168.160.83:63937. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
2015-09-03 17:05:49,154 INFO [org.hornetq.core.server] (hornetq-failure-check-thread) HQ221021: failed to remove connection
Final result after testing for a longer time (clients connecting, sending/receiving messages and closing the connection every 1-2 seconds):
Wildfly consumes more and more heap memory and finally stops working with an OutOfMemoryError.
As mentioned, the connections are always explicitly closed by the client, and at closing time no error is logged on either the client or the server side. It seems that the "hornetq-failure-check-thread" just never gets informed that the connection has already been closed.
Any help with this issue is appreciated!

Check Worklight Client connection state

I'm working on a native iOS app that uses IBM Worklight server adapters. In my code, every time I want to invoke a procedure I call WLClient().wlConnectWithDelegate(self) and then call the adapter. Is there a way to check the connection status of the client before I invoke the adapter procedure?
There is no such API provided by the Worklight framework.
The idea behind the connect API is to establish a session between the client and the server, preventing, for example, race conditions (such as two adapter requests to the server each getting its own session, potentially causing trouble), in addition to delivering data in headers that is not available in an adapter request compared to a connect request.
I think that instead of making a connect request before every invocation, you can do it at an early stage in the app's lifecycle, as well as whenever the app returns to the foreground, to ensure that a session has been established. Couple this with an appropriate session timeout set in worklight.properties on the server side.
More here: https://developer.ibm.com/mobilefirstplatform/documentation/getting-started-7-0/hello-world/connecting-to-the-mobilefirst-server/
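A minimal sketch of that lifecycle approach, reusing the WLClient().wlConnectWithDelegate(_:) call from the question; the WLDelegate callback names follow the Worklight Objective-C API as I recall it, so verify them against your SDK version:

import UIKit
// Assumes the Worklight SDK is linked and exposed via the bridging header.

class AppDelegate: UIResponder, UIApplicationDelegate, WLDelegate {

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        WLClient().wlConnectWithDelegate(self)   // establish the session early in the lifecycle
        return true
    }

    func applicationWillEnterForeground(_ application: UIApplication) {
        WLClient().wlConnectWithDelegate(self)   // re-establish the session after backgrounding
    }

    // WLDelegate callbacks (names assumed from the Objective-C onSuccess:/onFailure: methods)
    func onSuccess(_ response: WLResponse!) { /* session established */ }
    func onFailure(_ response: WLFailResponse!) { /* handle the failed connect */ }
}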

Losing messages over lost connection xmpp

I went through this question:
Lost messages over XMPP on device disconnected
but there is no answer.
When a connection is lost due to some network issue, the server is not able to recognize it and keeps sending messages to the disconnected receiver; those messages are permanently lost.
I have a workaround in which I ping the client from the server; when the client gets disconnected, the server recognizes it after 10 seconds and saves further messages in a queue, preventing them from being lost.
My question is: can 100% fail-safe message delivery be achieved some other way? I know Psi and many other XMPP clients are doing it.
On the iOS side I am using XMPPFramework.
One way is to employ Advanced Message Processing (AMP) on your server; another is to employ Message Delivery Receipts on your clients.
The former requires an AMP-enabled server implementation, and the initiating client has to be able to tell the server what kind of delivery status reports it wants (for instance, that an error should be returned if delivery is not possible). Note that this is not bullet-proof anyway, as there is a window between the moment the target client loses its connectivity with the server and the moment the TCP stack on the server's machine detects this and tells the server about it. During this window, everything sent to the client is considered by the server to have been sent okay, because there is no concept of message boundaries in the TCP layer: if the server process managed to stuff a message stanza's XML into the system buffers of its TCP connection, it considers that stanza to be sent, and there is no way for it to know which bits of its stream did not reach the receiver once the TCP stack reports the connection as lost.
The latter is bullet-proof, as the clients rely on explicit notifications about message reception. This does increase chattiness, though. In return, no server support for this feature is required; it is implemented solely in the clients.
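For the client-side receipts route (XEP-0184), XMPPFramework, which the question says is in use, ships a delivery-receipts module. A sketch follows, with class and property names quoted from memory, so treat them as assumptions to verify against your XMPPFramework version:

import XMPPFramework

// Illustrative XEP-0184 setup: request receipts for outgoing messages and
// automatically answer receipt requests from other clients.
let xmppStream = XMPPStream()
let receipts = XMPPMessageDeliveryReceipts(dispatchQueue: DispatchQueue.main)
receipts.autoSendMessageDeliveryReceipts = true   // reply to incoming receipt requests
receipts.autoSendMessageDeliveryRequests = true   // add a receipt request to outgoing messages
receipts.activate(xmppStream)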
Go with XEP-0198 and enjoy:
http://xmpp.org/extensions/xep-0198.html
For an XMPP client I'm working on, the following mechanism is used:
Add Reachability to the project, to detect quickly when the phone is having connectivity problems.
Use a modified version of XEP-0198, adding a confirmation sent by the server. So, the client sends a message and the server confirms it with a receipt; later on, the receiving user also confirms it with a receipt. For each message you send, you get two confirmations: one from the server and one from the client. This requires modifications on the server, of course.
When the app is not connected to the XMPP server, messages are queued.
When the app is logged in to the XMPP server again, it takes all messages that were not confirmed by the server and sends them again.
For this to work, you have to store the messages locally in the app with three possible states: "Not sent", "Confirmed by server", "Confirmed by user".
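A rough sketch of the local bookkeeping described above; this is purely illustrative, the type names are made up, and a real implementation would persist the queue and hook into the XMPP stream callbacks:

import Foundation

// Hypothetical store for the three-state confirmation scheme.
enum DeliveryState {
    case notSent
    case confirmedByServer
    case confirmedByUser
}

struct OutgoingMessage {
    let id: String              // stanza id, used to match receipts
    let body: String
    var state: DeliveryState = .notSent
}

final class MessageQueue {
    private var messages: [String: OutgoingMessage] = [:]

    func enqueue(_ message: OutgoingMessage) {
        messages[message.id] = message
    }

    // Called when a receipt arrives, either from the server or from the receiving user.
    func markConfirmed(id: String, byUser: Bool) {
        messages[id]?.state = byUser ? .confirmedByUser : .confirmedByServer
    }

    // After reconnecting, resend everything the server never confirmed.
    func messagesToResend() -> [OutgoingMessage] {
        return messages.values.filter { $0.state == .notSent }
    }
}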
