Twitter connection hangs after hours of streaming

Currently I am working with the Streaming API (https://stream.twitter.com/1/statuses/filter.json).
After the connection succeeds, I record every tweet from the server into a database. The problem only appears after several hours or days of running without issue: suddenly no more tweets can be retrieved. If I restart the client, processing resumes as normal and everything runs fine until the next hang.

One workaround is to restart the service at a fixed interval, say every 3 hours, so it stays connected and the issue is minimized.
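A slightly smarter variant is to watch for the stream going quiet and reconnect only then, rather than restarting blindly on a timer. Below is a minimal Swift sketch of such a watchdog; StreamClient is a hypothetical wrapper around the filter.json connection, and the 90-second staleness threshold is an assumption to tune against your filter's expected tweet rate.

    import Foundation

    // Hypothetical streaming client wrapper (stub for illustration only).
    final class StreamClient {
        var onTweet: ((Data) -> Void)?
        func start() { /* open the filter.json connection, call onTweet per tweet */ }
        func stop()  { /* close the connection */ }
    }

    // Watchdog: restart the stream if no tweet arrives for `staleAfter` seconds.
    final class StreamWatchdog {
        private let client: StreamClient
        private let staleAfter: TimeInterval
        private var lastTweetAt = Date()
        private var timer: Timer?

        init(client: StreamClient, staleAfter: TimeInterval = 90) {
            self.client = client
            self.staleAfter = staleAfter
        }

        func start() {
            client.onTweet = { [weak self] _ in self?.lastTweetAt = Date() }
            client.start()
            // Every 30 s, check whether the stream has gone quiet.
            timer = Timer.scheduledTimer(withTimeInterval: 30, repeats: true) { [weak self] _ in
                guard let self = self else { return }
                if Date().timeIntervalSince(self.lastTweetAt) > self.staleAfter {
                    self.client.stop()          // stale connection: tear down...
                    self.lastTweetAt = Date()   // ...reset the clock...
                    self.client.start()         // ...and reconnect
                }
            }
        }
    }

On a quiet filter, the streaming API's periodic keep-alive newlines can serve as the liveness signal instead of tweets, which avoids needless restarts.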

Related

Twilio WebRTC TURN relay randomly stops working after a few minutes

I am using the Twilio Network Traversal Service as part of a native application I am working on to perform peer-to-peer remote desktop connections. We implement a subset of the WebRTC protocol stack that is equivalent to the WebRTC data channels (not the WebRTC video and audio protocols). When using a TURN relay, the TURN allocation seems to be invalidated randomly somewhere between a few minutes and a maximum of 12 minutes from the session start. This issue looks very similar to this one, but the proposed workaround (sending silent audio) is not acceptable in my case, since I do not implement the WebRTC audio/video protocols.
I have been pulling my hair out over this problem for the last two weeks, and have isolated the issue to the Twilio service itself. To compare, I have used a web-based WebRTC data channel demo using firefox and the Xirsys TURN server cloud. I have wireshark captures showing firefox getting disconnected with Twilio just like my native application, while the exact same firefox demo doesn't get disconnected when using the Xirsys servers.
I was using Xirsys originally, but I experienced some instability with their service that made me switch to Twilio, which is why I would rather have Twilio fix this issue instead of going back with Xirsys. At the bare minimum, I would rather have two WebRTC hosting providers I can choose from that I know should work fine. This is why I am taking the time to explain the issue in detail so it can get fixed.
Here are two wireshark captures (with the peer-to-peer data messages filtered out) showing firefox using WebRTC data channels and the Twilio TURN relay servers.
The traffic stops being relayed after 4 minutes in the first capture, and after about 11 minutes in the second capture. In both captures, firefox detects that traffic stops being relayed (at the data channel level) and attempts a graceful disconnection by sending a Refresh request packet with a lifetime of zero. Both graceful disconnections result in a 437 Allocation Mismatch error, indicating that the server doesn't even know about the allocation firefox is trying to close gracefully.
With my native application, this would often take the form of a CreatePermission Request message that fails with a 438 "Wrong nonce" error, which is basically what should happen if a client tries to update the permission on an allocation that no longer exists. The error code 438 usually means "Stale nonce", which is not really an error, but an indication that the nonce has expired and the client should try again using the new nonce contained in the "error" message. It took me a while to figure out, but even if the error code is 438, the error string is not the same. I have observed a true stale nonce error with Xirsys and successfully updated my permission with the new nonce from the error response, so I know I can properly handle this case in my implementation.
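To make the distinction concrete, here is a small Swift sketch of the recovery logic just described. TurnErrorResponse is a hypothetical decoded error message (a real client would parse the STUN ERROR-CODE and NONCE attributes itself); the point is that a 438 "Stale nonce" is retryable with the fresh nonce, while a 438 "Wrong nonce" or a 437 Allocation Mismatch means the allocation is gone.

    // Sketch of the recovery logic described above. TurnErrorResponse is a
    // hypothetical decoded STUN error; a real client parses the ERROR-CODE
    // and NONCE attributes from the response itself.
    struct TurnErrorResponse {
        let code: Int          // e.g. 437, 438
        let reason: String     // e.g. "Stale nonce" vs "Wrong nonce"
        let newNonce: String?  // NONCE attribute, if present
    }

    enum TurnRecovery {
        case retryWithNonce(String)  // benign: resend using the fresh nonce
        case reallocate              // allocation is gone: start over
    }

    func recovery(for error: TurnErrorResponse) -> TurnRecovery {
        // A true stale nonce is not an error: retry with the new nonce.
        if error.code == 438, error.reason == "Stale nonce", let nonce = error.newNonce {
            return .retryWithNonce(nonce)
        }
        // 438 "Wrong nonce" or 437 Allocation Mismatch: the server no longer
        // knows the allocation, so the only option is to allocate again.
        return .reallocate
    }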
Here is the source code for the WebRTC data channel demo I have used:
https://github.com/devolutions/webrtc-demo
For comparison, here is the same firefox data channel demo using the Xirsys TURN server cloud.
In this capture, I have let the demo run for about 16 minutes (it works for much longer than that, the longest I have tried is two hours). We can see that the traffic keeps getting relayed for the entire duration of the session, and CreatePermission requests keep getting sent by firefox with success. At the end, the graceful disconnection is triggered by firefox closing the WebRTC data channel (instead of being closed due to traffic no longer being relayed). As opposed to the Twilio captures, the Refresh request with a lifetime of zero is successful: the Xirsys TURN server still knows about the allocation and sends back a success response, as expected.
It should be noted that the ICMP unreachable errors are normal because I think in this case firefox is no longer listening on the given port when the response comes back. In other words, it sends the Refresh request with a lifetime of zero and doesn't wait for the answer to come back.
For the time being, I have no other choice but to go back with Xirsys, but I would really like if the Twilio Network Traversal Service could be fixed. Let me know if you have more questions regarding the issue.
I have uploaded the wireshark captures here for reference.
EDIT: I have modified the webrtc demo page such that it doesn't close the connection when the ice connection state is set to 'disconnected'. Now I get the real disconnection when the ice connection state goes to 'failed'. However, it effectively didn't change anything, since in this case it takes just a few seconds more for the state to go from 'disconnected' to 'failed'.
Since I have new relevant screenshots and captures, I am updating the original question to clarify certain problems pointed out by Philipp Hancke:
First, here is a new capture with the ice connection state fix (the browser closes the connection only when the state goes to 'failed').
It's interesting to see that this time, the session stayed up for a whole 18 minutes. This was taken on a Saturday morning, so I'm guessing the issue could be related to the current workload on the Twilio servers. However, it failed in exactly the same way it always has for me. As a bonus, we even get a valid stale nonce response that is correctly handled by firefox.
However, if we take a different view of the same capture, we can see that the traffic stops being relayed for a solid 30 seconds before firefox considers the connection as being dropped and sends the Refresh request with a lifetime of zero. As in previous captures, the server responds with an Allocation Mismatch error, indicating it doesn't know which allocation firefox is talking about.
The last eight packets being sent are of the same size, so my guess is that they are retransmissions. After 30 seconds of retransmissions, it is likely that SCTP considers the transport as being dropped.
With regard to the refresh request with a lifetime of zero, I did a test where I close the connection early on, from the browser. In this case, the server recognizes the allocation and returns a success response.
The allocation mismatch is the easiest symptom to observe, but in my testing with my native application, I have seen similar errors with Refresh requests for non-zero lifetimes, and with CreatePermission requests (438 "Wrong nonce" error). However, since the browser closes the connection after 30 seconds of data not being relayed, it is hard to observe these errors with the current webrtc demo. If we could change that timeout to 10 minutes, we would see those errors as well.
Excellent problem description!
Without the server logs it is hard to determine what goes wrong. I tried with the appear.in TURN servers, which run an up-to-date version of coturn, and they show the same behaviour as the Twilio servers. Xirsys seems to be running a custom version of coturn (the software field reports Coturn-0.5 'Xirsys Turn Services', but coturn never had such a version).
Quoting the question: "In both captures, firefox detects that traffic stops being relayed (at the data channel level) and attempts a graceful disconnection by sending a Refresh request packet with a lifetime of zero."
Not quite. A refresh request with a lifetime of 0 is used to discard an allocation. At that point it does not matter what the server returns as the connection is beyond repair anyway.
This is caused by peerjs closing the peerconnection if the iceconnectionstate changes to disconnected, here in your bundled library version.
This is overly aggressive (and does not even fix things), and we've had a discussion about what the specification should do with regard to trying to fix things with an ICE restart here, which also links to a great explanation of the disconnected state.
The disconnected state probably happens because a few packets get lost. But this is something that can happen when there is minor congestion. I'd recommend removing the pc.close() in the disconnected case.
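The same policy carries over to a native implementation. Here is a framework-agnostic Swift sketch (the types are hypothetical; a real app would wire this to its WebRTC stack's ICE connection state callback): treat disconnected as transient and tear down only on failed.

    // Framework-agnostic sketch of the recommendation: "disconnected" may be
    // transient (a little packet loss), so do not close there; only tear down
    // once ICE actually reaches "failed". Types here are hypothetical.
    enum IceConnectionState {
        case new, checking, connected, completed, disconnected, failed, closed
    }

    final class ConnectionSupervisor {
        var close: () -> Void = {}  // hook this to pc.close() or equivalent

        func iceConnectionStateChanged(to state: IceConnectionState) {
            switch state {
            case .disconnected:
                // Possibly minor congestion; wait it out, or attempt an
                // ICE restart if the state persists.
                break
            case .failed:
                close()  // ICE gave up: closing and renegotiating is reasonable
            default:
                break
            }
        }
    }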
If you are looking for other TURN providers, Tokbox provides the same service. For datachannels the latency of a properly run distributed TURN network does not matter as much as for VoIP so you might run your own servers in a single location instead.

Objective-C - How to prevent session id reuse when the app is terminated?

My main question is: how can I detect termination of the application by the end user while it is in the background (suspended), so that I can send a logout request to the server?
We already have a timeout interval on the server to kill the session, but assume the interval is 5 minutes: the session will then stay alive for 5 minutes after the user terminates the app, and anyone who sniffs the data can reuse the session id.
Notes:
We use an HTTPS connection and SSL certificate pinning.
We also implemented a heartbeat web service that the client app calls every fixed interval to tell the server to keep the session alive for that interval; if this web service isn't called for a specific session, the server kills that session.
Once your app is suspended you don't get any further notice before you are terminated. There is no way to do what you want.
Plus, the user could suspend your app to do something else (like play a game) and then not go back to your app for DAYS.
If you want to log out when the user leaves your app, do it when the app enters the background (applicationDidEnterBackground:). Ask for more background time and send a logout right then and there.
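A minimal Swift sketch of that approach, assuming it lives in your UIApplicationDelegate; the logout endpoint is hypothetical. As noted above, this fires when the app goes to the background, which is the last reliable hook you get before a possible termination.

    import UIKit

    // Sketch: when the app backgrounds, ask for extra background time and
    // fire the logout request immediately. The endpoint is hypothetical.
    func applicationDidEnterBackground(_ application: UIApplication) {
        var taskID = UIBackgroundTaskIdentifier.invalid
        taskID = application.beginBackgroundTask {
            // Time ran out before the request finished: just clean up.
            application.endBackgroundTask(taskID)
            taskID = .invalid
        }

        var request = URLRequest(url: URL(string: "https://example.com/api/logout")!)
        request.httpMethod = "POST"
        URLSession.shared.dataTask(with: request) { _, _, _ in
            // Success or not, release the background task.
            application.endBackgroundTask(taskID)
            taskID = .invalid
        }.resume()
    }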
Mohamed Amer,
Here is an approach used by the Quickblox server, and I feel it's pretty solid, though it involves a little overhead.
Once the client application (either iOS or Android) establishes a session with the Quickblox server, the server expects the client application to keep sending presence information at a regular interval.
Sending the presence information is pretty simple. They have written an API which we keep hitting every 5 minutes with the session id that we have. They validate the session id, and once it is found valid, they extend the expiration time for the user associated with that id by 5 more minutes.
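A minimal client-side sketch of such a heartbeat in Swift; the endpoint and parameter names are hypothetical, not Quickblox's actual API.

    import Foundation

    // Sketch of the presence heartbeat described above. Endpoint and parameter
    // names are hypothetical.
    final class PresenceHeartbeat {
        private let sessionID: String
        private var timer: Timer?

        init(sessionID: String) { self.sessionID = sessionID }

        func start() {
            // Ping every 5 minutes so the server keeps extending the session.
            timer = Timer.scheduledTimer(withTimeInterval: 300, repeats: true) { [weak self] _ in
                guard let self = self else { return }
                let url = URL(string: "https://example.com/presence?session=\(self.sessionID)")!
                URLSession.shared.dataTask(with: url) { _, _, _ in
                    // A 401/404 here would mean the server already expired the session.
                }.resume()
            }
            timer?.fire() // send the first ping immediately
        }

        func stop() { timer?.invalidate() }
    }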
What I believe they do is one of the following:
Approach 1: they maintain the last hit time, and for every subsequent request they check whether it falls within the 5-minute window. If yes, they simply process it; if the request comes after 5 minutes, they delete the session id for the user and respond saying the session has timed out.
Approach 2: because they provide online/offline info as well, they can't simply depend on incoming requests to delete the session id from the server, so they probably run a background thread that sweeps the DB for entries whose last hit time is more than 5 minutes old, removes them, and declares those user sessions expired.
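Approach 1 amounts to lazy expiry: if session timeout is the only concern, no sweeper is needed, because validity can be checked on each hit. A sketch of that logic (written in Swift here only for consistency with the rest of this page; a real backend would use whatever language the server runs):

    import Foundation

    // Sketch of Approach 1 (lazy expiry): check validity on each hit and
    // extend the window, with no background sweeper involved.
    final class SessionStore {
        private var lastHit: [String: Date] = [:]  // session id -> last request time
        private let ttl: TimeInterval = 300        // 5-minute window

        // Returns false if the session has expired; otherwise extends it.
        func touch(_ sessionID: String) -> Bool {
            let now = Date()
            if let last = lastHit[sessionID], now.timeIntervalSince(last) > ttl {
                lastHit[sessionID] = nil  // expired: forget the session
                return false
            }
            lastHit[sessionID] = now      // fresh or in-window: extend it
            return true
        }
    }

Approach 2's sweeper only becomes necessary because online/offline status must flip even when no request ever arrives.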
Though this involves client apps continuously hitting the server and increases the server's load, for something like a chat application, in which presence information is vital, I believe this overhead is acceptable.
Hope I have provided you with some idea at least :)

HTTP GET requests sometimes time out and I want to find the cause

I am developing an iOS app that makes an API request to my server hosted in Heroku.
In my slow internet connection environment, the API request (via HTTP GET) sometimes results in a timeout. The response time is usually 2000 ms when it doesn't time out.
By "sometimes", I mean about one in 10 requests times out (I do not get any meaningful error code).
I also tested this timeout with 2 devices. When one device is waiting for the server to respond for longer than 2000 ms, I use another device to call the api, to which the server responds normally. But the first device still results in a timeout.
I am not quite sure what is to blame here. My internet connection? My api server on Heroku? I also tested this timeout on Postman and got the same results.
PS. I am based out of Bangkok. The ISP with which I experience the most timeouts is True Broadband.
Any and all advice is appreciated.
Thanks in advance
PPS. In response to comments warning that the question is too broad: let's ask it this way. If our API calls randomly time out, how can we tell whether the cause is a slow internet connection, our own server, or something else?
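One way to answer that from the client side is to break each request into phases with URLSessionTaskMetrics: if DNS or connection setup stalls, suspect the local network or ISP; if the connection is quick but the first response byte is slow, suspect the server. A sketch:

    import Foundation

    // Sketch: URLSessionTaskMetrics breaks a request into phases. If DNS or
    // connect/TLS stalls, suspect the local network or ISP; if the connection
    // is fast but the first response byte is slow, suspect the server.
    final class TimingDelegate: NSObject, URLSessionTaskDelegate {
        func urlSession(_ session: URLSession, task: URLSessionTask,
                        didFinishCollecting metrics: URLSessionTaskMetrics) {
            for t in metrics.transactionMetrics {
                func ms(_ from: Date?, _ to: Date?) -> String {
                    guard let from = from, let to = to else { return "n/a" }
                    return String(format: "%.0f ms", to.timeIntervalSince(from) * 1000)
                }
                print("DNS:    ", ms(t.domainLookupStartDate, t.domainLookupEndDate))
                print("Connect:", ms(t.connectStartDate, t.connectEndDate))
                print("TLS:    ", ms(t.secureConnectionStartDate, t.secureConnectionEndDate))
                print("TTFB:   ", ms(t.requestEndDate, t.responseStartDate))
            }
        }
    }

    // Usage: attach the delegate to the session that performs the GET request.
    let session = URLSession(configuration: .default,
                             delegate: TimingDelegate(),
                             delegateQueue: nil)

Running this on both devices, on both the slow and the fast connection, should show which phase eats the time.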

Ping server when the app is in the background

I need the app to keep sending a simple GET request ("ping") to the server every minute, just so it knows the user is still online. I have no problem doing this while the app is active, and for 10 minutes after it goes into the background, using "beginBackgroundTaskWithExpirationHandler".
But how can I keep the request repeating indefinitely, unless the user terminates the app manually?
That's not really how it works -- it's not a Windows or a Mac app that runs continually in the background.
What you probably want to do is use the background fetch process, whereby iOS will periodically run a particular method allowing you to do network operations while your app is in the background (or not running at all).
What this won't allow you to do is ping your server exactly once every minute. But actually that's a good thing: consuming a user's battery and data allowance like that is not very user-friendly.
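A minimal Swift sketch of the background fetch hook (the pre-iOS 13 API; newer code would use BGTaskScheduler instead). The ping URL is hypothetical, and iOS, not your code, decides when the fetch actually runs.

    import UIKit

    // Opt in once at launch, e.g. in application(_:didFinishLaunchingWithOptions:):
    //     application.setMinimumBackgroundFetchInterval(
    //         UIApplication.backgroundFetchIntervalMinimum)

    // Then iOS calls this on the app delegate at times of its own choosing:
    func application(_ application: UIApplication,
                     performFetchWithCompletionHandler
                     completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
        let pingURL = URL(string: "https://example.com/ping")! // hypothetical endpoint
        URLSession.shared.dataTask(with: pingURL) { _, _, error in
            // Report the outcome so iOS can tune future fetch scheduling.
            completionHandler(error == nil ? .newData : .failed)
        }.resume()
    }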

iPhone network requests are not working sometimes

In my app I'm using MKNetworkKit to make requests to my server and also make direct calls to Facebook using Facebook SDK. Sometimes (once in a few days) I get an odd issue: all network requests from my app stop working (both to my server and Facebook). I can't figure out what steps exactly cause this. When I have the issue I check other apps / Safari from my phone and internet connection is OK. If I close my app, wait for a couple of minutes and open it again then everything goes back to normal. I'm using WiFi for testing. What can cause this?
It seems to be exactly like this question: iPhone app gets into a state where network requests never complete, although I don't use MonoTouch.
UPDATE:
I do receive callbacks from MKNetworkKit saying "Request timeout" (I have a 10-second request timeout in the MKNetworkKit settings). And I don't receive callbacks from Facebook.
