NSURLConnection getting limited to a single connection at a time? - ios

OK, let's rephrase this whole question, shall we?
Is there any way to tell if iOS is holding onto an NSURLConnection after it has finished and returned its data?
I've got two NSURLConnections I'm instantiating and calling into a server with. The first one initiates the connection with the server and then goes into a COMET-style long-polling wait while another user interacts with the request. The second one goes into the server and triggers a cancel mechanism which safely ends the first request and causes both to return successfully with a "Cancelled by you" message.
In the happy-path case the Cancel button will never be clicked, but it's possible to click it and exit the current action.
This whole scenario works GREAT once. And then never works again (until the app is reset).
It's as though the first time through, one of the connections is never released and we are from then on limited to only a single connection because one of them is locked.
BTW I've tried NSURLConnection, AFNetworking, MKNetworkKit, and ASIHTTPRequest - no luck whatsoever with any of these frameworks. NSURLConnection should do what I want. It's just ... not letting go of one of my connections.

I suspect the cancellation request in Step 2 is leaving the HTTP connection open.
I don't know exactly how the NS* classes work with respect to the HTTP/1.1 recommendation of at most two simultaneous connections, but let's assume they're enforcing at most two connections. Let's suppose the triggering code in Instance A (steps 1 and 3 of your example) cleans up after itself, but the cancellation code in Instance B (steps 2 and 4) leaves the connection open. That might explain what you are observing.
If I were you, I'd compare the code that runs in step 1 against the code that runs in step 2. I bet there's a difference between them in terms of the way they clean up after themselves.
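As a concrete illustration of that comparison, here is a minimal sketch assuming the plain NSURLConnection delegate pattern the question uses (shown in Swift for brevity); LongPollClient and the request parameters are hypothetical. The only point is that both the long-poll and the cancel connection get released on every terminal path, finish and failure alike, so neither one stays pinned:

    import Foundation

    // Hypothetical sketch: both connections are tracked, and both are
    // released in every terminal delegate callback.
    final class LongPollClient: NSObject, NSURLConnectionDataDelegate {
        private var pollConnection: NSURLConnection?
        private var cancelConnection: NSURLConnection?

        func start(pollRequest: URLRequest, cancelRequest: URLRequest) {
            pollConnection = NSURLConnection(request: pollRequest, delegate: self, startImmediately: true)
            cancelConnection = NSURLConnection(request: cancelRequest, delegate: self, startImmediately: true)
        }

        func connectionDidFinishLoading(_ connection: NSURLConnection) {
            forget(connection)
        }

        func connection(_ connection: NSURLConnection, didFailWithError error: Error) {
            forget(connection)
        }

        private func forget(_ connection: NSURLConnection) {
            if connection === pollConnection { pollConnection = nil }
            if connection === cancelConnection { cancelConnection = nil }
        }
    }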

If I'm not wrong, iOS/Mac holds on to an NSURLConnection for as long as the "Keep-Alive" header dictates. But as an iOS developer you shouldn't be worried. Any reason why you would like to know that?

Unfortunately, since all my testing failed to turn up a real solution to this issue, I've had to implement simple polling to work around it.
I've also had to implement iOS-only APIs on the server.
What this comes down to is an API to send up a command and put it into a queue on the server, then using an NSTimer on the client to check the status of the queued item at a regular interval.
Until I can find out how to make multiple connections on iOS with long-polling this is the only working solution. Once I have a decent number of points I'll gladly bounty them away for a solution to this :(
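For what it's worth, the polling fallback is simple to sketch. Assuming a hypothetical command-queue API (the status endpoint and the "done" response below are placeholders, not the real server contract), the client side looks roughly like this:

    import Foundation

    // Sketch of the NSTimer-based polling workaround described above.
    final class CommandPoller {
        private var timer: Timer?
        // Placeholder endpoint for checking the status of the queued command.
        private let statusURL = URL(string: "https://example.com/api/command/status?id=123")!

        func start() {
            // Poll the server every couple of seconds instead of long-polling.
            timer = Timer.scheduledTimer(withTimeInterval: 2.0, repeats: true) { [weak self] _ in
                self?.checkStatus()
            }
        }

        func stop() {
            timer?.invalidate()
            timer = nil
        }

        private func checkStatus() {
            URLSession.shared.dataTask(with: statusURL) { [weak self] data, _, _ in
                guard let self = self, let data = data else { return }
                // Parse the queued item's status; stop polling once the server reports it is done.
                if String(data: data, encoding: .utf8) == "done" {
                    DispatchQueue.main.async { self.stop() }
                }
            }.resume()
        }
    }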

Related

Datasnap client callback connection KeepAlive does not work

I followed the guide in Delphi Labs: Datasnap XE - Callbacks (https://edn.embarcadero.com/article/41374).
Callbacks seem to work fine. Yet leaving the clients idle for more than an hour seems to cause the client callbacks to stop working. I changed the server's DSTCPServerTransport.KeepAliveEnabled, .KeepAliveInterval and .KeepAliveTime, but it didn't help in any way.
Does anyone know how I can keep the clients connected over time?
I also use Datasnap callbacks in several applications. My solution was to set up a timer that measures how long it takes for a specific message (e.g. '*ping') sent using BroadCastToChannel to be received by a registered callback on the same channel in the same application. I allow for 5 seconds in a mobile application, and if the echo of my ping isn't received in that time, I assume my callback isn't working anymore. I then do what I call "recycling the callback": I de-register the previous callback (it causes no errors if it fails) and register a new one (my callback IDs are timestamp-based, so they are all unique). My "ping timer" runs at 1-minute intervals, which is often enough for my application(s). This solution would be a lot of code to present here, so I hope my description will help you find a solution that works for you. Ask questions if you're unsure.
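The original code is Delphi, but the ping/recycle pattern itself is language-neutral. Here is a rough sketch of the idea (shown in Swift only because that is the one language this thread has code in); sendPing() and recycleCallback() are placeholders standing in for the Datasnap broadcast and register/de-register calls:

    import Foundation

    // Sketch of the "ping the channel, recycle the callback if no echo arrives" pattern.
    final class CallbackWatchdog {
        private var awaitingEcho = false

        func startPingTimer() {
            // Ping once a minute, as in the answer above.
            Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { [weak self] _ in
                self?.ping()
            }
        }

        private func ping() {
            awaitingEcho = true
            sendPing() // placeholder: broadcast "*ping" on the callback channel
            // Allow 5 seconds for the echo; otherwise assume the callback is dead.
            DispatchQueue.main.asyncAfter(deadline: .now() + 5) { [weak self] in
                guard let self = self, self.awaitingEcho else { return }
                self.recycleCallback() // placeholder: de-register the old callback, register a new one
            }
        }

        // Called by the registered callback when the "*ping" echo is received.
        func echoReceived() { awaitingEcho = false }

        private func sendPing() { /* placeholder */ }
        private func recycleCallback() { /* placeholder */ }
    }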

Asynchronous UI Testing in Xcode With Swift

I am writing an app that makes plenty of network requests. As usual they are async, i.e. the call to the request method returns immediately and the result is delivered via a delegate method or in a closure after some delay.
Now on my registration screen I send a register request to my backend and want to verify that the success UI is shown when the request finishes.
What options are out there to wait for the request to finish, verify the success UI and only after that leave the test method?
Also, are there any more clever options than waiting for the request to finish?
Thanks in advance!
Trivial Approach
Apple implemented major improvements in Xcode 9 / iOS 11 that enable you to wait for the appearance of a UI element. You can use the following one-liner:
<#yourElement#>.waitForExistence(timeout: 5)
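In context, a minimal UI test for the registration flow from the question might look like the following; the accessibility identifiers and label text are assumptions for illustration:

    import XCTest

    final class RegistrationUITests: XCTestCase {
        func testRegistrationShowsSuccessUI() {
            let app = XCUIApplication()
            app.launch()

            // Fill in the form and send the register request (identifiers are assumed).
            app.textFields["email"].tap()
            app.textFields["email"].typeText("user@example.com")
            app.buttons["Register"].tap()

            // Wait up to 5 seconds for the success UI to appear.
            let successLabel = app.staticTexts["Registration successful"]
            XCTAssertTrue(successLabel.waitForExistence(timeout: 5))
        }
    }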
Advanced Approach
In general UI and unit tests (referred to as tests here) must run as fast as possible so the developer can run them often and does not get frustrated by the need to run a slow test suite multiple times a day. In some cases, there is the possibility that an (internal or security-related) app accesses an API that can only be accessed from certain networks / IP ranges / hosts. Also, most CI services offer pretty bad hardware and limited internet-connection speed.
For all of those reasons, it is recommended to implement tests in a way that they make no real network requests. Instead, they run with fake data, so-called fixtures. A clever developer implements this test suite so that the source of the data can be switched with something as simple as a Boolean property. Additionally, when the switch is set to fetch real backend data, the fixtures can be refreshed/recorded from the backend automatically. This way it is pretty easy to update the fake data and quickly detect changes to the API.
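A minimal sketch of such a switch, assuming a hypothetical ProfileService with a bundled profiles_fixture.json file (the names are made up for illustration):

    import Foundation

    // Sketch of a data source that can be flipped between live requests and local fixtures.
    struct ProfileService {
        var useFixtures = true

        func loadProfiles(completion: @escaping (Data?) -> Void) {
            if useFixtures {
                // Local fixture data is available immediately, so the completion
                // handler fires with no network delay at all.
                let fixtureURL = Bundle.main.url(forResource: "profiles_fixture", withExtension: "json")
                completion(fixtureURL.flatMap { try? Data(contentsOf: $0) })
            } else {
                // Real request; the response could also be written back to the
                // fixture file here to "record" fresh fake data.
                let url = URL(string: "https://api.example.com/profiles")!
                URLSession.shared.dataTask(with: url) { data, _, _ in
                    completion(data)
                }.resume()
            }
        }
    }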
But the main advantage of this approach is speed. Your tests will not make real network requests but instead run against local data, which makes them independent of:
server issues
connection speed
network restrictions
This way you can run your tests very fast and thus much more often - which is a good way of writing code ("Test Driven Development").
On the other hand, you won't detect server changes immediately anymore, since the fake data won't change when the backend data changes. But this is solved by simply refreshing your fixtures using the switch you implemented (because you are a smart developer), which makes this issue a story you can tell your children!
But wait, I forgot something! Why is this a replacement for the trivial approach above, you ask? Simple! Since you use local data, which is available immediately, you can also call the completion handler immediately. So there is no delay between making the request and verifying your success UI. This means you don't need to wait, which makes your tests even faster!
I hope this helps some of my fellows out there. If you need more guidance regarding this topic don't hesitate and reply to this post.
Cya!

C# 5 .NET MVC long async task, progress report and cancel globally

I use ASP.NET MVC 5 and I have a long-running action which has to poll web services, process data and store it in a database.
For that I want to use the TPL (Task Parallel Library) to start the task asynchronously.
But I wonder how to do three things:
I want to report the progress of this task. For this I'm thinking about SignalR.
I want to be able to leave the page where I start this task and still be able to report its progress across the website (from a panel on the left, but that part is OK).
And I want to be able to cancel this task globally (from my panel on the left).
I know a fair amount about all of the technologies involved. But I'm not sure about the best way to achieve this.
Can someone help me with the best solution?
The fact that you want to run long running work while the user can navigate away from the page that initiates the work means that you need to run this work "in the background". It cannot be performed as part of a regular HTTP request because the user might cancel his request at any time by navigating away or closing the browser. In fact this seems to be a key scenario for you.
Background work in ASP.NET is dangerous. You can certainly pull it off, but it is not easy to get right. Also, worker processes can exit for many reasons (app pool recycle, deployment, machine reboot, machine failure, stack overflow or OOM exception on an unrelated thread). So make sure your long-running work tolerates being aborted mid-way. You can reduce the likelihood that this happens but never exclude the possibility.
You can make your code safe in the face of arbitrary termination by wrapping all work in a transaction. This of course only works if you don't cause non-transacted side-effects like web-service calls that change state. It is not possible to give a general answer here because achieving safety in the presence of arbitrary termination depends highly on the concrete work to be done.
Here's a possible architecture that I have used in the past:
When a job comes in you write all necessary input data to a database table and report success to the client.
You need a way to start a worker to work on that job. You could start a task immediately for that. You also need a periodic check that looks for unstarted work in case the app exits after having added the work item but before starting a task for it. Have the Windows task scheduler call a secret URL in your app once per minute that does this.
When you start working on a job you mark that job as running so that it is not accidentally picked up a second time. Work on that job, write the results and mark it as done. All in a single transaction. When your process happens to exit mid-way the database will reset all data involved.
Write job progress to a separate table row on a separate connection and separate transaction. The browser can poll the server for progress information. You could also use SignalR but I don't have experience with that and I expect it would be hard to get it to resume progress reporting in the presence of arbitrary termination.
Cancellation would be done by setting a cancel flag in the progress information row. The app needs to poll that flag.
Maybe you can make use of message queueing for job processing, but I'm always wary of using it. To process a message in a transacted way you need MSDTC, which is unsupported with many high-availability solutions for SQL Server.
You might think that this architecture is not very sophisticated. It makes use of polling for lots of things. Polling is a primitive technique but it works quite well. It is reliable and well-understood. It has a simple concurrency model.
If you can assume that your application never exits at inopportune times the architecture would be much simpler. But this cannot be assumed. You cannot assume that there will be no deployments during work hours and that there will be no bugs leading to crashes.
Even if using an HTTP worker to run a long task is a bad thing, I have made a small example of how to manage it with SignalR:
Inside this example you can:
Start a task
See task progression
Cancel task
It's based on:
Twitter Bootstrap
Knockout.js
SignalR
C# 5.0 async/await with CancellationToken and IProgress
You can find the source of this example here:
https://github.com/dragouf/SignalR.Progress

Using AFNetworking to process multiple JSON responses for a single request

I'm trying to find a way to open up a connection to a web service and have that service send down JSON objects on an as-needed basis.
Say I request 20 profiles from a service. Instead of waiting for the service to build all 20, the service would build the first profile and send it back down to the client immediately, continuing until all 20 have been created.
I've been using AFNetworking and would like to continue using it. Eventually I'd like to contribute this component back to the community if it requires an addition.
Anyone have any ideas on tackling something like this? Right now I have a service pushing JSON every few seconds to test with.
A couple of thoughts:
If you want to open a connection and respond to transmissions from the server, a socket-based model seems to make sense. See Ray Wenderlich's How To Create A Socket Based iPhone App and Server for an example (the server-side stuff will likely vary based on your server architecture, but it illustrates the idea). But AFNetworking is built on an NSURLConnection framework, not a socket framework, so if you wanted to integrate your socket classes into that framework, a not-inconsiderable amount of work would be involved.
Another, iOS-specific model is to use Apple's push notification service (see the push-related sections of the Local and Push Notification Programming Guide).
A third approach would be to stay with a pull mechanism: if you're looking for a way to consume multiple feeds in a non-serial fashion, you could create multiple AFURLConnectionOperation (or the appropriate subclass) operations and submit them concurrently (you may want to constrain maxConcurrentOperationCount on the queue to 4 or 5, as iOS can only have so many concurrent network operations). By issuing these concurrently, you mitigate many of the delays that result from network latencies. If you pursue this approach, some care might have to be taken for thread safety, but it's probably easier than the above two techniques.
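To make that third approach concrete, here is a rough sketch of the queue setup; it uses Foundation's OperationQueue rather than AFNetworking's operation classes, and the profile URL is a placeholder:

    import Foundation

    // Sketch: issue several profile requests concurrently, capped at 4 at a time.
    let queue = OperationQueue()
    queue.maxConcurrentOperationCount = 4

    for id in 1...20 {
        queue.addOperation {
            // Placeholder endpoint; a synchronous fetch keeps the sketch short,
            // AFNetworking's operation subclasses wrap this more cleanly.
            let url = URL(string: "https://api.example.com/profiles/\(id)")!
            if let data = try? Data(contentsOf: url) {
                print("profile \(id): \(data.count) bytes")
            }
        }
    }
    queue.waitUntilAllOperationsAreFinished()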
This sounds like a job for a socket (or a web socket, whatever is easier).
I don't believe there is support for this in AF. This could be implemented in NSURLConnection's connection:didReceiveData: delegate method. It is triggered every time a piece of data is received, so you can do your parsing and messaging from that point. Unfortunately, I can't think of a very clean way to implement this.
Perhaps a better approach to this is to handle the appropriate re-request via a pagination-style technique. You would request page 1 of profiles with one profile per page, then request page 2, etc. You could then control the flow, i.e. whether you want to request all in parallel or request one then the next sequentially. This would be less work to implement, and would (in my opinion) be cleaner and easier to maintain.
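A hedged sketch of that pagination idea, with the endpoint and paging parameters made up for illustration; each completion handler kicks off the request for the next page, which gives you sequential flow control:

    import Foundation

    // Request one profile per "page", sequentially.
    func fetchProfile(page: Int, upTo lastPage: Int) {
        guard page <= lastPage else { return }
        let url = URL(string: "https://api.example.com/profiles?page=\(page)&per_page=1")!
        URLSession.shared.dataTask(with: url) { data, _, _ in
            if let data = data {
                print("page \(page): \(data.count) bytes")
            }
            // Only start the next request once this one has come back.
            fetchProfile(page: page + 1, upTo: lastPage)
        }.resume()
    }

    fetchProfile(page: 1, upTo: 20)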
AFNetworking supports batching of requests with AFHTTPClient's -enqueueBatchOfHTTPRequestOperations:progressBlock:completionBlock:.
You can use this method to get a callback for each individual operation, as well as when all of the operations in the batch have finished.

How are IIS requests parallelized when using COMET?

I have an ASP.NET MVC 2 Beta application where I need to block incoming requests for a specific action until I have some data available to return or just release the request after 30 seconds with no new data available.
In order to accomplish this, I'm using AutoResetEvent.WaitOne(30000);
The big issue is that IIS does not seem to accept any new requests while the thread is blocked at the WaitOne instruction. New requests get hung until the thread is released.
I need to be able to parallelize the requests while still keeping the WaitOne behavior.
Async handlers are what you're looking for. If you're building a Comet solution, you may want to check out our .NET implementation of a Comet server here; it'll save you some time. If you're wanting to roll your own, you'll definitely need to use the async handlers to avoid hitting upper concurrency limits by the time you get past 60 or 70 users, but even with the async handlers, you'll still have to do some fancy footwork. Basically, you're still going to hit some upper limits in the thread pool unless you hand off the requests to a bounded thread pool that can manage all the incoming requests for you.
Good luck!
You should not be blocking incoming requests at all. If the data you need are not ready, then return an empty response, or perhaps return an error code.
For a web application, it is more advisable (not a hard rule) to return a message to tell the users to retry again later due to whatever reason you want to call it.
Stalling/blocking the requests by 'waiting' doesn't really help much, as the wait is nondeterministic, unless of course you have a mechanism to make it so.
I do not know the nature/context/traffic pattern of your website. 30 seconds can be a number that works for you. Perhaps my points above are not really relevant, just my 2 cents.
Actually, it turns out that this behavior only happens with ASP.NET MVC 2 Beta. I had this working fine with MVC 2 Preview 2 and rolled back to this version to re-test and confirmed that the application worked fine with that version.
Now, the question is: Why am I seeing this different behavior between these two MVC release versions, and what is the correct behavior I should expect to get in this scenario?
