I am interacting with a web-controlled hardware device. You send it a request via a URL (e.g., http://device/on?port=1 or http://device/off?port=3) to turn stuff on and off, and it sends back "success" or "failure". It is a simple device, however, so while it's processing a request --- i.e., until it returns the status of the request that it's processing --- it will ignore all subsequent requests. It does not queue them up; they just get lost.
So I need to send serial, synchronous requests. I.e., req#1, wait for response#1, req#2, wait for response#2, req#3, wait for response #3, etc.
Do I need to manage my own thread-safe queue of requests, have the UI thread push requests into one end of the queue, and have another thread pull the requests off, one at a time, as soon as the previous one either completes or times out, and send the results back to the UI thread? Or am I missing something in the API that already does this?
Thanks!
...R
What should work is to use an NSOperationQueue instance, and a number of NSOperation instances that perform the various URL requests.
First, set up a queue in the class that will be enqueueing the requests. Make sure to keep a strong reference to it, i.e.
@interface MyEnqueingClass ()
@property (nonatomic, strong) NSOperationQueue *operationQueue;
@end
Somewhere in the implementation, say the init method:
_operationQueue = [[NSOperationQueue alloc] init];
_operationQueue.maxConcurrentOperationCount = 1;
You basically want a serial queue, hence the maxConcurrentOperationCount of 1.
After setting this up, you'll want to write some code like this:
[self.operationQueue addOperationWithBlock:^{
    NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"my://URLString"]];
    NSError *error;
    NSURLResponse *response;
    NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];

    if (!responseData)
    {
        //Maybe try this request again instead of completely restarting? Depends on your application.
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            //Do something here to handle the error - maybe you need to cancel all the enqueued operations and start again?
            [self.operationQueue cancelAllOperations];
            [self startOver];
        }];
    }
    else
    {
        //Handle the success case.
    }
}];
[self.operationQueue addOperationWithBlock:^{
    //Make another request, according to the next instructions?
}];
In this way you send synchronous NSURLRequests one after the other, and you can handle the error conditions, including bailing out completely and starting over (the lines that call -cancelAllOperations).
You can also, of course, write custom NSOperation subclasses and enqueue instances of those rather than using blocks, if that serves you better.
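For illustration, a rough sketch of such a subclass might look like the one below (the class name, the ten-second timeout and the resultHandler property are just made up for the example; the point is that the work happens in -main and that instances go onto the same maxConcurrentOperationCount = 1 queue):

@interface DeviceRequestOperation : NSOperation
@property (nonatomic, copy) NSURL *requestURL; // e.g. http://device/on?port=1
@property (nonatomic, copy) void (^resultHandler)(NSData *responseData, NSError *error);
@end

@implementation DeviceRequestOperation

- (void)main
{
    if (self.isCancelled) {
        return;
    }

    NSURLRequest *request = [NSURLRequest requestWithURL:self.requestURL
                                             cachePolicy:NSURLRequestReloadIgnoringLocalCacheData
                                         timeoutInterval:10]; // time out so a lost request can't stall the queue

    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData *responseData = [NSURLConnection sendSynchronousRequest:request
                                                 returningResponse:&response
                                                             error:&error];

    // Report the result back on the main queue so the UI can react.
    if (self.resultHandler) {
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            self.resultHandler(responseData, error);
        }];
    }
}

@end

Because the queue is serial, each operation finishes (or times out) before the next one starts, which matches the device's one-request-at-a-time behaviour.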
Hope this helps, let me know if you have any questions!
You can use the NSOperationQueue class directly, or use a library built on top of it, for example AFNetworking.
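For example, with AFNetworking 2.x it could look roughly like this (sketch only; the device URL and port parameter are taken from the question, and the plain-text response serializer is an assumption based on the "success"/"failure" replies described above):

AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
manager.responseSerializer = [AFHTTPResponseSerializer serializer]; // device returns plain text, not JSON
manager.operationQueue.maxConcurrentOperationCount = 1;             // one request in flight at a time

[manager GET:@"http://device/on"
  parameters:@{@"port": @1}
     success:^(AFHTTPRequestOperation *operation, id responseObject) {
         // responseObject is NSData when using AFHTTPResponseSerializer
         NSString *reply = [[NSString alloc] initWithData:responseObject encoding:NSUTF8StringEncoding];
         NSLog(@"device replied: %@", reply);
     }
     failure:^(AFHTTPRequestOperation *operation, NSError *error) {
         NSLog(@"request failed: %@", error);
     }];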
I'm currently working on an iOS project that utilises the AWS SDK to download large media files to the device. I am using CloudFront to distribute the content and the downloads are working well; however, I am having problems implementing a network queue for these operations. No matter what I try, all the files want to download at once.
I am using the AWSContent downloadWithDownloadType: method to initiate and monitor progress on the actual downloads.
I have tried using an NSOperationQueue and setting setMaxConcurrentOperationCount, and all the code blocks execute at once. :(
I have a feeling it might be configurable with AWSServiceConfiguration in the AppDelegate, but the documentation is extremely vague on what variables you can pass into that object... http://docs.aws.amazon.com/AWSiOSSDK/latest/Classes/AWSServiceConfiguration.html
Has anyone had any experience with this?
TIA
Your problem is most likely that you misunderstand how asynchronous operations work.
I have tried using an NSOperationQueue and setting setMaxConcurrentOperationCount, and all the code blocks execute at once. :(
It's difficult to say what's definitely wrong without seeing the actual code; however, it most likely comes down to the following steps:
You create an NSOperationQueue
You set maxConcurrentOperationCount to 2, for example
You add 4 blocks to it that call AWSContent downloadWithDownloadType:
You expect no more than 2 downloads to run simultaneously
What you are probably doing wrong
The key is in point 3. What exactly does the block do? My guess is that it returns before the actual download completes. So if you have something like:
NSOperationQueue *queue = [NSOperationQueue new];
queue.maxConcurrentOperationCount = 2;

for (AWSContent *content in contentArray) { // Assume you already have this array
    [queue addOperationWithBlock:^{
        [content downloadWithDownloadType:AWSContentDownloadTypeIfNotCached
                          pinOnCompletion:YES
                            progressBlock:nil
                        completionHandler:^(AWSContent *content, NSData *data, NSError *error) {
                            // do some stuff here on completion
                        }];
    }];
}
then your block exits before the download is finished, allowing the next blocks to run on the queue and start further downloads.
What to try
You should simply add a synchronization mechanism to your block so that the operation completes only when the download's completion handler fires. Say:
NSOperationQueue *queue = [NSOperationQueue new];
queue.maxConcurrentOperationCount = 2;

for (AWSContent *content in contentArray) { // Assume you already have this array
    [queue addOperationWithBlock:^{
        dispatch_semaphore_t dsema = dispatch_semaphore_create(0);
        [content downloadWithDownloadType:AWSContentDownloadTypeIfNotCached
                          pinOnCompletion:YES
                            progressBlock:nil
                        completionHandler:^(AWSContent *content, NSData *data, NSError *error) {
                            // do some stuff here on completion
                            // ...
                            // it's important to signal in both the error and success cases to free the block from the queue
                            dispatch_semaphore_signal(dsema);
                        }];
        dispatch_semaphore_wait(dsema, DISPATCH_TIME_FOREVER); // or another dispatch_time if you want your own timeout instead of AWS's
    }];
}
Effectively your answer is https://stackoverflow.com/a/4326754/2392973
You just schedule plenty of such blocks to your operation queue.
More reading
https://developer.apple.com/reference/dispatch
I have already implemented NSOperationQueue successfully in my application.
I have one operation queue which might hold 1000 NSOperations like the one below.
@interface Operations : NSOperation
@end

@implementation Operations

- (void)main
{
    NSURL *url = [NSURL URLWithString:@"Your URL Here"];
    NSString *contentType = @"application/json";

    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request setHTTPMethod:@"POST"];
    [request addValue:contentType forHTTPHeaderField:@"Content-Type"];

    NSError *err = nil;
    NSData *body = [NSJSONSerialization dataWithJSONObject:postVars options:NSJSONWritingPrettyPrinted error:&err];
    [request setHTTPBody:body];
    [request addValue:[NSString stringWithFormat:@"%lu", (unsigned long)body.length] forHTTPHeaderField:@"Content-Length"];
    [request setTimeoutInterval:60];

    NSHTTPURLResponse *response = nil;
    NSError *error = nil;
    NSData *resData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
}

@end
Now I am adding all 1000 operations to that queue at once.
I add an operation like this:
Operations *operation = [[Operations alloc]init];
[downloadQueue addOperation:operation];
Now, the timeout interval is 60 seconds because of [request setTimeoutInterval:60].
So, for example, if only 300 of the 1000 operations have finished after 60 seconds, the other 700 operations start throwing request-timed-out errors.
What should I do in this case?
Can I resume the failed operations? Or should I create new operations and add them to the queue again?
Is there any better mechanism than this one?
1. My initial inclination would be, yes, just create new operations for the timed-out requests and toss 'em back into the queue. It just feels simpler since you already have logic for adding those operations for your requests. HOWEVER:
Be careful not to get into a sort of infinite loop. If just ONE of those fails indefinitely for whatever reason, your queue will keep on chugging. I'd keep a failure count so that you know to stop retrying a request after some finite number of attempts.
2. If you KNOW that a large number will always fail, consider batching them in some fashion. One option would be a chain of dependencies. E.g. you could try adding 200 of your ops, then a "wait" NSOperation, then the next 200, then another "wait" op, and so on. Make the first wait op depend on those first 200 requests. Then make the next 200 depend on that first wait op, and so on:
[batch 1: first 200 requests]
[wait 1: op that waits for all of batch 1]
[batch 2: next 200, each waits for wait 1]
[wait 2: op that waits for all of batch 2]
[batch 3: next 200, each waits for wait 2]
etc.
3. Basically like #2, but instead of a "wait" op, have a "done" op. Let's say you have an array of 1000 requests to send: toss 200 into the queue, then a "done" op which depends on those 200. When the "done" op runs (by definition, AFTER those 200 are done), it can then pull the next 200 from the array and toss them in (plus a new "done" op). See the sketch after this list.
(maybe even consider making the wait/done op "pause" a few seconds to give the server a breather)
In other words, with #2 and #3, I'm saying "blast 200 at once, wait, then blast the next 200, wait, etc." (200 is arbitrary on my part; you know better than I do what the best number is for your situation.)
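For what it's worth, a rough sketch of the "done" op idea in #3 could look like this (the Operations class and downloadQueue come from the question; the enqueueNextBatch: method, the pendingRequests array and the batch size of 200 are made up for illustration):

static const NSUInteger kBatchSize = 200;

- (void)enqueueNextBatch:(NSMutableArray *)pendingRequests // array of not-yet-enqueued Operations objects
{
    if (pendingRequests.count == 0) {
        return; // everything has been sent
    }

    NSUInteger count = MIN(kBatchSize, pendingRequests.count);
    NSArray *batch = [pendingRequests subarrayWithRange:NSMakeRange(0, count)];
    [pendingRequests removeObjectsInRange:NSMakeRange(0, count)];

    // The "done" op runs only after every operation in this batch has finished,
    // and then enqueues the next batch.
    NSBlockOperation *doneOp = [NSBlockOperation blockOperationWithBlock:^{
        [self enqueueNextBatch:pendingRequests];
    }];

    for (Operations *op in batch) {
        [doneOp addDependency:op];   // dependencies are set before doneOp is enqueued
        [downloadQueue addOperation:op];
    }
    [downloadQueue addOperation:doneOp];
}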
You can implement the NSURLConnectionDelegate method connection:didFailWithError: to check if the synchronous request failed, and if the error was that the request timed out. Then retry the connection in the delegate callback.
I should note that Apple strongly encourages use of NSURLSession over NSURLConnection, as many methods in NSURLConnection are now deprecated.
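For reference, the NSURLSession equivalent of one of these requests, with a timeout check in the completion handler, looks roughly like this (the URL is a placeholder, and how you re-enqueue a failed request is up to you):

NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/api"]
                                         cachePolicy:NSURLRequestUseProtocolCachePolicy
                                     timeoutInterval:60];

NSURLSessionDataTask *task =
    [[NSURLSession sharedSession] dataTaskWithRequest:request
                                    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
                                        if (error && error.code == NSURLErrorTimedOut) {
                                            // Timed out: decide whether to re-create the operation
                                            // and add it back to the queue, or give up.
                                        }
                                        else if (data) {
                                            // Handle the successful response.
                                        }
                                    }];
[task resume];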
The first point is - don't let them all run at the same time! You can easily flood a mobile data connection if you try to run even 10 requests at the same time. So, before anything else, set the maxConcurrentOperationCount of the queue to something like 5.
This alone will likely massively reduce your issue and you might not need to do anything else.
You should also really look at using NSURLSession; with it, you could remove the operation queue entirely and use HTTPMaximumConnectionsPerHost to control how many requests are made at the same time.
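A rough sketch of that configuration (the limit of 5 simply mirrors the suggestion above, not a recommendation from the documentation):

NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 5; // roughly the same throttling effect as maxConcurrentOperationCount = 5
config.timeoutIntervalForRequest = 60;    // per-request timeout, as in the question

NSURLSession *session = [NSURLSession sessionWithConfiguration:config];
// Create your data tasks against this session; it will open at most 5
// connections to the same host at a time.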
If you do still get failures, then one of the 'recursive type' options the other answers discuss is a workable solution. I'd add an attempt count to the operation subclass to make it easy to track; if an operation fails, check how many times it has failed and decide whether to create a new copy of the operation and add it to the queue, or to raise an alert.
Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code for sending, let's say, 50 network requests on a background thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    // send 50 network requests
});
EDIT:
After updating my code to something like this, no performance gain was achieved. :( (Taken from here.)
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();

// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);

// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
    dispatch_group_async(fetchGroup, fetchQ, ^(void) {
        dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);

        NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
        [headRequest setHTTPMethod:@"GET"];
        [headRequest addValue:cookieString forHTTPHeaderField:@"Cookie"];

        NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
        [NSURLConnection sendAsynchronousRequest:headRequest
                                           queue:queue // created at class init
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                                   // do something with data or handle error
                                   NSLog(@"request completed");
                               }];

        dispatch_semaphore_signal(downloadSema);
    });
}

// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);

// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
    //[self updateUIFunction];
});

// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue; this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially. This means that each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which raises the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
Although you should be careful that you don't submit too many network tasks to operate at the same time (you don't say specifically what they're doing) as it could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);

for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [self doNetworkTaskWithCompletion:^{
            dispatch_semaphore_signal(semaphore);
            NSLog(@"network task %i done", i);
        }];
    });
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You need to be signalling your semaphore from within the completion handler of the request instead of at the end of the dispatched block. Because the request is asynchronous, the semaphore currently gets signalled as soon as the request is sent off, therefore allowing another network task to start. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
                                   queue:queue // created at class init
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                           // do something with data or handle error
                           NSLog(@"request completed");
                           dispatch_semaphore_signal(downloadSema);
                       }];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests may be capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though, I was only answering this from a GCD stand-point.
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous requests will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
And there seems to be a bit of confusion here: usually all your network requests will run asynchronously. That is, you send the request to the OS (which usually goes very quickly), then nothing happens for a while, then a callback method of yours is called to process the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.
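As a small sketch of the "process every result on one serial queue" idea (urls is a placeholder array of NSURLs; the processing is whatever your app actually needs to do):

NSOperationQueue *processingQueue = [[NSOperationQueue alloc] init];
processingQueue.maxConcurrentOperationCount = 1; // serial: results are processed one at a time

for (NSURL *url in urls) {
    NSURLRequest *request = [NSURLRequest requestWithURL:url];
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:processingQueue
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                               // Runs on processingQueue, one completion at a time, so the
                               // (possibly expensive) processing never runs concurrently.
                               if (data) {
                                   // parse / process the result here
                               }
                           }];
}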
I am developing a static library that needs to do some stuff in the background, without interacting with the main thread. To give you an idea, think of just logging some user events. The library must keep doing this stuff until the user exits the app or sends it to the background (pushes the home button) - in other words it needs to keep doing stuff inside a loop.
The only interaction between the main app thread and the spawned thread is that occasionally the main app thread will put some stuff (an event object) into a queue that the spawned thread can read/consume. Other than that, the spawned thread just keeps going until the app exits or backgrounds.
Part of what the spawned thread needs to do (though not all of it) involves sending data to an HTTP server. I would have thought that it would be easy to subclass NSThread, override its main method, and just make a synchronous call to NSUrlConnection with some sort of timeout on that connection so the thread doesn't hang forever. For example, in Java/Android, we just subclass Thread, override the start() method and call a synchronous HTTP GET method (say from Apache's HttpClient class). This is very easy and works fine. But from what I have seen here and elsewhere, apparently on iOS it is much more complicated than this and I'm more than a bit confused as to what the best approach is that actually works.
So should I subclass NSThread and somehow use NSURLConnection? It seems the asynchronous NSURLConnection does not work inside an NSThread because delegate methods don't get called, but what about the synchronous method? Do I somehow need to use and configure the run loop and set up an autorelease pool? Or should I use an NSOperation? It seems to me that what I am trying to do is pretty common - does anyone have a working example of how to do this properly?
As I understand it, to use NSURLConnection asynchronously you need a runloop. Even if you use an NSOperation you still need a runloop.
All the examples I have seen use the main thread to start NSURLConnection, which has a run loop. The examples using NSOperation are set up so the operation is concurrent, which tells NSOperationQueue not to provide its own thread; they then make sure that NSURLConnection is started on the main thread, for example via a call to performSelectorOnMainThread:
Here is an example:
Pulse Engineering Blog: Concurrent Downloads using NSOperationQueues
You can also search the Apple documentation for QRunLoopOperation in the LinkedImageFetcher sample which is an example class showing some ins and outs of this kind of thing.
(Although I'm not sure I actually saw any code in that example showing how to run your own run loop; again, that example relies on the main thread.)
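For reference, the skeleton of such a "concurrent" NSOperation usually looks something like the sketch below. The class name is made up, error handling and cancellation-while-loading are left out, and QRunLoopOperation deals with several edge cases this does not:

@interface URLFetchOperation : NSOperation <NSURLConnectionDataDelegate>
@property (nonatomic, strong) NSURLRequest *request;
@property (nonatomic, strong) NSURLConnection *connection;
@property (nonatomic, strong) NSMutableData *receivedData;
@end

@implementation URLFetchOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start
{
    if (self.isCancelled) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Start the connection on the main thread so the delegate callbacks have a
    // run loop to be delivered on (the same trick the linked post uses).
    [self performSelectorOnMainThread:@selector(startConnection) withObject:nil waitUntilDone:NO];
}

- (void)startConnection
{
    self.receivedData = [NSMutableData data];
    self.connection = [[NSURLConnection alloc] initWithRequest:self.request delegate:self];
}

- (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
{
    [self.receivedData appendData:data];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    // hand self.receivedData to whoever needs it, then mark the operation done
    [self finish];
}

- (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error
{
    [self finish];
}

- (void)finish
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}

@end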
I've used the grand central dispatch (GCD) methods to achieve this. Here is an example that worked for me in a simple test app (I'm not sure if it applies in a static library, but may be worth a look). I'm using ARC.
In the example, I am kicking off some background work from my viewDidLoad method, but you can kick it off from anywhere. The key is that "dispatch_async(dispatch_get_global_queue…" runs the block in a background thread. See this answer for a good explanation of that method: https://stackoverflow.com/a/12693409/215821
Here is my viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
        [self doStuffInBackground];
    });
}
The doStuffInBackground method is running in the background at this point, so you can just use NSURLConnection synchronously. In my example here, the method loops making network calls until presumably some other code sets backgroundStuffShouldRun = false. A network call is made with a 10 second timeout. After the call, I'm updating a UI label just to show progress. Note that the UI update is performed with "dispatch_async(dispatch_get_main_queue()…". This runs the UI update on the UI thread, as required.
One potential issue with this background work: there isn't a way to cancel the http request itself. But, with a 10 second timeout, you'd be waiting a max of 10 seconds for the thread to abort itself after an outsider (likely some event in your UI) sets backgroundStuffShouldRun = false.
- (void)doStuffInBackground
{
    while (backgroundStuffShouldRun) {
        // prepare for network call...
        NSURL *url = [[NSURL alloc] initWithString:@"http://maps.google.com/maps/geo"];

        // set a 10 second timeout on the request
        NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:10];

        NSError *error = nil;
        NSURLResponse *response = nil;

        // make the request
        NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];

        // were we asked to stop the background processing?
        if (!backgroundStuffShouldRun) {
            return;
        }

        // process response...
        NSString *status = @"Success";
        if (error) {
            if (error.code == NSURLErrorTimedOut) {
                // handle timeout...
                status = @"Timed out";
            }
            else {
                // handle other errors...
                status = @"Other error";
            }
        }
        else {
            // success, handle the response body
            NSString *dataAsString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
            NSLog(@"%@", dataAsString);
        }

        // update the UI with our status
        dispatch_async(dispatch_get_main_queue(), ^{
            [statusLabel setText:[NSString stringWithFormat:@"completed network call %d, status = %@", callCount, status]];
        });

        callCount++;
        sleep(1); // 1 second breather. not necessary, but good idea for testing
    }
}
I have an NSOperationQueue that handles importing data from a web server on a loop. It accomplishes this with the following design.
1. NSURLConnection is wrapped in an NSOperation and added to the queue.
2. On successful completion of the download (using a block), the data from the request is wrapped in another NSOperation that adds the relevant data to Core Data. This operation is added to the queue.
3. On successful completion (using another block), and after a specified delay, I call the method that started it all and return to step 1. Thus, I make another server call x seconds later.
This works great. I'm able to get data from the server and handle everything in the background. And because these are just NSOperations, I'm able to put everything in the background and perform multiple requests at a time. This works really well.
The ONLY problem that I currently have is that I'm unable to successfully cancel the operations once they are going.
I've tried something like the following:
- (void)flushQueue
{
    self.isFlushingQueue = YES;
    [self.operationQueue cancelAllOperations];
    [self.operationQueue waitUntilAllOperationsAreFinished];
    self.isFlushingQueue = NO;

    NSLog(@"successfully flushed Queue");
}
where self.isFlushingQueue is a BOOL that I use to check before adding any new operations to the queue. This seems like it should work, but in fact it does not. Any ideas on stopping my Frankenstein creation?
Edit (Solved problem, but from a different perspective)
I'm still baffled about why exactly I was unable to cancel these operations (I'd be happy to keep trying possible solutions), but I had a moment of insight on how to solve this problem in a slightly different way. Instead of dealing with cancelling operations and waiting until the queue is finished, I decided to just have a data structure (NSMutableDictionary) that holds a list of all active connections. Something like this:
self.activeConnections = [NSMutableDictionary dictionaryWithDictionary:@{
    @"UpdateContacts": @YES,
    @"UpdateGroups": @YES}];
And then before I add any operation to the queue, I simply ask if that particular call is on or off. I've tested this, and I successfully have finite control over each individual server request that I want to be looping. To turn everything off I can just set all connections to @NO.
There are a couple of downsides to this solution (I have to manually manage an additional data structure, and every operation still has to start and check whether it's on or off before it terminates).
Edit -- In pursuit of a more accurate solution
I stripped out all code that isn't relevant (notice there is no error handling). I posted two methods. The first is an example of how the request NSOperation is created, and the second is the convenience method for generating the completion block.
Note the completion block generator is called by dozens of different requests similar to the first method.
- (void)updateContactsWithOptions:(NSDictionary *)options
{
    // Hard coded for ease of understanding
    NSString *contactsURL = @"api/url";
    NSDictionary *params = @{@"sortBy" : @"LastName"};

    NSMutableURLRequest *request = [self createRequestUsingURLString:contactsURL andParameters:params];

    ConnectionCompleteBlock processBlock = [self blockForImportingDataToEntity:@"Contact"
                                                                 usingSelector:@selector(updateContactsWithOptions:)
                                                                   withOptions:options
                                                            andParsingSelector:@selector(requestUsesRowsFromData:)];

    BBYConnectionOperation *op = [[BBYConnectionOperation alloc] initWithURLRequest:request
                                                                        andDelegate:self
                                                                 andCompletionBlock:processBlock];

    // This used to check using self.isFlushingQueue
    if ([[self.activeConnections objectForKey:@"UpdateContacts"] isEqualToNumber:@YES]) {
        [self.operationQueue addOperation:op];
    }
}
- (ConnectionCompleteBlock)blockForImportingDataToEntity:(NSString *)entityName usingSelector:(SEL)loopSelector withOptions:(NSDictionary *)options andParsingSelector:(SEL)parseSelector
{
    return ^(BOOL success, NSData *connectionData, NSError *error) {
        // Pull out variables from options
        BOOL doesLoop = [[options valueForKey:@"doesLoop"] boolValue];
        NSTimeInterval timeInterval = [[options valueForKey:@"interval"] integerValue];

        // Data processed before importing to Core Data
        NSData *dataToImport = [self performSelector:parseSelector withObject:connectionData];

        BBYImportToCoreDataOperation *importOperation = [[BBYImportToCoreDataOperation alloc] initWithData:dataToImport
                                                                                                andContext:self.managedObjectContext
                                                                                   andNameOfEntityToImport:entityName];

        [importOperation setCompletionBlock:^(BOOL success, NSError *error) {
            if (success) {
                NSLog(@"Import %@s was successful", entityName);
                if (doesLoop == YES) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [self performSelector:loopSelector withObject:options afterDelay:timeInterval];
                    });
                }
            }
        }];

        [self.operationQueue addOperation:importOperation];
    };
}
Cancellation of an NSOperation is just a request: a flag that is set on the NSOperation. It's up to your NSOperation subclass to actually act on that request and cancel its work. You then need to ensure you have set the correct flags for isExecuting and isFinished, etc., and you need to do this in a KVO-compliant manner. Only once these flags are set is the operation finished.
There is an example in the documentation, in Concurrency Programming Guide -> Configuring Operations for Concurrent Execution, although I understand that this example may not correctly account for all multi-threaded edge cases. Another, more complex example is provided in the sample code LinkedImageFetcher: QRunLoopOperation.
If you think you are responding to the cancellation request correctly, then you really need to post your NSOperation subclass code so the problem can be examined further.
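As a minimal illustration for a plain (non-concurrent) operation: cancelAllOperations only sets the isCancelled flag, so the main method has to keep checking it itself (self.request and processData:error: here are placeholders):

- (void)main
{
    if (self.isCancelled) {
        return; // cancelled while still waiting in the queue
    }

    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData *data = [NSURLConnection sendSynchronousRequest:self.request
                                         returningResponse:&response
                                                     error:&error];

    if (self.isCancelled) {
        return; // cancelled while the request was in flight; discard the result
    }

    [self processData:data error:error];
}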
Instead of using your own flag for when it is OK to add more operations, you could try the
- (void)setSuspended:(BOOL)suspend
method on NSOperationQueue. And before adding a new operation, check whether the queue is suspended with isSuspended.
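A quick sketch of that approach (operation here is whatever you were about to enqueue):

[self.operationQueue setSuspended:YES]; // already-running operations finish; queued ones wait

// ... later, before enqueueing more work:
if (!self.operationQueue.isSuspended) {
    [self.operationQueue addOperation:operation];
}

// ... and to let the queue drain again:
[self.operationQueue setSuspended:NO];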