I want to achieve the following task using NSThread.
I have a main thread and three worker threads, T1, T2, and T3, all started at the same time from the main thread. The main thread has an int variable, size. I want to synchronize the three threads so that, as each of them executes, it prints in the pattern shown below. Here is my code:
// in main thread
- (void)mainFunction {
    size = 0;

    NSThread *T1 = [[NSThread alloc] initWithTarget:self
                                           selector:@selector(workerThread:)
                                             object:@"T1"];
    [T1 start];

    NSThread *T2 = [[NSThread alloc] initWithTarget:self
                                           selector:@selector(workerThread:)
                                             object:@"T2"];
    [T2 start];

    NSThread *T3 = [[NSThread alloc] initWithTarget:self
                                           selector:@selector(workerThread:)
                                             object:@"T3"];
    [T3 start];
}

// worker thread
- (void)workerThread:(id)obj {
    size++;
    NSLog(@"Thread:%@--------Size:%d", obj, size);
}
I want the following output:
Thread:T1-------Size:1
Thread:T2-------Size:2
Thread:T3-------Size:3
Thread:T1-------Size:4
Thread:T2-------Size:5
Thread:T3-------Size:6
Thread:T1-------Size:7
Thread:T2-------Size:8
Thread:T3-------Size:9
and return control back to the main thread when size = 10.
A couple of thoughts:
You say "return control back to main thread when size=10". That doesn't quite make sense. The main thread never "lost" control (as these threads are happening concurrently). Perhaps you wanted something to happen on the main thread when this situation arose?
You're not having the workerThread method do any looping, so as you've written it, each thread will do this once and then quit. You probably need to add some form of loop here.
Even if you added looping, your desired output suggests that you're assuming a particular sequence of actions that would take place, namely that these three threads will run in order (but you have no such assurances). If you needed that behavior, you'd set up a series of semaphores by which you could have one thread waiting for a signal to be sent by another.
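For illustration only, here is a minimal sketch of that semaphore idea using GCD semaphores (the sem1/sem2/sem3 property names are my own, declared as dispatch_semaphore_t): T1's semaphore starts at 1 so it runs first; each thread waits on its own semaphore and signals the next thread's when it has finished incrementing.
// created before starting the threads
self.sem1 = dispatch_semaphore_create(1);   // T1 may go first
self.sem2 = dispatch_semaphore_create(0);
self.sem3 = dispatch_semaphore_create(0);

// inside T1's loop (T2 and T3 are analogous, rotating the semaphores)
dispatch_semaphore_wait(self.sem1, DISPATCH_TIME_FOREVER);   // wait for my turn
size++;
NSLog(@"Thread:T1--------Size:%d", size);
dispatch_semaphore_signal(self.sem2);                        // hand the baton to T2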
You should be careful when updating a variable from different threads. See the Synchronization section of the Threading Programming Guide. It's simplified when dealing with a fundamental data type like your counter (just make sure you declare it as atomic). But in more substantive scenarios, you'll want to employ some synchronization technique such as @synchronized, locks, a dedicated custom serial queue, etc.
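For example, a minimal @synchronized version of the same increment (functionally equivalent to the NSLock approach used below):
@synchronized (self) {
    size++;
    NSLog(@"Thread:%@--------Size:%d", obj, size);
}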
In general, if you're using threads (though not necessarily if using queues), you should be creating an autorelease pool for your thread.
Anyway, with these observations aside, you might have something like the following, which (a) has @autoreleasepool; (b) loops; and (c) uses a lock to make sure that the various threads synchronize their interactions with the size variable:
- (void)workerThread:(id)obj
{
    @autoreleasepool {
        BOOL done = NO;
        while (!done) {
            [self.lock lock];
            if (size < 9) {
                size++;
                NSLog(@"Thread:%@--------Size:%d", obj, size);
            } else {
                done = YES;
            }
            [self.lock unlock];

            // perhaps you're doing something time consuming here...
        }
    }
}
This assumes you have an NSLock property, called lock:
@property (nonatomic, strong) NSLock *lock;
Which you created before initiating your thread test:
- (void)threadTest
{
    size = 0;
    self.lock = [[NSLock alloc] init];

    NSThread *T1 = [[NSThread alloc] initWithTarget:self selector:@selector(workerThread:) object:@"T1"];
    [T1 start];

    NSThread *T2 = [[NSThread alloc] initWithTarget:self selector:@selector(workerThread:) object:@"T2"];
    [T2 start];

    NSThread *T3 = [[NSThread alloc] initWithTarget:self selector:@selector(workerThread:) object:@"T3"];
    [T3 start];
}
Having said all this, you started this with "return control back to the main thread". As I said earlier, there's really no need to do that, because in your example your app's main thread never yielded control in the first place.
For controlling relationships between tasks happening on different threads, I might suggest using GCD or operation queues. They're easier to use and have better mechanisms for controlling dependencies between various tasks/operations (see the Concurrency Programming Guide).
For example, consider the operation-based equivalent to your above workerThread method (identical, but no autorelease pool is needed):
- (void)operationMethod:(id)obj
{
    BOOL done = NO;
    while (!done) {
        [self.lock lock];
        if (size < 9) {
            size++;
            NSLog(@"Operation:%@--------Size:%d", obj, size);
        } else {
            done = YES;
        }
        [self.lock unlock];

        // perhaps you're doing something time consuming here...
    }
}
You could then create three operations (which probably will run on three threads) and wait for the result, like so:
- (void)operationTestWait
{
    size = 0;
    self.lock = [[NSLock alloc] init];

    NSOperationQueue *queue = [[NSOperationQueue alloc] init];

    NSOperation *op1 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operationMethod:) object:@"Op1"];
    NSOperation *op2 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operationMethod:) object:@"Op2"];
    NSOperation *op3 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operationMethod:) object:@"Op3"];

    [queue addOperations:@[op1, op2, op3] waitUntilFinished:YES];

    // do here whatever should happen when the operations are done
}
In this case, the main thread will wait for these three operations to finish.
Or, better, if these tasks take more than a few milliseconds, you should not have the main queue wait (since you should never block the main queue), but rather, you should simply define what you want the main queue to do when the three operations are done:
- (void)operationTest
{
    size = 0;
    self.lock = [[NSLock alloc] init];

    NSOperationQueue *queue = [[NSOperationQueue alloc] init];

    NSOperation *op1 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operationMethod:) object:@"Op1"];
    NSOperation *op2 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operationMethod:) object:@"Op2"];
    NSOperation *op3 = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(operationMethod:) object:@"Op3"];

    NSOperation *completion = [NSBlockOperation blockOperationWithBlock:^{
        // if you want this to do something on the main queue, then have it add that work to the main queue
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            // do here whatever should happen when the operations are done
        }];
    }];

    // define this completion operation to be dependent upon the above three operations
    [completion addDependency:op1];
    [completion addDependency:op2];
    [completion addDependency:op3];

    // now add all of them, but don't wait until finished;
    // the completion operation will only start when its dependencies are resolved
    [queue addOperations:@[op1, op2, op3, completion] waitUntilFinished:NO];
}
Forgive the long-winded answer. If you can give us a more practical example of what these various threads will be doing, we can provide better counsel on how best to tackle it. But, in general, operation queues or dispatch queues will probably be a more efficient way to tackle most concurrency-related challenges.
Related
Maybe I'm letting NSOperation play the wrong role in a non-concurrent job. My requirement is: I want to run a lot of async jobs, but I want them to complete in order. When task 1's async callback has finished, task 2 can start. I made each task an NSOperation. However, NSOperation is used for multithreaded programming most of the time, so perhaps my choice is wrong. It also made me think more about NSOperation in this case: we can't manually manage isFinished and isExecuting from an async block, since a non-concurrent NSOperation is considered finished (and released) as soon as main returns, which means I can't use the powerful operation queue to manage the tasks automatically. Any ideas? Thanks for your answer.
Edit, with code:
- (void)main {
    [super main];
    self.isOperationExcuting = YES;
    self.isOperationFinished = NO;
    WEAKSELF
    [self query:^(NSArray *array, NSError *error) {
        // I set my custom properties here, but that does not cause my NSOperation to be finished
        weakSelf.isOperationFinished = YES;
        weakSelf.isOperationExcuting = NO;
    }];
}

- (void)query:(void (^)(NSArray *array, NSError *error))block {
    BmobQuery *query = [BmobQuery queryWithClassName:@"Room"];
    [query findObjectsInBackgroundWithBlock:block];
}

- (BOOL)isFinished {
    return self.isOperationFinished;
}

- (BOOL)isExecuting {
    return self.isOperationExcuting;
}

- (void)start {
    [super start];
    NSLog(@"start");
}

- (void)cancel {
    [super cancel];
    NSLog(@"cancel");
}
Just make all added operations serial by setting maxConcurrentOperationCount to 1.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;

[queue addOperation:operation1];
[queue addOperation:operation2];
[queue addOperation:operation3];
NSOperationQueue
The NSOperationQueue class regulates the execution of a set of NSOperation objects. After being added to a queue, an operation remains in that queue until it is explicitly canceled or finishes executing its task.
If I create an NSOperation via NSInvocationOperation, does completion of the chosen selector cause the NSOperation to complete and be removed from the operation queue?
For example:
...
NSDictionary *params = @{KEY_SERVER_ID: serverId, KEY_USERNAME: username, KEY_PASSWORD: password};
NSInvocationOperation *op = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(publishBulletinBoardRead:) object:params];
[[NSOperationQueue currentQueue] addOperation:op];
...
When publishBulletinBoardRead: returns, can I assume that the operation is removed from the queue?
I need to send 100 network requests to my server one-by-one and get notified when the 100th is done.
I'm using AFNetworking and was thinking about a solution to this problem. Can anyone recommend something?
A couple of thoughts:
If you're really just going to run each request serially (i.e., one after another), you could do:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;

NSOperation *completionOperation = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"All operations done");
}];

for (NSInteger i = 0; i < operationCount; i++) {
    AFHTTPRequestOperation *operation = ... // create your operation here
    [completionOperation addDependency:operation];
    [queue addOperation:operation];
}
[queue addOperation:completionOperation];
Note, using an operation queue like this offers the advantage that you can easily cancel all the operations in that queue should you ever need to.
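For example, if the user navigates away before everything has run, something like the following abandons whatever is still pending (operations that are already executing have to check isCancelled themselves):
[queue cancelAllOperations];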
If the order that these are performed is critical, you might want to establish explicit dependencies between the operations, e.g.:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 1;

NSOperation *completionOperation = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"All operations done");
}];

NSOperation *priorOperation = nil;
for (NSInteger i = 0; i < operationCount; i++) {
    AFHTTPRequestOperation *operation = ... // create your operation here
    [completionOperation addDependency:operation];
    if (priorOperation) [operation addDependency:priorOperation];
    [queue addOperation:operation];
    priorOperation = operation;
}
[queue addOperation:completionOperation];
The question for me is whether you absolutely only want to run one at a time. You pay a significant performance penalty for that. Generally you'd use that first code sample (where the only explicit dependencies are to the completion operation) and set maxConcurrentOperationCount to something like 4, enjoying concurrency and its consequent significant performance gain, while at the same time constraining the degree of concurrency to some reasonable number that won't use up all of your worker threads, risk having requests time out, etc.
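For example, a minimal variation on the first sample above (nothing else changes):
queue.maxConcurrentOperationCount = 4;   // allow up to four requests in flight at a time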
You haven't said what these 100 operations are, but if it's a bunch of downloads, you might want to consider a "lazy loading" pattern, loading the data asynchronously as you need it, rather than all at once.
If downloading images, for example, you might achieve this using the AFNetworking UIImageView category.
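For instance, with the UIImageView+AFNetworking category you might do something like the following in cellForRowAtIndexPath: (the URL and placeholder image name here are just placeholders of my own):
#import "UIImageView+AFNetworking.h"

[cell.imageView setImageWithURL:[NSURL URLWithString:@"http://example.com/images/42.jpg"]
               placeholderImage:[UIImage imageNamed:@"placeholder"]];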
This is a specific form of a common question, which is "how do I call a sequence of block operations and get notified when the last one finishes?"
One idea is to make a "to-do list" using the parameters for each request. Say each request takes a number 0..99. Now the pseudo-code would look like this:
@property (nonatomic, copy) void (^done)(BOOL);      // we'll need to save a completion block
@property (nonatomic, strong) NSMutableArray *todo;  // might as well save this too

- (void)makeRequestsThenInvoke:(void (^)(BOOL))done {
    self.todo = [NSMutableArray arrayWithArray:@[@99, @98, @97 ... @0]];
    // make this in a loop using the real params of your network requests (whatever distinguishes each request)
    self.done = done;
    [self makeRequests];
}

- (void)makeRequests {
    if (!self.todo.count) {   // nothing todo? then we're done
        self.done(YES);
        self.done = nil;      // avoid caller-side retain cycle
        return;
    }

    // otherwise, get the next item todo
    NSNumber *param = [self.todo lastObject];

    // build a url with param, e.g. http://myservice.com/request?param=%@ <- param goes there
    [afManager post:url success:^(AFHTTPRequestOperation *operation, id responseObject) {
        // handle the result

        // now update the todo list
        [self.todo removeLastObject];

        // call ourself to do more, but use performSelector so we don't wind up the stack
        [self performSelector:@selector(makeRequests) withObject:nil afterDelay:0.0];
    }];
}
I've got an NSOperation queue, and four NSOperations which run in it.
NSOperationQueue *myQueue = [[NSOperationQueue alloc] init];
NSOperation *readOperation = [[NSOperation alloc] init];
NSOperation *postOperation = [[NSOperation alloc] init];
NSOperation *deleteOperation = [[NSOperation alloc] init];
I'm aware that cancel can be called on an NSOperation object. If I call
[postOperation cancel];
does it get cancelled immediately from myQueue?
Also I would like to cancel the deleteOperation from the postOperation.
Does this work?
postOperation = [NSBlockOperation blockOperationWithBlock:^{
    [deleteOperation cancel];
    /**** do a HTTP post ****/
}];
[myQueue addOperation:postOperation];
Essentially I want to cancel the delete operation before I do the POST, if that operation is executing. Also, does
[myQueue setMaxConcurrentOperationCount:1];
ensure that the operation queue is FIFO?
Per NSOperation documentation:
... if an operation is in a queue but waiting on unfinished dependent operations, those operations are subsequently ignored. ... allows the operation queue to call the operation’s start method sooner and clear the object out of the queue.
The queue will call the operation's start method immediately, which should then mark it as finished without doing any useful work.
Note that you could override this method in subclasses. Apple asks you to recreate the same behavior as NSOperation, but it's still up to the developer.
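Also note that cancel does not forcibly stop an operation that is already executing; an executing operation has to notice the cancellation itself. A hedged sketch using a block operation (the weak reference simply avoids a retain cycle):
NSBlockOperation *deleteOperation = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakDelete = deleteOperation;
[deleteOperation addExecutionBlock:^{
    if (weakDelete.isCancelled) return;   // bail out if we were cancelled before/while running
    // ... perform the delete ...
}];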
does [myQueue setMaxConcurrentOperationCount:1]; ensure that the operation queue is FIFO?
That's a separate question. The answer is no. You don't have control over the order of operations other than setting dependencies (which is what you should be doing).
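For example, if the read must finish before the post starts, you would express that explicitly, setting the dependency before the operations are added to the queue:
[postOperation addDependency:readOperation];   // post won't start until read has finished
[myQueue addOperations:@[readOperation, postOperation] waitUntilFinished:NO];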
Apple's Grand Central Dispatch (GCD) is great, but only works on iOS 4.0 or greater. Apple's documentation says, "[A] serialized operation queue does not offer quite the same behavior as a serial dispatch queue in Grand Central Dispatch does" (because the queue is not FIFO, but order is determined by dependencies and priorities).
What is the right way to achieve the same effect as GCD's serial dispatch queues while supporting OS versions before GCD was released? Or put another way, what is the recommended way to handle simple background processing (doing web service requests, etc.) in iOS apps that want to support versions less than 4.0?
How about this PseudoSerialQueue? It is a minimal implementation, similar to a GCD serial dispatch queue.
#import <Foundation/Foundation.h>

@interface PseudoTask : NSObject
{
    id  target_;
    SEL selector_;
    id  queue_;
}
@property (nonatomic, readonly) id target;

- (id)initWithTarget:(id)target selector:(SEL)selector queue:(id)queue;
- (void)exec;
@end

@implementation PseudoTask
@synthesize target = target_;

- (id)initWithTarget:(id)target selector:(SEL)selector queue:(id)queue
{
    self = [super init];
    if (self) {
        target_ = [target retain];
        selector_ = selector;
        queue_ = [queue retain];
    }
    return self;
}

- (void)exec
{
    [target_ performSelector:selector_];
}

- (void)dealloc
{
    [target_ release];
    [queue_ release];
    [super dealloc];
}
@end

@interface PseudoSerialQueue : NSObject
{
    NSCondition    *condition_;
    NSMutableArray *array_;
    NSThread       *thread_;
}
- (void)addTask:(id)target selector:(SEL)selector;
@end

@implementation PseudoSerialQueue

- (id)init
{
    self = [super init];
    if (self) {
        array_ = [[NSMutableArray alloc] init];
        condition_ = [[NSCondition alloc] init];
        thread_ = [[NSThread alloc] initWithTarget:self
                                          selector:@selector(execQueue)
                                            object:nil];
        [thread_ start];
    }
    return self;
}

- (void)addTask:(id)target selector:(SEL)selector
{
    [condition_ lock];
    PseudoTask *task = [[PseudoTask alloc]
        initWithTarget:target selector:selector queue:self];
    [array_ addObject:task];
    [condition_ signal];
    [condition_ unlock];
}

- (void)quit
{
    [self addTask:nil selector:nil];
}

- (void)execQueue
{
    for (;;) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        [condition_ lock];
        while (array_.count == 0)
            [condition_ wait];
        PseudoTask *task = [array_ objectAtIndex:0];
        [array_ removeObjectAtIndex:0];
        [condition_ unlock];
        if (!task.target) {        // the nil task added by -quit is the sentinel that ends the thread
            [task release];
            [pool drain];
            break;
        }
        [task exec];
        [task release];
        [pool drain];
    }
}

- (void)dealloc
{
    [array_ release];
    [condition_ release];
    [thread_ release];
    [super dealloc];
}
@end
How to use:
PseudoSerialQueue *q = [[[PseudoSerialQueue alloc] init] autorelease];
[q addTask:self selector:@selector(test0)];
[q addTask:self selector:@selector(test1)];
[q addTask:self selector:@selector(test2)];
[q quit];
It seems like people are going to a lot of effort to rewrite NSRunLoop. Per the NSRunLoop documentation:
Your application cannot either create or explicitly manage NSRunLoop objects. Each NSThread object, including the application's main thread, has an NSRunLoop object automatically created for it as needed.
So surely the trivial answer would be, to create a usable queue:
- (void)startRunLoop:(id)someObject
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    [[NSRunLoop currentRunLoop] run];
    [pool release];
}

...

NSThread *serialDispatchThread = [[NSThread alloc] initWithTarget:self
                                                         selector:@selector(startRunLoop:)
                                                           object:nil];
[serialDispatchThread start];
To add a task to the queue:
[object performSelector:@selector(whatever:)
               onThread:serialDispatchThread
             withObject:someArgument
          waitUntilDone:NO];
Per the Threading Programming Guide section on Run Loops:
Cocoa defines a custom input source that allows you to perform a selector on any thread. ... perform selector requests are serialized on the target thread, alleviating many of the synchronization problems that might occur with multiple methods being run on one thread.
So you've got an explicitly serial queue. Of course, mine isn't fantastically written because I've told the run loop to run forever, and you may prefer one you can terminate later, but those are easy modifications to make.
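For example, a hedged sketch of one such modification (shouldKeepRunning is an ivar of my own invention): attach a dummy port so the run loop always has a source to monitor, and spin it with runMode:beforeDate: so the loop can be ended later by flipping the flag from the serial thread itself.
- (void)startRunLoop:(id)someObject
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // keep the run loop alive even while no perform-selector requests are pending
    [[NSRunLoop currentRunLoop] addPort:[NSPort port] forMode:NSDefaultRunLoopMode];

    shouldKeepRunning = YES;   // flip to NO (on this thread) to shut the queue down
    while (shouldKeepRunning &&
           [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                    beforeDate:[NSDate distantFuture]]) {
        // each pass services pending input sources, e.g. performSelector:onThread: requests
    }

    [pool release];
}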
You can simulate it using NSOperationQueue; just set the maximum concurrent operation count to one.
EDIT
Oops, I should have read more carefully. A FIFO solution follows.
I can't think of a way that the majority of iOS devs would use in your situation. I'm not afraid of writing threaded programs, so here is one solution:
Create a FIFO worker queue that:
supports locking
holds one NSOperationQueue
holds an NSOperation subclass, designed to pull workers from the FIFO queue in its implementation of main; only one may exist at a time
holds an NSArray of workers to be run (defining a worker is up to you - is it an NSInvocation, class, operation, ...)
The NSOperation subclass pulls workers from the FIFO worker queue until the FIFO worker queue is exhausted.
When the FIFO worker queue has workers and no active child operation, it creates a child operation and adds it to its operation queue.
There are a few pitfalls if you aren't comfortable writing threaded programs -- for this reason, this solution is not ideal for everybody, but it would not take very long to write if you are already comfortable using all of the technologies required.
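A rough sketch of that design, just to make the moving parts concrete (the class and method names are my own, workers are modeled as NSInvocations, and it's written with manual retain/release to match the pre-ARC era this question targets; a real implementation needs more care around shutdown and error handling):
@interface FIFOWorkerQueue : NSObject
{
    NSMutableArray   *workers_;   // pending workers, in FIFO order
    NSOperationQueue *queue_;
    BOOL              draining_;  // YES while the single child operation is alive
}
- (void)addWorker:(NSInvocation *)worker;
@end

@implementation FIFOWorkerQueue

- (id)init
{
    if ((self = [super init])) {
        workers_ = [[NSMutableArray alloc] init];
        queue_ = [[NSOperationQueue alloc] init];
        [queue_ setMaxConcurrentOperationCount:1];
    }
    return self;
}

- (void)addWorker:(NSInvocation *)worker
{
    @synchronized (self) {
        [workers_ addObject:worker];
        if (!draining_) {
            // no active child operation: create one to drain the worker list
            draining_ = YES;
            NSInvocationOperation *drainOp = [[NSInvocationOperation alloc]
                initWithTarget:self selector:@selector(drain) object:nil];
            [queue_ addOperation:drainOp];
            [drainOp release];
        }
    }
}

// runs on the operation queue; pulls workers in FIFO order until none are left
- (void)drain
{
    for (;;) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSInvocation *next = nil;
        @synchronized (self) {
            if ([workers_ count] == 0) {
                draining_ = NO;
                [pool drain];
                return;
            }
            next = [[workers_ objectAtIndex:0] retain];
            [workers_ removeObjectAtIndex:0];
        }
        [next invoke];
        [next release];
        [pool drain];
    }
}

- (void)dealloc
{
    [workers_ release];
    [queue_ release];
    [super dealloc];
}
@end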
Good luck.
There are things the NSOperationQueue documentation writer forgot to mention, making such an implementation seem trivial when in fact it's not.
Setting the maximum concurrent operation count to 1 is guaranteed to be serial only if the NSOperations are added to the queue from the same thread.
I'm using another option because it just works.
Add NSOperations from different threads, but use NSCondition to manage the queuing.
startOperations can (and should; you don't want to block the main thread with locks) be called with performSelectorInBackground:withObject:.
The startOperations method represents a single job that consists of one or more NSOperations.
- (void)startOperations
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    [[AppDelegate condition] lock];

    // wait until the queue is empty before enqueuing this job
    while ([[[AppDelegate queue] operations] count] > 0)
    {
        [[AppDelegate condition] wait];
    }

    NSOperation *newOperation = ...;   // alloc/init your first operation here
    [[AppDelegate queue] addOperation:newOperation];
    [[AppDelegate queue] waitUntilAllOperationsAreFinished]; // Don't forget this!

    NSOperation *newOperation1 = ...;  // alloc/init your second operation here
    [[AppDelegate queue] addOperation:newOperation1];
    [[AppDelegate queue] waitUntilAllOperationsAreFinished]; // Don't forget this!

    NSOperation *newOperation2 = ...;  // alloc/init your third operation here
    [[AppDelegate queue] addOperation:newOperation2];
    [[AppDelegate queue] waitUntilAllOperationsAreFinished]; // Don't forget this!

    // Add whatever number of operations you need for this single job

    [[AppDelegate condition] signal];
    [[AppDelegate condition] unlock];

    // notify your delegate, or whatever

    [pool drain];
}
That's it!
If the processing is in the background anyway, do you really need it to be strictly in-order? If you do, you can achieve the same effect simply by setting up your dependencies so 1 depends on 0, 2 on 1, 3 on 2, etc. The operation queue is then forced to handle them in order. Set the maximum concurrent operation count to 1, and the queue is also guaranteed to be serial.
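A hedged sketch of that chained-dependency idea (the operations array here is assumed to hold your NSOperation objects in the order you want them run; memory management is omitted for brevity):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue setMaxConcurrentOperationCount:1];        // also keeps the queue itself serial

NSOperation *previous = nil;
for (NSOperation *op in operations) {
    if (previous) [op addDependency:previous];   // operation n depends on operation n-1
    [queue addOperation:op];
    previous = op;
}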