I'm calling a function on a thread in my project.
[self performSelectorInBackground:@selector(shortVibration) withObject:nil];
It's called in a loop.
I would like for the function to be called on its own thread.
I don't want it to be called while a previous call is still running (this threaded call is in a loop... and it is).
So, I don't want to call my thread function again until the last one has finished executing.
How can I do this?
don't want it to be called at the same time
That suggests a "serial queue". That could be a dispatch queue or an operation queue; either way, a serial queue runs only one task at a time, so a new task won't start until the previous one has finished.
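For example, here is a minimal sketch of the operation-queue flavour, assuming shortVibration is the method from the question and that the queue is created once and stored in a property (the property name is illustrative):

// Created once, e.g. in init
self.vibrationQueue = [[NSOperationQueue alloc] init];
self.vibrationQueue.maxConcurrentOperationCount = 1; // serial: only one vibration runs at a time

// Inside the loop; each block starts only after the previous one has finished
[self.vibrationQueue addOperationWithBlock:^{
    [self shortVibration];
}];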
Or, you can decouple the loop from the repeating vibration and set up a timer to run while your loop progresses which will repeatedly call your vibration routine and then cancel the timer at the end of the loop. You can either use a standard NSTimer and have it dispatch the calls to whatever queue you want, or you can use a GCD timer, which you can schedule on a background queue.
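As a rough sketch of the GCD-timer variant (the 0.5-second interval is an arbitrary assumption, and the timer source must be kept in a strong property so it isn't deallocated):

// Hypothetical property: @property (strong) dispatch_source_t vibrationTimer;
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
self.vibrationTimer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
dispatch_source_set_timer(self.vibrationTimer,
                          dispatch_time(DISPATCH_TIME_NOW, 0),
                          (uint64_t)(0.5 * NSEC_PER_SEC),   // assumed interval between vibrations
                          (uint64_t)(0.05 * NSEC_PER_SEC)); // leeway
dispatch_source_set_event_handler(self.vibrationTimer, ^{
    [self shortVibration];
});
dispatch_resume(self.vibrationTimer);

// ...and at the end of the loop:
dispatch_source_cancel(self.vibrationTimer);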
It depends upon the details of how this vibration routine works and the nature of your loop. We'd need more detail (e.g. describe the broader problem and the nature of this "vibrate" routine) to help you further.
Perhaps you should take a look at NSOperationQueue, which allows you to run work on queues you create yourself. Those queues execute their operations on their own threads.
For example:
NSOperationQueue *backgroundQueue = [[NSOperationQueue alloc] init];
backgroundQueue.maxConcurrentOperationCount = 1;
backgroundQueue.name = @"com.foo.bar";
[backgroundQueue addOperationWithBlock:^{
    // do what you want... here you also have access to properties in your class
}];
With maxConcurrentOperationCount you control how many operations are executed in parallel (a count of 1 makes the queue serial). You can also create your own subclass of NSOperation and execute your code there. Then you add the operation like this: [backgroundQueue addOperation:subclassOfNSOperation].
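A minimal sketch of such a subclass (the class name and the work inside main are hypothetical):

// VibrationOperation.h
@interface VibrationOperation : NSOperation
@end

// VibrationOperation.m
@implementation VibrationOperation
- (void)main {
    if (self.isCancelled) return;
    // do the actual work here
}
@end

// Adding it to the queue:
[backgroundQueue addOperation:[[VibrationOperation alloc] init]];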
I hope this helps you a little. I can't get more detail out of your question to help you further; perhaps post some code.
Related
I'm going to call a function many times within a very short period, and this function will not be executed on the main thread.
For example, I'll call it 5 times in one second. My requirements are:
The 1st call should be executed immediately.
The last (5th) call should be executed; the calls in between can be ignored.
Only one call at a time: the function should never be executing on two threads simultaneously.
How can I do this?
Take a look at NSOperation and NSOperationQueue.
Wrap your function into an NSOperation. You can make the NSOperationQueue serial and call cancelAllOperations whenever you add a new NSOperation to it. Your operations will then be executed on the NSOperationQueue you've created, one at a time, so they won't run simultaneously ;)
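A rough sketch of that idea, assuming a serial queue kept in a property and a doWork method standing in for your function (both names are illustrative):

// Set up once
self.workQueue = [[NSOperationQueue alloc] init];
self.workQueue.maxConcurrentOperationCount = 1; // never two executions at once

// Each time the function is "called":
[self.workQueue cancelAllOperations]; // drop calls that are still waiting in the queue
NSBlockOperation *op = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakOp = op;
[op addExecutionBlock:^{
    if (weakOp.isCancelled) return; // a call that has already started is never interrupted
    [self doWork];
}];
[self.workQueue addOperation:op];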
Here are the Apple docs for concurrency and NSOperation: https://developer.apple.com/library/mac/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationObjects/OperationObjects.html
What is the difference among these three methods
1. sleep(5)
2. dispatch_after(<#dispatch_time_t when#>, <#dispatch_queue_t queue#>, <#^(void)block#>)
3. performSelector:<#(SEL)#> withObject:<#(id)#> afterDelay:<#(NSTimeInterval)#>
For the second method, how do I choose the queue?
Method 1 will pause execution of the current method for 5 seconds, so the following code:
NSLog(#"Before sleep");
sleep(5)
NSLog(#"after sleep");
would have 5 second delay between the two logs.
Method 2 uses grand central dispatch (GCD) to schedule execution of a block of code on a specified queue. This could be the main queue or a background queue - it is up to you to nominate a queue. Execution of the current method will continue immediately with the code after the dispatch_after, so the following code:
NSLog(#"Before dispatch");
dispatch_after(5,dispatch_get_main_queue, ^{
NSLog(#"in dispatch");
}
NSLog(#"after dispatch");
Would print
Before dispatch
after dispatch
and then 5 seconds later
in dispatch
Method 3 would have the same result as method 2, except that it invokes a method (selector) on the current thread using runloop scheduling rather than a block and Grand Central Dispatch.
Method 2 is the most "modern" - using blocks and GCD.
sleep() will block the entire thread and wait for the specified amount of time to pass before continuing. In general, this is a bad idea.
dispatch_after is a good way to dispatch to a different queue, such as the main queue if you'll be updating UI. It also makes more sense when the code to be executed doesn't warrant an entire method of its own.
performSelector:withObject:afterDelay: is good if you've got a method that can run on any thread. I personally prefer it over the other two, even if you need to do work on a specific thread (you can dispatch within that method if necessary). It's cleaner and allows for more testable and readable code.
As far as choosing a queue, I recommend reading up on the Grand Central Dispatch docs.
Depends on what you want to accomplish.
The first method, sleep(), blocks the current thread for the specified amount of time and delays the execution of code.
The other two don't block the thread, as they schedule their actions.
The dispatch_after enqueues a block to be executed after a specified amount of time. After the time has passed it adds the block to the specified queue for execution.
Example of using dispatch_after on the main thread
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(<#delayInSeconds#> * NSEC_PER_SEC)),
dispatch_get_main_queue(), ^{
<#code to be executed after a specified delay#>
});
Instead of using the main queue (dispatch_get_main_queue()), you can also use dispatch_get_global_queue to get a background queue on which to execute the block without blocking the interface.
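For example, roughly (the 5-second delay and the default priority here are arbitrary choices):

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)),
               dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // runs on a background queue after roughly 5 seconds, without blocking the UI
});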
The performSelector:withObject:afterDelay: method registers with the run loop of its current context, and depends on that run loop being run on a regular basis to perform correctly.
You can use it to schedule methods that require up to one argument, which you can pass along with the selector. It will perform the selector on the current thread.
You can also cancel this request by using cancelPreviousPerformRequestsWithTarget:selector:object:
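A small sketch of scheduling and then cancelling such a request (updateLabel: and the argument string are hypothetical):

// Schedule updateLabel: to run on the current thread's run loop after 5 seconds
[self performSelector:@selector(updateLabel:) withObject:@"Hello" afterDelay:5.0];

// Later, before it fires, cancel that specific pending request
[NSObject cancelPreviousPerformRequestsWithTarget:self
                                         selector:@selector(updateLabel:)
                                           object:@"Hello"];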
I'm doing some customization in iOS: I'm subclassing a system class that executes a method asynchronously (presumably with dispatch_async).
Sample code:
- (void)originalAsyncMethod {
    [super originalAsyncMethod];
    dispatch_async(dispatch_get_main_queue(), ^{
        //do something that needs to happen just after originalAsyncMethod finishes executing
    });
}
Is there a way I can make sure my custom code runs AFTER the async super method is executed?
It's unclear to me whether this would be possible based on your question, but if you have direct access to the implementation of super, then this shouldn't be too hard to achieve.
First, assuming that you have access to the super class and that the super implementation also dispatches asynchronously to the main queue, you don't actually have to do anything to get this working as expected. When you use dispatch_get_main_queue(), you're adding your dispatch block to the end of a serial queue on the main thread that is executed in FIFO (first in, first out) order.
The second option is also pretty heavily reliant on having access to the super implementation, as it would require you to manually create your own dispatch queue to execute tasks on. It goes without saying that if you use a serial dispatch queue you get FIFO ordering, just as with dispatch_get_main_queue(), only you wouldn't have to execute on the main thread.
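Roughly, assuming you are able to point both implementations at one shared serial queue (the queue label is illustrative):

// Created once and shared between the super and subclass implementations (assumption)
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);

// The super implementation submits its work first...
dispatch_async(workQueue, ^{
    // super's async work
});

// ...so a block submitted afterwards is guaranteed to run after it (FIFO)
dispatch_async(workQueue, ^{
    // code that must run only after the super method's block has finished
});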
And the last option I can think of wouldn't necessarily require you to modify the super class, but would require you to know the queue on which super was executing (and it still might not work right if it's a global queue). By using dispatch_barrier_async, you could allow your super implementation to execute asynchronously on a concurrent queue, knowing that the subclass dispatch block has also been added to the queue (via the barrier) and will be executed once the super dispatch (and any other previous submissions to the queue) has completed.
Quoting the docs:
A dispatch barrier allows you to create a synchronization point within a concurrent dispatch queue. When it encounters a barrier, a concurrent queue delays the execution of the barrier block (or any further blocks) until all blocks submitted before the barrier finish executing. At that point, the barrier block executes by itself. Upon completion, the queue resumes its normal execution behavior.
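A minimal sketch of that barrier option, assuming you know (and control) the concurrent queue super dispatches onto; note that barriers only behave this way on queues you create yourself, not on the global queues:

dispatch_queue_t queue = dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);

// What super is assumed to do:
dispatch_async(queue, ^{
    // originalAsyncMethod's asynchronous work
});

// Subclass: runs only after everything previously submitted to the queue has finished
dispatch_barrier_async(queue, ^{
    // do something that needs to happen just after the super method's work
});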
This is a two part question. Hope someone could reply with a complete answer.
NSOperations are powerful objects. They can be of two different types: non-concurrent or concurrent.
The first type runs synchronously. You can take advantage of non-concurrent operations by adding them to an NSOperationQueue. The queue creates a thread (or threads) for you, with the result that the operation runs in a concurrent manner. The only caveat concerns the lifecycle of such an operation: when its main method finishes, the operation is removed from the queue. This can be a problem when you deal with async APIs.
Now, what about concurrent operations? From Apple doc
If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the calling thread—you must write additional code to start the operation asynchronously. For example, you might spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start method starts the task and returns immediately and, in all likelihood, before the task is finished.
This is almost clear to me. They run asynchronously, but you must take the appropriate actions to ensure that they do.
What it is not clear to me is the following. Doc says:
Note: In OS X v10.6, operation queues ignore the value returned by isConcurrent and always call the start method of your operation from a separate thread.
What does it really mean? What happens if I add a concurrent operation to an NSOperationQueue?
Then, in this post Concurrent Operations, concurrent operations are used to download some HTTP content by means of NSURLConnection (in its async form). Operations are concurrent and included in a specific queue.
UrlDownloaderOperation * operation = [UrlDownloaderOperation urlDownloaderWithUrlString:url];
[_queue addOperation:operation];
Since NSURLConnection requires a run loop, the author shunts the start method onto the main thread (so I suppose that adding the operation to the queue had spawned a different one). In this manner, the main run loop can invoke the delegate methods implemented in the operation.
- (void)start
{
    if (![NSThread isMainThread])
    {
        [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _isExecuting = YES;
    [self didChangeValueForKey:@"isExecuting"];

    NSURLRequest *request = [NSURLRequest requestWithURL:_url];
    _connection = [[NSURLConnection alloc] initWithRequest:request
                                                  delegate:self];
    if (_connection == nil)
        [self finish];
}

- (BOOL)isConcurrent
{
    return YES;
}
// delegate method here...
My question is the following: is this thread safe? The run loop listens for sources, but the invoked methods are called on a background thread. Am I wrong?
Edit
I've completed some tests on my own based on the code provided by Dave Dribin (see 1). I've noticed, as you wrote, that callbacks of NSURLConnection are called in the main thread.
OK, but now I'm still very confused. I'll try to explain my doubts.
Why wrap, inside a concurrent operation, an async pattern whose callbacks are called on the main thread? Shunting the start method to the main thread makes the callbacks execute on the main thread, but then what about the queues and operations? Where do I take advantage of the threading mechanisms provided by GCD?
Hope this is clear.
This is kind of a long answer, but the short version is that what you're doing is totally fine and thread safe since you've forced the important part of the operation to run on the main thread.
Your first question was, "What happens if I add a concurrent operation in a NSOperationQueue?" As of iOS 4, NSOperationQueue uses GCD behind the scenes. When your operation reaches the top of the queue, it gets submitted to GCD, which manages a pool of private threads that grows and shrinks dynamically as needed. GCD assigns one of these threads to run the start method of your operation, and guarantees this thread will never be the main thread.
When the start method finishes in a concurrent operation, nothing special happens (which is the point). The queue will allow your operation to run forever until you set isFinished to YES and do the proper KVO willChange/didChange calls, regardless of the calling thread. Typically you'd make a method called finish to do that, which it looks like you have.
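As a sketch, such a finish method typically looks like this (the _isExecuting/_isFinished ivars and matching isExecuting/isFinished overrides are assumed to exist elsewhere in the class, following the pattern of the question's start method):

- (void)finish
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];

    _isExecuting = NO;
    _isFinished = YES;

    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}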
All this is fine and well, but there are some caveats involved if you need to observe or manipulate the thread on which your operation is running. The important thing to remember is this: don't mess with threads managed by GCD. You can't guarantee they'll live past the current frame of execution, and you definitely can't guarantee that subsequent delegate calls (i.e., from NSURLConnection) will occur on the same thread. In fact, they probably won't.
In your code sample, you've shunted start off to the main thread so you don't need to worry much about background threads (GCD or otherwise). When you create an NSURLConnection it gets scheduled on the current run loop, and all of its delegate methods will get called on that run loop's thread, meaning that starting the connection on the main thread guarantees its delegate callbacks also happen on the main thread. In this sense it's "thread safe" because almost nothing is actually happening on a background thread besides the start of the operation itself, which may actually be an advantage because GCD can immediately reclaim the thread and use it for something else.
Let's imagine what would happen if you didn't force start to run on the main thread and just used the thread given to you by GCD. A run loop can potentially hang forever if its thread disappears, such as when it gets reclaimed by GCD into its private pool. There are some techniques floating around for keeping the thread alive (such as adding an empty NSPort), but they don't apply to threads created by GCD, only to threads you create yourself and whose lifetime you can guarantee.
The danger here is that under light load you actually can get away with running a run loop on a GCD thread and think everything is fine. Once you start running many parallel operations, especially if you need to cancel them midflight, you'll start to see operations that never complete and never deallocate, leaking memory. If you wanted to be completely safe, you'd need to create your own dedicated NSThread and keep the run loop going forever.
In the real world, it's much easier to do what you're doing and just run the connection on the main thread. Managing the connection consumes very little CPU and in most cases won't interfere with your UI, so there's very little to gain by running the connection completely in the background. The main thread's run loop is always running and you don't need to mess with it.
It is possible, however, to run an NSURLConnection entirely in the background using the dedicated-thread method described above. For an example, check out JXHTTP, in particular the classes JXOperation and JXURLConnectionOperation.
I am working on an iOS app that has a highly asynchronous design. There are circumstances where a single, conceptual "operation" may queue many child blocks that will be both executed asynchronously and receive their responses (calls to remote server) asynchronously. Any one of these child blocks could finish execution in an error state. Should an error occur in any child block, any other child blocks should be cancelled, the error state should be percolated up to the parent, and the parent's error-handling block should be executed.
I am wondering what design patterns and other tips that might be recommended for working within an environment like this?
I am aware of GCD's dispatch_group_async and dispatch_group_wait capabilities. It may be a flaw in this app's design, but I have not had good luck with dispatch_group_async because the group does not seem to be "sticky" to child blocks.
Thanks in advance!
There is a WWDC video (2012) that will probably help you out. It uses a custom NSOperationQueue and places the asynchronous blocks inside NSOperations, so you can keep a handle on the blocks and cancel the remaining queued blocks.
An idea would be to have the error handling of the child blocks call a method, on the main thread, in the class that owns the NSOperationQueue. That class could then cancel the remaining operations appropriately. This way the child blocks only need to know about their own thread and the main thread. Here is a link to the video:
https://developer.apple.com/videos/wwdc/2012/
The video is called "Building Concurrent User Interfaces on iOS". The relevant part is mainly in the second half, but you'll probably want to watch the whole thing as it puts it in context nicely.
EDIT:
If possible, I'd recommend handling the response in an embedded block, which wraps it together nicely and is, I think, what you're after.
//Define an NSBlockOperation, and get a weak reference to it
NSBlockOperation *blockOp = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakBlockOp = blockOp;

//Define the block and add it to the NSOperationQueue. When the view controller is popped
//we can call -[NSOperationQueue cancelAllOperations], which will cancel all pending threaded ops
[blockOp addExecutionBlock:^{
    //Once a block is executing, you need to check the cancelled flag manually, otherwise
    //the operation will not be cancelled. The check is rather pointless in this example, but if the
    //block contained multiple lines of long-running code it would make sense to do this at safe points
    if (![weakBlockOp isCancelled]) {
        //Substitute your code in here; possibly use a *synchronous* fetch (as below) to get
        //what you need. This call blocks the thread until the server response completes,
        //hence the following block is not executed and stays on the queue until then.
        NSData *response = [NSData dataWithContentsOfURL:[NSURL URLWithString:urlString]];

        [operationQueue addOperationWithBlock:^{
            if (response == nil) { //a nil response stands in for the error case here
                dispatch_async(dispatch_get_main_queue(), ^{
                    //Call a selector on the main thread to handle cancelling.
                    //The main thread can then use its handle on the NSOperationQueue
                    //to cancel the rest of the blocks
                });
            }
            else {
                //Continue executing relevant code....
            }
        }];
    }
}];
[operationQueue addOperation:blockOp];
One pattern that I have come across since posting this question was using a semaphore to change what would be an asynchronous operation into a synchronous operation. This has been pretty useful. This blog post covers the concept in greater detail.
http://www.g8production.com/post/76942348764/wait-for-blocks-execution-using-a-dispatch-semaphore
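The basic shape of that pattern, as a sketch (someAsyncCallWithCompletion: stands in for whatever async API you're wrapping, and this must not run on the main thread):

__block NSError *resultError = nil;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

[self someAsyncCallWithCompletion:^(NSError *error) {
    resultError = error;
    dispatch_semaphore_signal(semaphore); // wake the waiting thread
}];

// Block the current (background) thread until the completion block fires
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

if (resultError) {
    // percolate the error up and cancel sibling operations here
}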
There are many ways to achieve async behavior in Cocoa: GCD, NSOperationQueue, performSelector:afterDelay:, creating your own threads. There are appropriate times to use each of these mechanisms. That's too long to discuss here, but something you mentioned in your post needs addressing.
Should an error occur in any child block, any other child blocks should be cancelled, the error state should be percolated up to the parent, and the parent's error-handling block should be executed.
Blocks can't throw errors up the stack. Period.